Risk Management
DOI: 10.1007/978-3-540-78642-9
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from Springer. Violations
are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant protective
laws and regulations and therefore free for general use.
Preface
Risk management has become a critical part of doing business in the twenty-first
century. This book is a collection of material about enterprise risk management, and
the role of risk in decision making. Part I introduces the topic of enterprise risk
management. Part II presents enterprise risk management from perspectives of
finance, accounting, insurance, supply chain operations, and project management.
Technology tools are addressed in Part III, including financial models of risk as
well as accounting aspects, using data envelopment analysis, neural network tools
for credit risk evaluation, and real option analysis applied to information technol-
ogy outsourcing. In Part IV, three chapters present enterprise risk management
experience in China, including banking, chemical plant operations, and information
technology.
Lincoln, USA David L. Olson
Toronto, Canada Desheng Wu
February 2008
Contents
Part I Preliminary
1 Introduction
David L. Olson & Desheng Wu
Part I: Preliminary
Part I of the book is introductory, to include this chapter. It also includes an overview of human decision making and how it deals with risk; that chapter is written by David R. Koenig, Executive Director of the Professional Risk Managers’ International Association (PRMIA).
We published a book focusing on different perspectives of enterprise risk man-
agement.4 That book discussed key perspectives of ERM, to include financial,
accounting, supply chain, information technology, and disaster planning aspects.
There are many others. Part II of this book gives other views of the impact of ERM
in financial and accounting, insurance, supply chain, and project management
fields. Part III presents papers addressing technical tools available to support ERM.
Most of these papers address financial aspects, as is appropriate because finance
and insurance are key to ERM. There also is a chapter addressing the impact of the
Sarbanes–Oxley Act on ERM in the U.S. Part III ends with a chapter addressing
analytic tools for information technology outsourcing analysis. Part IV of the book
includes three chapters related to ERM in China. These include applications in
banking, operations, and information technology.
Part III presents technical tools applicable for a variety of risk management needs.
Chapter 7 presents an historical account of the evolution of mathematics and risk
management over the last twenty years, with focus on current credit market
developments. The tool presented is collateralized fund obligations as a new credit
derivative, applied to dealing with the risk of snow in Montreal.
Chapter 8 addresses the role of stable laws in risk management. After a review of calibration methods for stable laws, Autoregressive Moving Average (ARMA) and Generalized Autoregressive Conditionally Heteroscedastic (GARCH) processes driven by stable noise are studied. Value at Risk computation under several models is discussed.
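As a rough illustration of the kind of computation the chapter discusses (not its actual model or data), the following Python sketch simulates an ARMA(1,1) process driven by alpha-stable innovations and reads off a 99% Value at Risk from the empirical quantile; all parameter values are assumed.

```python
# Minimal sketch: ARMA(1,1) returns with alpha-stable innovations and empirical VaR.
# All parameters (alpha, beta, scale, phi, theta) are illustrative assumptions.
import numpy as np
from scipy.stats import levy_stable

alpha, beta, scale = 1.7, 0.0, 0.01   # stable tail index, skewness, scale
phi, theta = 0.2, 0.1                 # ARMA(1,1) coefficients
n = 100_000

eps = levy_stable.rvs(alpha, beta, loc=0.0, scale=scale, size=n, random_state=0)
r = np.zeros(n)
for t in range(1, n):
    # r_t = phi * r_{t-1} + eps_t + theta * eps_{t-1}
    r[t] = phi * r[t - 1] + eps[t] + theta * eps[t - 1]

# 99% one-period VaR: the loss exceeded in only 1% of simulated periods
var_99 = -np.quantile(r, 0.01)
print(f"Simulated 99% VaR: {var_99:.4f}")
```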
Chapter 9 presents research relative to stable forecasting models in financial
analysis. Hybrid calibration techniques in pricing and risk management are given.
A credit risky market of defaultable bonds with an arbitrary number of factors is considered, more precisely a term structure model using Gaussian random yields.
In such a model the forward rates are driven by infinitely many factors, which leads to hedges closer to practice and more stable calibrations, and allows for more general shapes of the yield curve. Hybrid calibration has two main advantages: on the one hand, it combines the advantages of estimation and classical calibration; on the other hand, it can be used in a market which suffers from scarcity of (liquid) credit derivatives data, as the combination with historical estimation provides high stability. Risk measures are derived using the results from the calibration procedure.
Chapter 10 employs alternate techniques to examine whether passage of the
Sarbanes–Oxley act (SOX) has had positive effects on the efficiency of public
accounting firms. These alternate techniques extend from use of the non-paramet-
ric, “frontier” oriented method of Data Envelopment Analysis (DEA), and include
more traditional regression based approaches using central tendency estimates.
Using data from 58 of the 100 largest accounting firms in the U.S., we find that efficiency increased at high levels of statistical significance and that this result is consistent for all of the different methods – frontier and central tendency – used in this article. We also find that this result is not affected by inclusion or exclusion of the Big 4 firms. All results are found to be robust as well as consistent.
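For readers unfamiliar with DEA, the sketch below shows the core of an input-oriented CCR efficiency calculation solved as a linear program; the inputs and outputs are made-up illustrations, not the accounting-firm data analyzed in the chapter.

```python
# Minimal input-oriented CCR DEA sketch; the data are illustrative, not the chapter's.
import numpy as np
from scipy.optimize import linprog

# rows = decision-making units (firms); X = inputs, Y = outputs
X = np.array([[20.0, 5.0], [30.0, 8.0], [25.0, 6.0], [40.0, 12.0]])
Y = np.array([[100.0], [140.0], [130.0], [150.0]])
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Efficiency score theta for DMU o; variables are [theta, lambda_1..lambda_n]."""
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o, :].reshape(m, 1), X.T]
    # output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o, :]],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```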
Credit risk evaluation and credit default prediction attract a natural interest from
both practitioners and regulators in the financial industry. Chapter 11 reviews vari-
ous quantitative methods in credit risk management. A case study identifying credit risk is demonstrated using two neural network approaches, Backpropagation Neural Networks (BPNN) and Probabilistic Neural Networks (PNN). The results of the empirical application of both methods confirm their validity. BPNN yields a
convincing 54.55% bankruptcy and 100% non-bankruptcy out-of-sample predic-
tion accuracy. PNN produces a 54.55% bankruptcy and 96.52% non-bankruptcy
out-of-sample prediction accuracy. The promising results potentially provide tre-
mendous benefit to the financial sector in the areas of credit approval, loan securi-
tization and loan portfolio management.
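As a rough sketch of this kind of analysis (synthetic data, with scikit-learn's MLPClassifier standing in for the chapter's BPNN; none of the figures above are reproduced), a backpropagation network can be trained and scored class by class as follows.

```python
# Minimal sketch: backpropagation network for bankruptcy prediction on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic firm-level ratios with a minority "bankrupt" class (label 1)
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                    random_state=0))
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Out-of-sample accuracy reported separately for each class, as in the chapter
for label, name in [(1, "bankruptcy"), (0, "non-bankruptcy")]:
    mask = y_test == label
    print(f"{name}: {100 * (pred[mask] == label).mean():.2f}% correct")
```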
Information technology (IT) outsourcing is one of the major issues facing organiza-
tions in today’s rapidly changing business environment. Due to its inherent uncertainty, it is critical for companies to manage and mitigate the high risks associated with IT outsourcing practices, including the task of vendor selection. Chapter 12 explores the
two-stage vendor selection approach in IT outsourcing using real options analysis. In the
first stage, the client engages a vendor for a pilot project and observes the outcome. Using
this observation, the client decides either to continue the project to the second stage based
upon pre-specified terms or to terminate the project. A case example of outsourcing the
development of supply chain management information systems for a logistics firm is also
presented in the paper. Our findings suggest that real options analysis is a viable project
valuation technique for IT outsourcing.
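The essential logic of the two-stage decision can be sketched numerically; the pilot cost, stage-two cost, probabilities, payoffs and discount rate below are illustrative assumptions, not figures from the logistics case.

```python
# Minimal sketch: value of the option to abandon after observing a pilot project.
pilot_cost = 100_000          # stage-1 pilot cost
stage2_cost = 900_000         # cost to continue to full implementation
discount = 0.10               # stage-2 decision taken one year later

# Possible pilot outcomes: (probability, PV of stage-2 benefits if continued)
outcomes = [(0.4, 2_000_000), (0.4, 900_000), (0.2, 300_000)]

# With the option: continue only when observed benefits exceed the stage-2 cost
option_value = sum(p * max(benefit - stage2_cost, 0.0) for p, benefit in outcomes)
npv_with_option = -pilot_cost + option_value / (1 + discount)

# Without the option: commit to stage 2 up front regardless of the pilot outcome
expected_benefit = sum(p * benefit for p, benefit in outcomes)
npv_committed = -pilot_cost + (expected_benefit - stage2_cost) / (1 + discount)

print(f"NPV with the abandonment option: {npv_with_option:,.0f}")
print(f"NPV with up-front commitment:    {npv_committed:,.0f}")
print(f"Value of the option:             {npv_with_option - npv_committed:,.0f}")
```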
Thanks to Authors
This book collects works from many authors throughout the world. We would like
to thank them for their valuable contributions, and hope that this collection provides
value to the growing research community in ERM.
End Notes
1. Dickinson, G. (2001). Enterprise risk management: Its origins and conceptual foundation, The
Geneva Papers on Risk and Insurance 26:3, 360–366.
2. Gates, S. and Nanes, A. (2006). Incorporating strategic risk into enterprise risk management:
A survey of current corporate practice, Journal of Applied Corporate Finance 18:4, 81–90.
3. Walker, L., Shenkir, W.G. and Barton, T.L. (2003). ERM in practice 60:4, 51–55; Baranoff,
E.G. (2004). Risk management: A focus on a more holistic approach three years after
September 11, Journal of Insurance Regulation 22:4, 71–81.
4. Olson, D.L. and Wu, D. (2008). Enterprise Risk Management. World Scientific.
Chapter 2
The Human Reaction to Risk and Opportunity
D.R. Koenig
Introduction
Risk can be defined as the unknown change in the future value of a system. Kloman
defined risk as “a measure of the probable likelihood, consequences and timing of
an event.”1 Slovic and Weber identified four common conceptions of risk:2
● Risk as hazard
° Examples: “Which risks should we rank?” or “Which risks keep you awake
at night?”
● Risk as probability
° Examples: “What is the risk of getting AIDS from an infected needle?” or
“What is the chance that Citigroup defaults in the next 12 months?”
● Risk as consequence
° Examples: “What is the risk of letting your parking meter expire?” (answer:
“Getting a ticket.”) or “What is the risk of not addressing a compliance let-
ter?” (answer: “Regulatory penalties.”)
● Risk as potential adversity or threat
° Examples: “How great is the risk of riding a motorcycle?” or “What is your
exposure to rising jet fuel prices?”
While these conceptions all tend to have a negative tonality to them, the classical definition of “risk” refers to both positive and negative outcomes, which the first two definitions of risk capture.
A risk event, therefore, can be described as the actualization of a risk that alters
the value of a system or enterprise, either increasing or decreasing its present value
by some amount.
Ductile Systems
Recent use of the term risk has been focused on negative outcomes, or loss. In
particular, attention has been highly concentrated on extreme losses and their abil-
ity to disrupt a system or even to cause its collapse. This may well be a function of the preference described as loss avoidance by Kahneman and Tversky, where the negative utility from loss greatly exceeds the positive utility from an equal gain.3
By definition, a ductile system is one that “breaks well” or never allows a risk
event to cause the entire system to collapse.4 A company cares about things that can
break its “system” like the drying-up of liquidity sources or a dramatic negative
change in perception of its products by customers, for example, as such events
could dramatically reduce or eliminate the value of the enterprise. Figure 2.1 below
depicts the path a risk event takes to its full potential. In other words, absent any
intervention, the full change in value of the system that would be realized from the
risk event is 100% of the potential impact of the risk event.
In this figure, the horizontal axis represents steps in time, noting that all risk
events take some amount of time to reach their full potential impact. The vertical
axis is the percent of the full impact that has been realized. All risk events eventu-
ally reach 100% of their potential impact if there is no intervention.
Hundreds of thousands of risk events are likely to be realized in any system and
some very small percentage would, if left unchecked, break the system. In a corpo-
rate setting, these system-breaking events would be those that resulted in losses that
exceed the company’s capital.
Interventions, including enterprise risk management programs and the dissemination of knowledge and risk-awareness, can help make systems more ductile and thus more valuable. If the players in a system are risk-aware, problems are less likely to reach their full potential for damage.
[Fig. 2.1 The path a risk event takes to its full potential: the percent of potential loss realized grows over time, reaching 100% absent any intervention.]

[Fig. 2.2 The path of a risk event in a ductile system: the percent of potential loss realized is arrested before it reaches 100%.]
This is so simply because some element of the
system, by virtue of the risk-awareness, takes an action to stop the problem before it
realizes its full impact. Figure 2.2 depicts the path of a risk event in a ductile system.
In a ductile system, no risk event reaches its full potential impact.
The general notion behind creating a ductile system is that if you can positively alter
the perception of possible future states of value of the system through enterprise risk
management, you can greatly increase the system’s present value. This comes about
through a reduced need for capital (reduced potential loss from a given risk event) and
its associated expense, a greater ability to take business risks (perceived and real
increases in growth) and more benefit from investor perception of the firm.
In classic theories of finance, risk has been used as a theoretical construct assumed
to influence choice.5 Underlying risk-return models in finance (e.g., Markowitz 1952)
is the psychological assumption that greed and fear guide behavior, and that it is the
final balance and trade-off between the fear of adverse consequences (risk) and the hope
for gain (return) that determines our choices, like investing or supply of liquidity.6 How
many units of risk is a person willing to tolerate for one unit of return? The acceptable
ratio of risk to return is the definition of risk attitude in these models.7
In our ductile system, we can easily recognize how a trimming of the possible
negative risk events and a shift right-ward towards higher expected gains from greater
business growth can positively impact value in the Markowitz world (Fig. 2.3).
But, the variance (i.e., the square of the standard deviation of outcomes around
the mean) used in such models is a symmetric measure, meaning the variation
above the mean has equal impact to variation below the mean. Psychological
research indicates that humans care much more about downside variability (i.e., outcomes that are worse than the average) than upside variability.8
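A small numerical sketch makes the distinction concrete: the standard deviation penalizes above-mean and below-mean outcomes alike, while a downside (semi-)deviation counts only outcomes below the mean. The return series is illustrative.

```python
# Minimal sketch: symmetric standard deviation vs. downside deviation.
import numpy as np

returns = np.array([0.08, 0.02, -0.05, 0.12, -0.15, 0.04, 0.07, -0.02])
mean = returns.mean()

std = returns.std(ddof=1)                    # treats up- and downside alike
below = returns[returns < mean] - mean       # only below-mean deviations
semi_dev = np.sqrt((below ** 2).sum() / (len(returns) - 1))

print(f"mean return        : {mean:.4f}")
print(f"standard deviation : {std:.4f}")
print(f"downside deviation : {semi_dev:.4f}")
```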
The asymmetric human perception and attitudes towards risk mean that there is
more that we must understand in terms of the human impact on risk events and
valuation of a system than a standard Markowitz risk-return framework would
suggest, or our enterprise risk management system might not be as effective as it
could be. In other words, the enterprise risk management program will not be as
valuable and some cost/benefit calculations will incorrectly reach the conclusion
that no action is economically justified.
How does understanding the way in which risk events can be amplified matter?
How do transparency and confidence lead to an attenuation of risk events? How
do people psychologically process risk events and why does that matter? These are
just a few of the questions that must be asked about our enterprises and the risks
they face.
In the late 1980s, a framework for understanding how the human response to risk
events could contribute to the final “value” of the impact of a risk event was con-
ceived under the Social Amplification of Risk Framework or SARF.9
The theoretical starting point of the SARF is the belief that unless humans
communicate to each other, the impact of a risk event will be localized or irrel-
evant. In other words, its potential negative impact will be less than if the risk
event is amplified through human communication. Even though this framework
was developed in a setting focused on natural or physical risks, this foundation
is essential to understanding the transmission mechanism that can lead to things
like credit crunches, liquidity crises or dramatic devaluation of a system, firm
or assets.
A key component of the human communication process about risk is portrayed
through various risk signals (images, signs and symbols), which in turn interact
with a wide range of psychological, social, institutional and cultural processes in
ways that either intensify or attenuate perceptions of risk and its manageability
through amplification stations.10 Events may be interpreted as clues regarding the
magnitude of the risk and the adequacy of the risk management process.11
Amplification stations can include social networks, expert communities, institu-
tions, the mass media and government agencies, etc. These individual stations of
amplification are affected by risk heuristics, qualitative aspects of risk, prior atti-
tudes, blame and trust.
In the second stage of the framework, some risk events will produce ripple
effects that may spread beyond the initial impact of the risk event and may even
impact unrelated entities. Consider consumer reaction to the Tylenol poisonings.
Tylenol tampering resulted in more than 125,000 stories in the print media alone
and inflicted losses of more than $1 billion upon the Johnson & Johnson company,
including a damaged image of the product.12 Further, consumer demand and
regulation following this led to the ubiquity of tamper-proof packages (and associ-
ated costs) at completely unrelated firms.
Similarly, the reaction to the events of 9/11 has imposed an enormous cost on all who travel, businesses wishing to hire foreign talent in the United States, or businesses involved in import/export, for example. Other impacts from risk amplification can
include potentially system-breaking events like capital flight as in the Asian cur-
rency crisis of 1997–1998.
This process has been equated to the ripples from dropping a stone into a pond.13 As
the ripples spread outward, there is a first group directly impacted by the risk event, then
it touches the next higher institutional level (a business line, company or agency) and in
extreme cases reaches other parts of the industry or even extra-industry entities.
In 1998, the Asian currency and Russian debt crises had ripple effects that led to
the demise of the hedge-fund Long Term Capital Management (LTCM). This demise,
in turn, was perceived as having the potential to lead to a catastrophic disruption of
the entire global capital markets system and resulted in substantial financial losses
(and gains) for firms that believed they had no exposure to either Asia or Russia and
certainly not to hedge funds. This amplification came through human stations.
In 1992, the same researchers who conceived of SARF evaluated their theory
by reviewing a large database of 128 risk events, primarily physical risks, in the
United States. In their study, they found strong evidence that the social amplifica-
tion of a risk event is as important in determining the full set of risk consequences
as is the direct physical impact of the risk event. Applying this result to internal
risk assessments suggests that it would be easy to greatly underestimate the impact
of a risk event if only first order effects are considered and not the secondary and
tertiary impacts from social amplification or communication and reaction to the
risk event.
Again, considering the Tylenol tampering case, an internal risk assessment of a
scenario that included such an event might result in the risk being limited to the legal liability from the poisonings and perhaps some negative customer impact. However, it is unlikely that any ex-ante analysis would have anticipated the long-term
impact on product packaging and associated costs that were a result of the amplifica-
tion of the story. Or, if the scenario had involved such an event at a competing firm,
the impact might have even been assumed to be positive for the “unaffected” firm.
So, what are the factors that can increase the likelihood of social amplification or
attenuation? How are hazards or risks perceived? It turns out, not surprisingly, that
what people do not understand and what they perceive as having potentially
wide-ranging effects are the things they are most likely to respond to with some kind
of action, e.g., a change in the valuation of a system.
Weber reviewed three approaches to risk perception: axiomatic, socio-cultural and
psychometric.14 Axiomatic measurements focus on the way in which people subjectively
transform objective risk information (e.g., the common credit risk measure Loss
Given Default and the equally common Probability of Default) into how the realiza-
tion of the event will impact them personally (career prospects, for example).
The study of socio-cultural paradigms focuses on the effect of group- and cul-
ture-level variables on risk perception. Some cultures select some risks that require
attention, while others pay little or no attention to these risks at all. Cultural differ-
ences in trust in institutions (corporation, government, market) drive a different
perception of risk.15
Most important, though, is the psychometric paradigm, which has identified people’s emotional reactions to risky situations that affect judgments of the riskiness of events beyond their objective consequences. This paradigm is characterized by
risk dimensions called Dread (perceived lack of control, feelings of dread and per-
ceived catastrophic potential) and risk of the Unknown (the extent to which the risk is
judged to be unobservable, unknown, new or delayed in producing harmful impacts).
Recall that SARF holds that risk events can contain “signal value.” Signal value
might warn of the likelihood of secondary or tertiary effects. The likelihood of a
risk event having high signal value is a function of perceptions of that risk in terms
of the source of the risk and its potential impact. Slovic developed a dread/
knowledge chart represented below, that measures the factors that contribute to
feelings of dread and knowledge.16
In Fig. 2.4, “Dread risk,” captures aspects of the described risks that speed up our
heart rate and make us anxious as we contemplate them: perceived lack of control
over exposure to the risk, with consequences that are catastrophic, and may have
global ramifications or affect future generations.17 “Unknown risk,” refers to the
degree to which exposure to a risk and its consequences are predictable and observable: how much is known about the risk and whether the exposure is easily detected.
Research has shown that the public’s risk perceptions and attitudes are closely
related to the position of a risk within the factor space. Most important is the factor
Dread risk. The higher a risk’s score on this factor, the higher its perceived risk, the
more people want to see its current risks reduced, and the more they want to see
strict regulation employed to achieve the desired reduction in risk.18
In the unknown risk factor space, familiarity with a risk (e.g., acquired by daily
exposure) lowers perceptions of its riskiness.19 In this factor, people are also willing
to accept far greater voluntary risks (risks from smoking or skiing for example) than
involuntary risks (risks from electric power generation for example). We are loath to let others do unto us what we happily do to ourselves.20
From this depiction, we can recognize that both dread and our lack of familiarity
with something will likely amplify the human response to a risk event. In other
words, risks that are in the upper right hand corner of the dread/knowledge chart
are the ones most likely to lead to an amplification effect.
Slovic and Weber use terrorism as an example, noting that the concept of
accidents as signal helps explain our strong response to terrorism.21 Because the risks
associated with terrorism are seen as poorly understood and catastrophic, accidents
anywhere in the world may be seen as omens of disaster everywhere, thus producing
responses that carry immense psychological, socioeconomic, and political impacts.
[Fig. 2.4 The dread/knowledge factor space. The unknown-risk dimension ranges from risks that are observable, known to those exposed, immediate in effect, old, and known to science to risks that are not observable, unknown to those exposed, delayed in effect, new, and unknown to science.]
We might also include the 2007 subprime mortgage crisis as an example of a risk
event being amplified to affect general liquidity being provided to financial service
companies. The Unknown in this case is the extent to which companies are exposed
to subprime default risk and the Dread is that these defaults might affect home
prices, thus affecting consumer spending and thus affecting the general well-being
of banks and other companies.
One implication of the signal concept is that effort and expense beyond that
indicated by a first-order cost-benefit analysis might be warranted to reduce the
possibility of high signal events and that transparency may be undervalued, under-
appreciated or improperly feared.
The examination of risks that face a system should include a qualitative, and
even quantitative assessment of where those risks fall on the dread/knowledge spec-
trum to assess the risk of underestimating their impact through traditional risk
assessment techniques.
We have looked at the way in which people perceive risk in terms of dread and their
knowledge of a risk. But, what about how people process information about a risk
event once it has occurred? How are people likely to react to a risk event? Research
indicates that people process information about risk events in two substantially dif-
ferent manners.22
Risk and uncertainty make us uneasy. We naturally prefer to move further down on
the unknown risk factor chart, making ourselves more comfortable with things that
we may not understand initially. Quantifications are one manner by which we try
to turn subjective risk assessments into objective measures. We attempt to convert
uncertainty, which is not measurable, into risk, which is believed to be
measurable.
psychological aspects to how humans within our systems will respond to incentives
to perform better. In particular, work by Darley notes that rigid or overly quantified
incentive or criterial control systems can create new risks of their own which are
unknown or unexpected to those involved in the system.32
Darley’s Law says that “The more any quantitative performance measure is used
to determine a group or an individual’s rewards and punishments, the more subject
it will be to corruption pressures and the more apt it will be to distort and corrupt the
action patterns and thoughts of the group or individual it is intended to monitor.”
Darley’s Law is a good warning to organizations that employ overly objective
incentive or valuation systems. Humans are quite adept at manipulating rules to
personal benefit. Success in recognizing this and in aligning incentives with behav-
ioral objectives means that incentives must be carefully crafted so that the mix of
measurable and qualitative inputs to the award match the behavior desired from the
individual being incented. We must first understand how humans respond
to incentives and controls before we are able to build structures to match desired
behaviors with compensation.
In 2001 the Risk Management Group (RMG) of the Basel Committee on
Banking Supervision defined operational risk in a causal-based fashion: “the risk of
loss resulting from inadequate or failed internal processes, people and systems…”
Darley describes compensation and incentive programs as being “criterial con-
trol systems.”33 We set criteria for people’s performances, measure, and reward or
punish according to a process or system. The general intent of criterial control sys-
tems is to develop calculations or, in the business vernacular, “metrics” of how
individual contributions have helped the organization to reach corporate goals. By
inference, the corporate goals are metrics like share price, earnings and market
share, expecting that the company will be rewarded by “the market” for making
goals and punished for not doing so. Such systems are designed to pay off those
who make their numbers and punish those who do not.
Incentive systems, simple or complicated, are typically based on objective meas-
ures upon which all parties agree, ex ante. Employers formulate a choice and
employees respond to the potential outcomes perceived and the risks with which
they associate them.
The appeal for the employer of such systems is in the perception that they
provide more predictable budgeting, they may make employees behave more like
owners and they help to retain attractive human capital.
Such systems, though, may inadvertently attract a concentration of a certain type
of human capital. Employees who are averse to subjective systems under which
they perceive less control are more likely to be drawn to highly objective or criterial
control systems. The cause of their preference may be related to a level of trust in
organizations, or something deeper in the personality of the employee. Whatever
the source, the more rigidity there is in a criterial control formula, the more tightly
defined will be the personality attracted to it and the greater the potential impact of
concentrated misalignment.
Prospect Theory research has yielded numerous examples of how the framing of
a choice can greatly alter how that choice is perceived by humans.
Conclusion
Within most organizations the debate about whether an enterprise risk management
function adds value is less contentious than even five years ago. However, there are
still ample situations in which risk management is either not being used, is not well
understood or is undervalued because of a lack of appreciation for the importance
of how humans respond to risk and opportunity and how risk management programs
can be structured to mitigate the risks of such reactions.
In effect, through enterprise risk management, we are attempting to reframe the per-
ceptions, of investors, customers and liquidity providers, of the system to which risk
management is being applied. We are seeking to increase its value by understanding what
risks are perceived to be most important by those most important to our enterprise.
Psychological research being applied in past decades to finance and econom-
ics suggests that many of our traditionally held assumptions about valuation and
utility are not as complete or effective as had been previously assumed. In partic-
ular, traditional models of valuation have not placed enough emphasis on the
perceived impact on value assigned by humans to loss, extreme loss and rare
events. When this increased valuation or loss avoidance is taken into account,
enterprise risk management systems, designed to create ductile systems (corpora-
tions, firms or other), receive greater importance and the cost-benefit decisions
about preemptive risk management initiatives become less subject to error via a
negative decision.
Understanding that risk events need not lead to an amplification of their impacts,
which risk events might spur emotional reactions, how transparency can reduce this
effect via a movement down the unknown risk spectrum and understanding how peo-
ple evaluate prospects can dramatically and positively alter the value of our systems.
The literature on human responses to risk and opportunity, while relatively new,
is quite vast. Only a very small segment of that research has been discussed in this
chapter. Readers are recommended to study the works of Kahneman and Tversky,
Weber, Slovic and Darley in particular. For those interested in a highly concentrated
review of some of the psychological influences on finance theory, see Shiller.38
One final note which serves as a warning is that some of the research has found
evidence of something called single-action bias. This expression was coined by
Weber for the following phenomenon observed in a wide range of contexts.39
Decision-makers are very likely to take one action to reduce the risk that they encoun-
ter but are much less likely to take additional steps that would provide incremental
protection or risk reduction. The single action taken is not necessarily the most
effective one. Regardless of which single action is taken first, decision-makers have
a tendency to refrain from taking further action, presumably because the first action suffices in reducing the feeling of fear or threat. In the absence of a fear or dread response to a risk, purely affect-driven risk management decisions will likely result
in insufficient responsiveness to the risk.40
As the understanding of human behavior advances, so too will the practice of enter-
prise risk management, adding greater value to the systems in which it is practiced.
End Notes
17. Weber, E. (2004). Who’s afraid of poor old age? Risk perception in risk management decisions, in Mitchell, Olivia S. and Utkus, Stephen P. (Eds.), Pension Design and Structure, Oxford University Press.
18. Slovic and Weber. (2002). op cit.
19. Weber. (2004). op cit.
20. Angelova, R. (c. 2000). Risk-Sensitive Decision-Making Examined Within an Evolutionary
Framework, Blagoevgrad, Bulgaria.
21. Slovic and Weber. (2002). op cit.
22. Ibid.
23. Ibid.
24. Ibid.
25. Weber. (2004). op cit.
26. Weber, E. (2006). Experience-based and description-based perceptions of long-term risk:
Why global warming does not scare us (yet), Climatic Change 77, 103–120.
27. Weber. (2004). op cit.
28. Weber. (2006). op cit.
29. Slovic and Weber. (2002). op cit.
30. Weber. (2001). op cit.
31. Kasperson et al. (2003). op cit.
32. Darley, J.M. (2001). The dynamics of authority in organizations and the unintended action
consequences, in Darley, J.M., Messick, D.M., and Tyler, T.R. (Eds.), Social Influences on
Ethical Behavior in Organizations, pp. 37–52, Mahwah, NJ: Lawrence Erlbaum Associates.
33. Darley, J.M. (1994). Gaming, Gundecking, Body Counts, and the Loss of Three British
Cruisers at the Battle of Jutland: The Complex Moral Consequences of Performance
Measurement Systems in Military Settings, Unpublished Speech to Air Force Academy, April
6, 1994.
34. Ibid.
35. Ibid.
36. Koenig. (2004). op cit.
37. Angelova. (2000). op cit.
38. Shiller, R.J. (1999). Human behavior and the efficiency of the financial system, in Taylor, J.B.
and Woodford, M. (Eds.), Handbook of Macroeconomics, Chap. 20, Vol. 1C.
39. Weber, E. (1997). Perception and expectation of climate change: Precondition for economic
and technological adaptation, in Bazerman, M., Messick, D., Tenbrunsel, A., and Wade-Benzoni, K. (Eds.), Psychological Perspectives to Environmental and Ethical Issues in
Management (pp. 314–341). San Francisco, CA: Jossey-Bass.
40. Weber (2004), op cit.
Part II
ERM Perspectives
Chapter 3
Enterprise Risk Management: Financial
and Accounting Perspectives
1. Portfolio risk can never be the simple sum of various individual risk
elements.
2. One has to understand various individual risk elements and their interactions in
order to understand portfolio risk.
3. The key risk, i.e., the most important risk, contributes most to the portfolio risk
or the risk facing the entire organization. Therefore, decision makers should be
most concerned about key risk decisions.
4. Using quantitative approaches to measure risk is very important. For example, a
key financial market risk can broadly be defined as volatility relative to the
capital markets. One measure of this risk is the cost of capital, which can be
measured through models such as the Weighted Average Cost of Capital
(WACC) and Capital Asset Pricing Model (CAPM).2
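A minimal sketch of these two measures is given below; the inputs (risk-free rate, market premium, beta, capital structure, cost of debt, tax rate) are illustrative assumptions, and the formulas are the standard CAPM and after-tax WACC.

```python
# Minimal sketch: CAPM cost of equity and the weighted average cost of capital.
risk_free = 0.04        # risk-free rate (assumed)
market_premium = 0.06   # expected market return minus risk-free rate (assumed)
beta = 1.2              # systematic risk of the firm's equity (assumed)
cost_of_equity = risk_free + beta * market_premium          # CAPM

equity_value = 600.0    # market value of equity
debt_value = 400.0      # market value of debt
cost_of_debt = 0.05
tax_rate = 0.30

total = equity_value + debt_value
wacc = (equity_value / total) * cost_of_equity \
     + (debt_value / total) * cost_of_debt * (1 - tax_rate)

print(f"CAPM cost of equity: {cost_of_equity:.2%}")
print(f"WACC:                {wacc:.2%}")
```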
Typically, the major sources of value loss in financial institutions are identified as:
Market risk is exposure to the uncertain market value of a portfolio, where the underlying economic factors include interest rates, exchange rates, and equity and commodity prices.
Credit risk is the risk that a counterparty may be unable to perform on an
obligation.
Operational risk is the risk of loss resulting from inadequate or failed internal
processes, people and systems, or from external events. The committee indi-
cates that this definition excludes systemic risk, legal risk and reputational
risk.11
During the early part of the 1990s, much of the focus was on techniques for
measuring and managing market risk. As the decade progressed, this shifted to
techniques of measuring and managing credit risk. By the end of the decade, firms
and regulators were increasingly focusing on Operational risk.
A trader holds a portfolio of commodity forwards. She knows what its market
value is today, but she is uncertain as to its market value a week from today. She
faces market risk. The trader employs the derivatives “greeks” to describe and to
characterize the various exposures to fluctuations in financial prices inherent in a
particular position or portfolio of instruments. Such a portfolio of instruments may
include cash instruments, derivatives instruments, borrowing and lending. In this
article, we will introduce two additional techniques for measuring and reporting
risk: Value-at-Risk assessment and scenario analysis.
Market risk is a concern both internally and externally. Internally, managers and traders in the financial service industry need a measure that allows active, efficient
management of the firm’s risk position. Externally, regulators want to be sure a
financial company’s potential for catastrophic net worth loss is accurately measured
and that the company’s economic capital is sufficient to survive such a loss.
Although both managers and regulators want up-to-date measures of risk, they estimate exposure to risks over different time horizons. Bank managers and traders measure market risk on a daily basis, which is very costly and time con-
suming. Thus, bank managers compromise between measurement precision on the
one hand and the cost and timeliness of reporting on the other.
Regulators are concerned with the maximum loss a bank is likely to experi-
ence over a given horizon so that they can set the bank’s required capital (i.e., its
economic net worth) to be greater than the estimated maximum loss and be
almost sure that the bank will not fail over that horizon. As a result, they are con-
cerned with the overall riskiness of a bank and have less concern with the risk of
individual portfolio components.12 The time horizon used in computation is rela-
tively long. For example, under Basel II, capital for market risk is based on a 10-day 99% VaR, while capital for credit risk and operational risk is based on a 1-year 99.9% VaR.
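A minimal sketch of a parametric VaR under a normal approximation, scaled from a one-day to a ten-day horizon by the square root of time, is shown below; the portfolio value and daily volatility are assumed inputs, and the normal approximation is itself a simplification.

```python
# Minimal sketch: parametric (normal) VaR and square-root-of-time horizon scaling.
from math import sqrt
from scipy.stats import norm

portfolio_value = 10_000_000   # current market value (assumed)
daily_sigma = 0.015            # daily return volatility (assumed)
z = norm.ppf(0.99)             # ~2.33 at the 99% confidence level

var_1d = portfolio_value * daily_sigma * z
var_10d = var_1d * sqrt(10)    # 10-day horizon via square-root-of-time scaling

print(f"1-day 99% VaR : {var_1d:,.0f}")
print(f"10-day 99% VaR: {var_10d:,.0f}")
```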
There are two principal approaches to risk measurement: value-at-risk analysis and
scenario analysis.
[Figure: VaR report generation workflow – market data and trading position data are pre-processed and batched, fed to RiskWatch, and used to generate the VaR report.]
Scenario Analysis
Credit risk is defined as the risk of loss due to a debtor’s non-payment of a loan
or other line of credit (either the principal or interest (coupon) or both). Examples
of Credit Risk Factors in the insurance industry are:
● Adequacy of reinsurance program for the risks selected
● Reinsurance failure of the company’s reinsurance program and the impact on
claim recoveries
● Credit deterioration of the company’s reinsurers, intermediaries or other
counterparties
● Credit concentration to a single counterparty or group
● Credit concentration to reinsurers of particular rating grades
● Reinsurance rates increasing
● Bad Debts greater than expected
A financial service firm uses a number of methods, e.g., credit scoring, ratings, and credit committees, to assess the creditworthiness of counterparties (refer to Chap. 10 for details of these methods). This makes it difficult for the firm to integrate this source of risk with market risks. Many financial companies are aware
of the need for parallel treatment of all measurable risks and are doing something
about it.14
If financial companies can “score” loans, they can determine how loan values
change as scores change. Then, a probability distribution of the value changes that these score changes produce over time due to credit risk can be modeled. Finally, the time series of credit risk changes could be related to market risk, enabling market risk and credit risk to be integrated into a single estimate of value change over a
given horizon.
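In a CreditMetrics-like spirit, the idea can be sketched as follows: a loan's rating-migration probabilities and its revalued price in each end state define a discrete distribution of value changes, from which an expected change and a credit VaR can be read. All figures are illustrative assumptions.

```python
# Minimal sketch: from rating-migration probabilities to a value-change distribution.
import numpy as np

current_value = 100.0
# one-year migration probabilities and the loan's revalued price in each end state
states = {"upgrade": (0.05, 101.5), "unchanged": (0.85, 100.0),
          "downgrade": (0.08, 96.0), "default": (0.02, 45.0)}

probs = np.array([p for p, _ in states.values()])
values = np.array([v for _, v in states.values()])
changes = values - current_value

expected_change = probs @ changes
# 99% credit VaR: minus the 1% quantile of the discrete value-change distribution
order = np.argsort(changes)
cum_prob = np.cumsum(probs[order])
var_99 = -changes[order][np.searchsorted(cum_prob, 0.01)]

print(f"Expected value change: {expected_change:.2f}")
print(f"99% credit VaR       : {var_99:.2f}")
```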
“Operational risk is the risk of loss resulting from inadequate or failed internal
processes, people, and systems, or from external events.” The definition includes
people risks, technology and processing risks, physical risks, legal risks, etc., but
excludes reputation risk and strategic risk. The Operational Risk Management
framework should include identification, measurement, monitoring, reporting, con-
trol and mitigation frameworks for Operational Risk. Basel II proposed three alter-
natives to measure operational risks: (1) Basic Indicator, which requires Financial
Institutions to reserve 15% of annual gross income; (2) Standardized Approach,
which is based on annual revenue of each of the broad business lines of the
Financial Institution; and (3) Advanced Measurement Approach (AMA), which is
based on the internally developed risk measurement framework of the bank adhering
to the standards prescribed.
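For the first of these alternatives, a minimal sketch is given below. Under Basel II the Basic Indicator charge is 15% of the average of positive annual gross income over the previous three years, with negative or zero years excluded; the income figures here are illustrative.

```python
# Minimal sketch: Basic Indicator Approach capital charge.
ALPHA = 0.15
gross_income = [120.0, -10.0, 150.0]   # last three years of gross income, in millions

positive_years = [gi for gi in gross_income if gi > 0]   # negative years excluded
bia_capital = ALPHA * sum(positive_years) / len(positive_years)

print(f"Basic Indicator capital charge: {bia_capital:.1f} million")
```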
The following lists the official Basel II defined business lines:
● Corporate finance
● Trading and sales
● Retail banking
● Commercial banking
● Payment and settlement
● Agency services
● Asset management
● Retail brokerage
The following lists the official Basel II defined event types with some examples for
each category:
● Internal Fraud – misappropriation of assets, tax evasion, intentional mismarking
of positions, bribery: Loss due to acts of a type intended to defraud, misappro-
priate property or circumvent regulations, the law or company policy, excluding
diversity/discrimination events, which involves at least one internal party.
● External Fraud – theft of information, hacking damage, third-party theft and
forgery: Losses due to acts of a type intended to defraud, misappropriate prop-
erty or circumvent the law, by a third party.
● Employment Practices and Workplace Safety – discrimination, workers com-
pensation, employee health and safety: Losses arising from acts inconsistent
with employment, health or safety laws or agreements, from payment of per-
sonal injury claims, or from diversity/discrimination events.
● Clients, Products, and Business Practice – market manipulation, antitrust,
improper trade, product defects, fiduciary breaches, account churning: Losses
arising from an unintentional or negligent failure to meet a professional obliga-
tion to specific clients (including fiduciary and suitability requirements), or from
the nature or design of a product.
● Damage to Physical Assets – natural disasters, terrorism, vandalism: Losses aris-
ing from loss or damage to physical assets from natural disaster or other events.
● Business Disruption and Systems Failures – utility disruptions, software fail-
ures, hardware failures: Losses arising from disruption of business or system
failures.
● Execution, Delivery, and Process Management – data entry errors, accounting
errors, failed mandatory reporting, negligent loss of client assets: Losses from
failed transaction processing or process management, from relations with trade
counterparties and vendors.
Financial institutions need to estimate their exposure to each combination of event type and business line. Ideally this will lead to 7 × 8 = 56 VaR measures that can be combined into an overall VaR measure. Other techniques to measure operational risk include scenario analysis, identifying causal relationships, key risk indicators (KRIs), scorecard approaches, etc.
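How the 56 cell-level figures are combined depends on the dependence assumed between cells. The sketch below contrasts two illustrative choices: a simple sum (perfect dependence, the conservative option) and square-root aggregation (independence, exact only under normality). The cell values are made up.

```python
# Minimal sketch: aggregating 7 x 8 cell-level VaR figures under two assumptions.
import numpy as np

rng = np.random.default_rng(1)
cell_var = rng.uniform(1.0, 10.0, size=(7, 8))   # event types x business lines

total_perfect_dependence = cell_var.sum()
total_independence = np.sqrt((cell_var ** 2).sum())

print(f"Simple sum (perfect dependence): {total_perfect_dependence:.1f}")
print(f"Square-root aggregation:         {total_independence:.1f}")
```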
The likely actions of internal auditing were identified. Those risks involving
high risk and strong controls would call for checking that inherent risks were in fact
mitigated by risk response strategies and controls. Risks involving high risk and
weak controls would call for checking for adequacy of management’s action plan
to improve controls. Those risks assessed as low call for internal auditing to review
accuracy of managerial impact evaluation and risk event likelihood.
Implementation Issues
Conclusions
Risks in a financial firm can be quantified and managed using various models.
Models also provide support to organizations seeking to control enterprise risk.
ERM provides tools to integrate enterprise-wide operations and finance functions
and better inform strategic decisions. The promise of ERM lies in allowing manag-
ers to better understand and use their firms’ fundamental relation to uncertainty in
a scientific framework: from each risk, strategy may create opportunity. We have
discussed various risk models and reviewed some common risk measures in financial service companies from the core financial and accounting perspective.
Gupta and Thomson identified problems in implementing COSO.23 Small com-
panies (fewer than 1,000 employees) reported a less favorable impression of
COSO. Complaints in general included vagueness and nonspecificity for auditing.
COSO was viewed as high-level, and thus open to interpretation at the operational
level. This seems to reflect a view by most organizations reflective of Level 1 and
Level 2 in Bowling and Rieger’s framework. Other complaints about COSO have
been published.24 One is that the 1992 framework is not completely appropriate for
2006. The subsequent COSO ERM framework is more current, but some view it as vague and simplistic, providing little implementation guidance.
A number of specific approaches for various steps have been published. Later
studies have indicated that about one half of surveyed organizations have either adopted ERM or are in the process of implementing it, indicating some increase.25
Carnaghan reviewed procedures for business process modeling.25 If such approaches
are utilized, more effective ERM can be obtained through COSO.
End Notes
1. Walker, L., Shenkir, W.G., and Barton, T.L. (2003). ERM in practice 60:4, 51–55.
2. Baranoff, E.G. (2004). Risk management: A focus on a more holistic approach three years
after September 11, Journal of Insurance Regulation, 22:4, 71–81.
3. Sharpe, William F. (1964). Capital asset prices: A theory of market equilibrium under
conditions of risk, Journal of Finance, 19:3, 425–442.
4. Donnellan, M., and Sutcliff, M. (2006). CFO Insights: Delivering High Performance. Wiley,
New York.
5. Levinsohn, A. (2004). How to manage risk – Enterprise-wide, Strategic Finance, 86(5),
55–56.
6. Committee of Sponsoring Organizations of the Treadway Commission (COSO) (2004).
Enterprise risk management – integrated framework. Jersey City, NJ: American Institute of
Certified Public Accountants.
7. Alexander, G.J., and Baptista, A.M. (2004). A comparison of VaR and CVaR constraints on
portfolio selection with the mean-variance model. Management Science 50(9), 1261–1273;
Chavez-Demoulin, V., Embrechts, P., and Nešlehová, J. (2006). Quantitative models for oper-
ational risk: Extremes, dependence and aggregation. Journal of Banking and Finance 30,
2635–2658.
8. Florez-Lopez, R. (2007). Modelling of insurers’ rating determinants. An application of
machine learning techniques and statistical models. European Journal of Operational
Research, 183, 1488–1512.
9. Jacobson, T., Lindé, J., and Roszbach, K. (2006). Internal ratings systems, implied credit risk
and the consistency of banks’ risk classification policies. Journal of Banking and Finance 30,
1899–1926.
10. Elsinger, H., Lehar, A., and Summer, M. (2006). Risk assessment for banking systems.
Management Science 52(9), 1301–1314.
11. Crouhy M., Galai D., and Mark, R. (2000). A comparative analysis of current credit risk
models. Journal of Banking and Finance 24, 59–117; Crouhy M., Galai D., and Mark, R.
(1998). Model Risk. Journal of Financial Engineering 7(3/4), 267–288, reprinted in Model
Risk: Concepts, Calibration and Pricing, (ed. R. Gibson), Risk Book, 2000, 17–31; Crook, J.
N., Edelman, D.B., and Thomas, L.C. (2007). Recent developments in consumer credit risk
assessment. European Journal of Operational Research, 183, 1447–146.
12. Basel Committee on Banking Supervision (June 2004). International Convergence of Capital
Measurement and Capital Standards, Bank for International Settlements.
13. Pritsker, M. (1996). Evaluating value at risk methodologies: accuracy versus computational
time, unpublished working paper, Board of Governors of the Federal Reserve System.
14. Hull, J.C. (2006). Risk Management and Financial Institutions.
15. Morgan, J.P. (1997). CreditMetrics™-technical document.
16. Gupta, P.P., Thomson, J.C. (2006). Use of COSO 1992 in management reporting on internal
control. Strategic Finance 88:3, 27–33.
17. Gramling, A.A., and Myers, P.M. (2006). Internal auditing’s role in ERM. Internal Auditor
63:2, 52–58.
18. Matyjewicz, G., and D’Arcangelo, J.R. (2004). Beyond Sarbanes–Oxley. Internal Auditor
61:5, 67–72.
19. Matyjewicz and D’Arcangelo (2004), op. cit.; Ballou, B., and Heitger, D.L. (2005). A build-
ing-block approach for implementing COSO’s enterprise risk management-integrated frame-
work. Management Accounting Quarterly 6:2, 1–10.
20. Drew, M. (2007). Information risk management and compliance – Expect the unexpected. BT
Technology Journal 25:1, 19–29.
21. Extracted and modified from Bowling, D.M., and Rieger, L.A. (2005). Making sense of
COSO’s new framework for enterprise risk management, Bank Accounting and Finance 18:2,
29–34.
22. Bowling and Rieger (2005). op cit.
23. Gupta and Thomson (2006). op. cit.
24. Quinn, L.R. (2006). COSO at a crossroad, Strategic Finance 88:1, 42–49.
25. Carnaghan, C. (2006). Business process modeling approaches in the context of process level
audit risk assessment: An analysis and comparison. International Journal of Accounting
Information Systems 7:2, 170–204.
Chapter 4
An Empirical Study on Enterprise Risk
Management in Insurance
M. Acharyya
The objective of the research is to study the ERM of insurance companies. In line
with this it is designed to investigate what is happening practically in the insurance
industry at the current time in the name of ERM. The intention is to minimize the
gap between the two communities (i.e., academics and practitioners) in order to
contribute to the literature of risk management.
In recent years ERM has emerged as a topic for discussion in the financial com-
munity, in particular the banking and insurance sectors. Professional organizations have
published research reports on ERM. Consulting firms conducted extensive studies
and surveys on the topic to support their clients. Rating agencies included the ERM
concept in their rating criteria. Regulators focused more on the risk management
capability of the financial organizations. Academics are slowly responding to the
management of risk in a holistic framework following the initiatives of practitioners.
The central idea is to bring the organization close to the market economy. Nevertheless,
everybody is pushing ERM within the scope of their core professional understanding.
The focus of ERM is to manage all risks in a holistic framework whatever the source
and nature. There remains a strong ground of knowledge in managing risk on an iso-
lated basis in several academic disciplines (e.g., economics, finance, psychology,
sociology, etc.). But little has been done to take a holistic approach to risk beyond
disciplinary silos. Moreover, the theoretical understanding of the holistic (i.e., multi-
disciplinary) properties of risk is still unknown. Consequently, there remains a lack
of understanding in terms of a common and interdisciplinary language for ERM.
Risk in Finance
In finance, risky options involve monetary outcomes with explicit probabilities and
they are evaluated in terms of their expected value and their riskiness. The traditional
approach to risk in finance literature is based on a mean-variance framework of port-
folio theory, i.e., selection and diversification.1 The idea of risk in finance is understood
within the scope of systematic (non-diversifiable) risk and unsystematic (diversifiable)
risk.2 It is recognized in finance that systematic risk is positively correlated with the
rate of return.3 In addition, systematic risk is a non-increasing function of a firm’s
growth in terms of earnings.4 Another established concern in finance is default risk and
it is argued that the performance of the firm is linked to the firm’s default risk.5 A large
part of finance literature deals with several techniques of measuring risks of firms’
investment portfolios (e.g., standard deviation, beta, VaR, etc.).6 In addition to the
portfolio theory, the Capital Asset Pricing Model (CAPM) was developed in finance to price risky assets on perfect capital markets.7 Finally, derivative markets grew tre-
mendously with the recognition of option pricing theory.8
Risk in Economics
Risk in Psychology
Risk in Sociology
two central concepts. First, risk and culture24 and second, risk society.25 The
negative consequences of unwanted events (i.e., natural/chemical disasters, food
safety) are the key focus of sociological research on risk. From a sociological
perspective entrepreneurs remain liable for the risk of the society and responsible
to share it in proportion to their respective contributions. Practically, the responsi-
bilities are imposed and actions are monitored by state regulators and supervisors.
Nevertheless, identification of a socially acceptable threshold of risk is a key chal-
lenge of much sociological research on risk.
Different disciplinary views of risk are obvious. Whereas economics and finance study risk by examining the distribution of corporate returns,26 psychology and
sociology interpret risk in terms of its behavioral components. Moreover, econo-
mists focus on the economic (i.e., commercial) value of investments in a risky situ-
ation. In contrast, sociologists argue for the moral value (i.e., sacrifice) of the risk-related activities of the firm.27 In addition, sociologists’ criticism of economists’
concern of risk is that although they rely on risk, time, and preferences while
describing the issues related to risk taking, they often overlook their interrelation-
ships (i.e., narrow perspective). Interestingly, there appears some convergence of
economics and psychology in the literature of economic psychology. The intention
is to include the traditional economic model of individuals’ formal rational action
in the understanding of the way they actually think and behave (i.e., irrationality).
In addition, behavioral finance is seen as a growing discipline with the origin of
economics and psychology. In contrast to the efficient market hypothesis, behavioral finance provides descriptive models of judgment under uncertainty.28 The origin of this convergence was the development of prospect theory,29 which addressed the shortcomings of von Neumann-Morgenstern utility theory in explaining human (irrational) behavior under uncertainty (e.g., arbitrage).
Although the overriding enquiry of these disciplines is the estimation of risk, comparing many types of risk and reducing them to a common metric is their ultimate difficulty. The key conclusion of the above analysis is that there exist overlaps in the disciplinary views of risk, and their interrelations are emerging with the progress of risk research. In particular, the central idea of ERM is to uncover the hidden dependencies of risk beyond disciplinary silos.
The practice of ERM in the insurance industry has been drawn from the author’s PhD
research completed in 2006. The initiatives of four major global European insurers
(hereinafter referred to as “CASES”) were studied for this purpose. Out of these four
insurers one is a reinsurer and the remaining three are primary insurers. They were at
various stages of designing and implementing ERM. A total of fifty-one face-to-face
and telephone interviews were conducted with key personnel of the CASES between the end of 2004 and the beginning of 2006. The comparative analysis (com-
pare-and-contrast) technique was used to analyze the data and they were discussed
with several industry and academic experts for the purpose of validation. Thereafter,
a conceptual model of ERM was developed from the findings of the data.
Findings based on the data are arranged under five dimensions: understanding, evaluation, structure, challenges, and performance of ERM.
Understanding of ERM
It was found that the key distinction in various perceptions of ERM remains
between risk measurement and risk management. Interestingly, tools and processes are found to be complementary; in essence, a tool cannot run without a
process and vice versa. It is found that the people who work with numbers (e.g.,
actuaries, finance people, etc.) are involved in the risk modeling and management
(mostly concerned with the financial and core insurance risks) and tend to believe
ERM is a tool. On the other hand internal auditors, company secretaries, and
operational managers; whose job is related to the human, system and compliance
related issues of risk are more likely to see ERM as a process.
ERM: A Process
Within the understanding of ERM as a process, four key concepts were found: harmonization, standardization, integration, and centralization (in decreasing order of importance). They are linked to the top-down and bottom-up approaches of ERM. It was also found that a single, shared understanding of ERM does not exist within the CASES; rather, ERM is seen as a combination of the four concepts, which often overlap. An understanding of these four concepts, including their linkages, is essential for designing an optimal ERM system.
ERM: A Tool
ERM: An Approach
In contrast to process and tool, ERM is also found to be an approach to managing the entire business from a strategic point of view. Since risk is so deeply rooted in the insurance business, it is difficult to separate risk from the functions of insurance companies. It is argued that a properly designed ERM infrastructure should align risk taking with strategic goals. Put another way, applying an ERM approach to managing the business is found to be central to insurers' value creation. In the study, ERM is viewed as an approach to changing the culture of the organization in both marketing and strategic management, in terms of innovating and pricing products, selecting profitable markets, distributing products, targeting customers and ratings, and thus formulating appropriate corporate strategies. In this holistic approach, various strategic, financial, and operational concerns are integrated so that all risks across the organization are considered.30
As a process, ERM takes an inductive approach, exploring the pitfalls (challenges) of achieving corporate objectives for a broad audience (i.e., stakeholders) and emphasizing moral and ethical issues. In contrast, as a tool, it takes a deductive approach, meeting specific corporate objectives for a selected audience (i.e., shareholders) and concentrating on monetary (financial) outcomes. Clearly, the two approaches are complementary and have overlapping elements.
In the survey, 82% of respondents identified the leadership of the CEO as the key driving force. In addition, Solvency II, Corporate Governance, Leadership of the CRO, and Changing Risk Landscape were rated as leading motivating forces for developing ERM.
The analysis establishes the leadership of the CEO and regulations (Solvency II and corporate governance) as the key driving forces of insurers' ERM.
Leadership
Regulations
Regulations matter to all of the CASES, but they do not necessarily drive their ERM initiatives. In other words, regulation can be seen as a key driving force of ERM for some CASES, whereas for others regulation simply provides guidance to an internal motivation.
In summary, the leadership of the CEO and CRO was found to be a key motivation for ERM within the CASES. However, such leadership was not an isolated issue but was driven by many economic and political factors (e.g., market volatility, competition, globalization). All of these factors effectively push the CEO (and top management) to add more value to the firm in order to remain solvent and beat the competition. In addition, regulation was also found to be a key factor motivating insurers' ERM.
Structure of ERM
The study revealed four key stages (i.e., identification, quantification, assessment,
and implementation), which build the structure of insurers’ ERM. In essence, they
are understood as the core management process of any organizational function.
The ERM design, as seen in the CASES, thus has four common stages: identification, quantification, assessment, and implementation. The first stage involves identification of the risks faced by the organization, not just for purposes of compliance but also for strategic decision making. The
second important stage of ERM involves analysis and quantification of risks. The
third stage of ERM involves assessing what can be done about the risk that is now understood. The key managerial concern is to determine how much risk (i.e., opportunity) the organization is prepared to assume for a given level of loss. The initial analysis assesses the capacity (or ability) of the organization in terms of available resources. This gives insurers an understanding of their capability, which helps them establish their current position and decide where they want to be at a given point in the future. Finally, the fourth stage is the actual implementation and ongoing execution of the ERM process. In a broad sense, then, ERM in the CASES involves these four stages, although each CASE undertakes different specific activities under each stage. In all four stages, organizational structure plays an important role; the following paragraphs discuss its various aspects as seen in the study.
The study revealed a three-lines organizational structure, which distinguishes risk observation, as an independent function, from risk taking; risk taking was found to be a management function. The first line of defence owns and manages risks in accordance with the set guidelines (e.g., the Group Risk Policy). Although
the group CEO holds the overall responsibility for the management of risks faced
by the group, as the owner of risk, the primary responsibility of managing risks
goes to individual business units (or local units). The second line of defence (con-
stituting a part of central office) is often led by the CRO, who acts as risk observer
and facilitator, and is primarily responsible for providing technical (and logistical) sup-
port to the first line of defence. The second line of defence however does not incur
any management responsibility. Consequently, it was not found directly liable for
mismanagement of risks. The third line of defence, often led by a group internal
auditor (who directly reports to the board), provides independent assurance on the
effectiveness of risk management (carried out by the first line of defence) and effi-
ciency of technical support (offered by the second line of defence). Since both the
second and the third lines of defence do not hold any risk management responsibil-
ity (they perform an advisory function), their functions (e.g., operational risk)
sometimes coincide. However, it is found that the objective of these two lines of
defence in relation to operational risk is distinct. On the one hand, the group internal auditors look at operational risks in areas such as non-compliance with the Group Risk Policy. On the other hand, the CRO is keen to develop tools and techniques to manage large-scale operational risks, to monitor the efficiency of those tools, and to provide alternative solutions, where necessary, in association with the relevant technical people. In this sense, the job of the CRO under ERM is the more creative and innovative one.
Challenges of ERM
The challenges of implementing ERM fall into two separate categories, i.e., operational and technical. The former is linked to the process dimension and the latter to the tool dimension discussed earlier.
Operational Challenges
In the survey, 82% of respondents identified the development of a common risk language for communication as the key operational challenge. This was followed by several other factors, e.g., a common culture and risk awareness (i.e., identifying and studying a risk before the event happens). In addition, the accuracy, consistency, and adequacy of data were found to be key challenges.
Discussion
It is important to discuss why the identified issues, e.g., data accuracy, risk communication, risk awareness, a common risk language, and a common risk culture, as derived from the above process, are perceived as the key challenges facing the CASES in implementing ERM.
Technical Challenges
The analysis indicated that the CASES struggle significantly with technical challenges in implementing ERM. In the survey, 71% of respondents ranked the measurement of operational risk as the top technical challenge. This was followed by several other factors, e.g., measuring the correlation of risk among risk types and lines of business, and risk profiling at the corporate level.
It was found that the management of operational risk within ERM in the CASES is particularly concerned with calculating the amount of (economic) capital necessary for solvency requirements. Consequently, the management of operational risk has evolved as a quantitative exercise beyond the traditional focus on operating errors. Recalling the previous discussion, the two dimensions of ERM, i.e., organizational (process) and technical (tool), are complementary. In essence, operational risk arises from both dimensions (i.e., tool and process), but it has a different character in each. Operational risk is not new in the insurance industry, but the study found that measuring operational risk in numerical terms is a new idea. Therefore, conceptualizing and defining operational risk, and identifying a complete list of risk indicators (which may include purchasing inadequate reinsurance, incorrect data, and loss of reputation), is problematic.33 Consequently, the measurement of operational risk is a major technical challenge, although recent regulatory requirements for measuring operational risk have given initial momentum to the insurers' ERM initiatives.
Risk Correlations
The issue of correlation (or dependency) comes with the complexity of quantifying insurers' total risk. In order to combine the different parts of the business, it is important to consider correlations between risks (across risk types and business lines). This matters because the capital charges for risks may not be accurate (they are often higher) if the proper correlations are not considered. Correlation is also an important issue for the diversification of risks. In addition to an appropriate model, the key challenge in calculating correlations is accurate and adequate data.
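To make the diversification point concrete, the sketch below aggregates stand-alone capital charges with a square-root (variance-covariance) formula of the kind commonly used in regulatory and internal models; the risk types, charge amounts, and correlation values are purely hypothetical and are not taken from the CASES.

```python
import numpy as np

# Hypothetical stand-alone capital charges per risk type (in millions).
charges = np.array([120.0, 80.0, 60.0])   # market, credit, operational
labels = ["market", "credit", "operational"]

# Hypothetical correlation matrix between the risk types.
corr = np.array([
    [1.00, 0.50, 0.25],
    [0.50, 1.00, 0.25],
    [0.25, 0.25, 1.00],
])

# A simple sum ignores diversification; the variance-covariance formula
# aggregates as sqrt(c' R c), which is lower whenever correlations are below 1.
simple_sum = charges.sum()
diversified = np.sqrt(charges @ corr @ charges)

print(f"simple sum of charges   : {simple_sum:.1f}")
print(f"correlated aggregate    : {diversified:.1f}")
print(f"diversification benefit : {simple_sum - diversified:.1f}")
```

Overstating the correlations in such a formula inflates the aggregate charge, which is the inaccuracy the paragraph above refers to.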
Performance of ERM
The analysis finds that the CASES do not use any specific framework or technique to evaluate the performance of their ERM. The evaluation of companies' performance by key stakeholders (credit rating agencies, financial analysts, and regulators) is generally used as a crude benchmarking criterion. The analysis also finds that the execution of ERM is complex, time-consuming, and costly. This is because ERM depends on the company's specific business model (retail or wholesale), its culture, the depth of its staff's knowledge in handling risks, and the size of the organization. It is concluded that organizations with less (or more) volatile profit streams have less (or more) structured ERM systems in place. In addition, the effort of reinsurers towards developing ERM is seen to be greater than that of primary insurers.
The analysis suggests that the benefits that managers find in practicing ERM are general in nature. They include improved risk assessment in terms of understanding, identifying, and prioritizing risks. Through risk mapping, management has better knowledge of the critical risks and their potential impact on the company. It is argued that, through ERM, the organization will be better prepared to manage its risks and maximize its opportunities within acquisition, product, and funding programs. In addition, the practice of ERM can provide a common language for describing risks and their potential effects, which can improve general communication. Better knowledge of risk, in particular of emergent risks, can enable management to handle them more efficiently and effectively in terms of quantification and modeling, which may help in the efficient pricing of risk. The development of risk awareness can mitigate the level of risk, thus requiring less capital, which would ultimately reduce the cost of capital. Above all, the practice of ERM may enable insurers to maintain competitive advantage. The research also finds that industry managers apparently do not see any disadvantages arising from ERM, although the centralization (as opposed to harmonization) of risk and capital management within an ERM framework could cause a systemic failure in the future.34
The findings of the study have so far been discussed under five headings, i.e., understanding, evaluation, structure, challenges, and performance of ERM. The following paragraphs develop a model of ERM from these findings. The model comprises several internal risk models designed for the significant risks (i.e., market risk, credit risk, investment risk, insurance risks, operational risk, etc.). The separate models are used, in aggregation, to estimate economic capital for three purposes: compliance with solvency regulations, achieving targeted ratings, and driving the business in the competitive market.
The study found that insurance companies are increasingly using the ERM model as an essential part of making corporate decisions and delivering strategies. One key characteristic of the model is that it treats ERM as both a process and a tool simultaneously.
The study noted two technical aspects of the ERM model: estimation of the probability of default (or failure), and deployment of (economic or risk-adjusted) capital on the basis of this estimation. Governance requirements, however, emerged as distinct from these technical components.
Stage 1: The model theoretically suggests that ERM should consider all risks irrespective of source and nature. Risks captured on an (imaginary) radar screen are separated through a filter into numerically quantifiable and unquantifiable components. The quantifiable risks, which include financial risks (i.e., market (stock, FX, interest rate), core business (insurance), credit (counterparty), and operational (system and human error) risks), are then identified. Thereafter, a risk landscape (risk register or profile) is opened to track the quantifiable risks. Not even all quantifiable risks are considered for the purposes of ERM; rather, a set of large risks, including emergent risks (best described as the unknowns of known risks, e.g., natural catastrophes and human pandemics), is carried forward to the next stage of ERM. The choice of significant risks is a unique exercise for any organization, because corporate objectives and strategies are distinct in the competitive marketplace. A second radar screen always remains in operation to capture new statistical correlations within the portfolio of significant risks.
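As a rough illustration of the Stage 1 filtering, the sketch below splits a hypothetical risk register into quantifiable and unquantifiable risks and carries only the large quantifiable ones forward; the field names and the significance threshold are illustrative assumptions, not elements of the studied models.

```python
# Hypothetical risk register entries; 'exposure' is a rough size estimate.
risk_register = [
    {"name": "equity market fall",    "quantifiable": True,  "exposure": 900},
    {"name": "windstorm catastrophe", "quantifiable": True,  "exposure": 1500},
    {"name": "reputational damage",   "quantifiable": False, "exposure": None},
    {"name": "counterparty default",  "quantifiable": True,  "exposure": 400},
    {"name": "minor IT outage",       "quantifiable": True,  "exposure": 20},
]

SIGNIFICANCE_THRESHOLD = 100  # illustrative cut-off for "large" risks

quantifiable = [r for r in risk_register if r["quantifiable"]]
unquantifiable = [r for r in risk_register if not r["quantifiable"]]

# Only the significant quantifiable risks move on to the Stage 2 modeling.
significant = [r for r in quantifiable if r["exposure"] >= SIGNIFICANCE_THRESHOLD]

print("carried to Stage 2       :", [r["name"] for r in significant])
print("tracked but unquantified :", [r["name"] for r in unquantifiable])
```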
Stage 2: The significant risks are then modeled numerically against a predetermined probability of default (failure) over a certain period of time. In addition, efforts continue to measure the unquantifiable risks as far as possible. Another (imaginary) filter is then used to calculate the total acceptable risk, which is essentially linked to the risk appetite of the firm. Risk appetite is a complex issue, as it includes many subjective factors such as organizational culture, customers' preferences, the market environment, shareholders' expectations, and the organization's past experience. These factors are very specific to the firm and difficult to quantify numerically; in effect, organizations often exhibit inconsistent risk preferences. Ideally, risk appetite should reflect a clear picture of the current level of business risk of the firm. The organization's risk tolerance is then determined numerically on the basis of its risk appetite; in essence, the risk tolerance of a firm drives its corporate strategies. One of the complex tasks in ERM is the aggregation of the various risk models. Several factors drive this complexity, e.g., non-linearity among lines of business, different risk classes, and inconsistent risk measures.35 Indeed, the selection of the tolerance level (i.e., acceptable impacts or confidence level) and the determination of the time horizon depend on the prudent judgment of the insurers.
Stage 3: Various techniques, drawing on both the insurance market and the capital market, are used to transfer and finance the total acceptable risk. A variable (risk-adjusted) amount of capital is then deployed to finance this total acceptable risk. These actions illustrate that the CASES deal with risks by first calculating and then choosing from the available alternative risk-return combinations.36 A third radar screen comes into operation at this stage to observe changes in the total acceptable risk (including potential unexpected losses), and this information is then used to adjust the amount of capital, commonly known as economic capital.37 There always remains a residual risk (= liabilities − economic capital), which insurers always have to carry. At this stage risks are also reduced through additional mitigation measures (e.g., improved controls).
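A minimal sketch of the capital calculation implied by Stage 3, assuming a simulated aggregate loss distribution and a value-at-risk style confidence level; the lognormal distribution, the 99.5% level, and the hypothetical liabilities figure are illustrative assumptions rather than the insurers' actual methods.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical aggregate annual loss distribution for the accepted risks
# (a lognormal is used purely for illustration).
losses = rng.lognormal(mean=4.0, sigma=0.8, size=100_000)

confidence = 0.995                       # illustrative, Solvency II style level
expected_loss = losses.mean()
value_at_risk = float(np.quantile(losses, confidence))

# Economic capital taken here as the unexpected loss at the chosen level.
economic_capital = value_at_risk - expected_loss

# Residual risk in the chapter's sense (= liabilities - economic capital),
# with a purely hypothetical best-estimate liabilities figure.
liabilities = 10_000.0
residual_risk = liabilities - economic_capital

print(f"expected loss    : {expected_loss:10.1f}")
print(f"VaR at {confidence:.1%}    : {value_at_risk:10.1f}")
print(f"economic capital : {economic_capital:10.1f}")
print(f"residual risk    : {residual_risk:10.1f}")
```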
Stage 4: Upon determining the economic capital, the next step is to allocate it across the different risk types and lines of business. The objective is to ensure that each line of business contributes proportionally to the overall cost of capital of the firm.38 Furthermore, determining the size of the economic capital and its breakdown across subsidiaries is problematic because of inconsistencies in regulation across geographical locations. The idea of an economic balance sheet (in contrast to the statutory accounting balance sheet) is to reflect forecasted market volatility in the return, taking the time value of money into account. This in turn is linked to the calculation of shareholder (firm) value at a particular point (or period) in time in order to derive future business strategies.
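The sketch below illustrates one common convention for the Stage 4 allocation, distributing economic capital across lines of business in proportion to each line's covariance with the total loss; the lines, loss distributions, and the covariance rule itself are assumptions chosen for illustration, not the approach of the CASES.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical simulated annual losses for three lines of business.
n = 100_000
lines = {
    "motor":     rng.gamma(shape=2.0, scale=50.0, size=n),
    "property":  rng.lognormal(mean=4.0, sigma=0.6, size=n),
    "liability": rng.gamma(shape=1.5, scale=80.0, size=n),
}

total = sum(lines.values())
economic_capital = float(np.quantile(total, 0.995)) - total.mean()

# Covariance-based allocation: each line's share is proportional to its
# covariance with the total loss, so the shares sum exactly to the capital.
covariances = {name: np.cov(loss, total)[0, 1] for name, loss in lines.items()}
total_cov = sum(covariances.values())
allocation = {name: economic_capital * c / total_cov for name, c in covariances.items()}

for name, cap in allocation.items():
    print(f"{name:10s} allocated capital: {cap:8.1f}")
print(f"{'total':10s} allocated capital: {sum(allocation.values()):8.1f}")
```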
Stage 5: The performance of risk management is then disclosed (reported) to the stakeholders (i.e., shareholders, bondholders, and policyholders). Policyholders and shareholders have different interests in insurers' performance in terms of the economic balance sheet: ideally, policyholders want the organization to operate with the maximum amount of capital, whereas shareholders prefer the opposite. Third parties, i.e., government regulators and rating agencies, play an influential role in monitoring the performance of insurers. Regulators exist to protect the interests of policyholders, and rating agencies provide opinions on the financial strength of organizations, which interest both policyholders and shareholders. The objective of the organization is to comply with the (solvency) regulations and to meet the criteria of the rating agencies in order to achieve or maintain a targeted rating. Finally, the system needs to repeat continually, with necessary adjustments in line with corporate objectives and strategies.
It is important to mention here that the five-stage model is not unique but rather a benchmark for managing insurers' enterprise (i.e., all significant) risks. Indeed, execution could vary at the operational stage from one company to another. For example, risk tolerances may be established in Stage 1 instead of Stage 2 in order to see the potential impact of various risks during the identification phase in line with corporate objectives.
Conclusion
The objective of the research was to study ERM in the insurance industry empirically. Leadership and regulations were found to be the key motivations for ERM in insurance. Moreover, the understanding of ERM is uneven: ERM is understood both as a tool (objective view) and as a process (subjective view). Four key concepts of the process, i.e., centralization, integration, standardization, and harmonization, were identified. In addition, ERM was seen as an approach to managing the business holistically. There appears to be a need for close integration of the process-oriented knowledge of risk (i.e., corporate governance in terms of the fluctuation of performance) with the subject-oriented expertise of ERM (i.e., opportunity). The central idea of the discussion suggests two perspectives of risk management: first, risk in insurers' core business functions (i.e., underwriting, investment, finance), and second, risk arising from the fluctuation of performance while performing those core business functions. The former views risk management as a tool and the latter as a process. At the corporate level, ERM combines both the tool and the process views of risk management and suggests an approach to managing the total risks of the organization in a single framework.
The design and implementation of ERM was found to be inconsistent across the industry, mainly because of different levels of risk appetite. The value of ERM remains a matter of speculation in the absence of concrete evidence. Nevertheless, ERM is an evolving concept, and more research is needed on the topic from a multidisciplinary perspective. In practice, insurers' internal risk models are regarded as part of the Solvency II framework. In principle, thinking widely about the sources of risk and deploying appropriate mitigation tools and strategies will reveal opportunities.
Despite the complexity of integrating the objective and subjective concepts of risk, the study suggests that insurance companies will increasingly use ERM systems to support their future growth opportunities (in line with corporate objectives) while maintaining a targeted level of capital. The central idea of virtually all functions within ERM is to secure maximum profit (i.e., shareholder value) at the minimum (i.e., lowest) level of risk. However, incorporating the benefits of business mix and geographical diversification into the ERM model will remain an ongoing debate between organizations, regulators, and rating agencies.
Finally, the evolution of ERM is part of firms' initiative towards establishing a market-oriented organizational culture for generating, disseminating, and responding appropriately to market requirements. The challenge, however, is to strengthen the link between the demands of the market (i.e., external requirements) and the competency of the organization (i.e., internal requirements). Risk (i.e., volatility) is the key component of this complex link, and ERM has evolved to minimize the total risk of the firm. Consequently, ERM is a value-adding function. In particular, it is important to remember that, like any other process or system, ERM, however robust, cannot always guarantee the efficient and effective management of the risks of the organization. Success ultimately depends on the dedication and attitude of users (at both individual and group levels) towards identifying and managing risks in their everyday functions in the best interests of their organizations.
End Notes
7. Sharpe, W.F. (1964). Capital asset prices: A theory of market equilibrium under conditions of
risk, The Journal of Finance 19:3, 425–442; Lintner, J. (1965). The valuation of risk assets
and the selection of risky investments in stock portfolios and capital budgets, The Review of
Economics and Statistics 47:1, 13–37; Mossin, J. (1966). Equilibrium in a capital asset mar-
ket, Econometrica 34:4, 768–783.
8. Black, F., and Scholes, M. (1972). The valuation of option contracts and a test of market effi-
ciency, The Journal of Finance 27:2, 399–417; Black, F., and Scholes, M. (1973). The pricing
of options and corporate liabilities, The Journal of Political Economy 81:3, 637–654.
9. Eeckhoudt, L., Gollier, C., and Schlesinger, H. (1996). Changes in background risk and risk
taking behavior, Econometrica 64:3, 683–689.
10. Neumann, J., and Morgenstern, O. (1944). Theory of Games and Economic Behaviour. 2nd
edn., Princeton University Press, New Jersey.
11. Friedman, M., and Savage, L.J. (1948). The utility analysis of choices involving risk, The
Journal of Political Economy 56:4, 279–304.
12. Kahneman, D., and Tversky, A. (1979). Prospect theory: An analysis of decision under risk,
Econometrica 47:2, 263–292.
13. Kimball, M.S. (1993). Standard risk aversion, Econometrica 61:3, 589–611.
14. Rabin, M. (2000). Risk aversion and expected-utility theory: A calibration theorem,
Econometrica 68:5, 1281–1292.
15. Shiller, R.J. (2003). From efficient markets theory to behavioral finance, The Journal of
Economic Perspectives 17:1, 83–104.
16. Tversky, A., and Kahneman, D. (1991). Loss aversion in riskless choice: A reference-depend-
ent model, The Quarterly Journal of Economics 106:4, 1039–1061.
17. Willett, A. (1951). The Economic Theory of Risk and Insurance, Columbia University Press,
Philadelphia, Pennsylvania.
18. March, J.G., and Shapira, Z. (1987). Managerial perspectives on risk and risk taking,
Management Science 33:11, 1404–1418; Loewenstein, G.F., Weber, E.U., Welch, N., and
Hsee, C.K. (2001). Risk as feelings. Psychological Bulletin 127:2, 267–286.
19. Rippl, S. (2002). Cultural theory and risk perception: A proposal for a better measurement,
Journal of Risk Research 5:2, 147–165.
20. Weber, E.U., Blais, A.-R., Betz, N.E. (2002). A domain-specific risk-attitude scale: measuring
risk perceptions and risk behaviors, Journal of Behavioral Decision Making 15:4,
263–290.
21. Slovic, P, Finucane, M.L., Peters, E., and MacGregor, D.G. (2004). Risk as analysis and risk
as feelings: Some thoughts about affect, reason, risk, and rationality, Risk Analysis 24:2,
311–322.
22. Peter, T.-G., and Zinn, J.O. (2006). Current directions in risk research: New developments in
psychology and sociology, Risk Analysis 26:2, 397–411.
23. Tierney, K.J. (1999). Toward a critical sociology of risk, Sociological Forum 14:2, 215–242.
24. Douglas, M., and Wildavsky, A. (1982). Risk and Culture: An Essay on the Selection of
Technical and Environmental Dangers. University of California Press, Berkeley; Lash, S.
(2000). Risk culture. In: Adam, B., Beck, U., and Loon, J.V. (eds.). The Risk Society and
Beyond: Critical Issues for Social Theory. Sage, London: 47–62.
25. Beck, U. (1992). Risk Society: Towards a New Modernity. Sage, London.
26. Fisher, I.N., and Hall, G.R. (1969). Risk and corporate rates of return, The Quarterly Journal
of Economics 83:1, 79–92.
27. Perry, R.B. (1916). Economic value and moral value, The Quarterly Journal of Economics
30:3, 443–485.
28. Shiller, R.J. (2003). The New Financial Order: Risk in the 21st Century. Princeton University
Press, New York.
29. Kahneman and Tversky (1979), op cit.
30. Olson, D.L., and Wu, D. (2008). Enterprise Risk Management. World Scientific, Hackensack,
NJ.
31. Avery, G.C. (2003). Understanding Leadership: Paradigms and Cases. Sage, London.
One view of a supply chain risk management process includes steps for risk identi-
fication, risk assessment, risk avoidance, and risk mitigation.4 These structures for
handling risk are compatible with Tang’s list given above, but focus on the broader
aspects of the process.
Risk Identification
Risks in supply chains can include operational risks and disruptions. Operational
risks involve inherent uncertainties for supply chain elements such as customer
demand, supply, and cost. Disruption risks come from disasters (natural in the
form of floods, hurricanes, etc.; man-made in the form of terrorist attacks or wars)
and from economic crises (currency revaluations, strikes, shifting market prices).
Risk Assessment
Theoretically, risk has been viewed as applying to those cases where odds are
known, and uncertainty to those cases where odds are not known. Risk is a prefera-
ble basis for decision making, but life often presents decision makers with cases of
uncertainty. The issue is further complicated in that perfectly rational decision mak-
ers may have radically different approaches to risk. Qualitative risk management
depends a great deal on managerial attitude towards risk. Different rational individuals are likely to have different responses to risk; risk avoidance usually is inversely related to return, leading to a tradeoff decision. Research into cogni-
tive psychology has found that managers are often insensitive to probability esti-
mates of possible outcomes, and tend to ignore possible events that they consider
to be unlikely.5 Furthermore, managers tend to pay little attention to uncertainty
involved with positive outcomes.6 They tend to focus on critical performance tar-
gets, which makes their response to risk contingent upon context.7 Some approaches
to theoretical decision making prefer objective treatment of risk through quantita-
tive scientific measures following normative ideas of how humans should make
decisions. Business involves an untheoretical construct, however, with high levels
of uncertainty (data not available) and consideration of multiple (often conflicting)
factors, making qualitative approaches based upon perceived managerial risk more
appropriate.
Because accurate measures of factors such as probability are often lacking,
robust strategies (more likely to enable effective response under a wide range of
circumstances) are often attractive to risk managers. Strategies are efficient if they
enable a firm to deal with operational risks efficiently regardless of major disrup-
tions. Strategies are resilient if they enable a firm to keep operating despite major
disruptions. Supply chain risk can arise from many sources, including the
following:8
● Political events
● Product availability
● Distance from source
● Industry capacity
● Demand fluctuation
● Changes in technology
● Changes in labor markets
● Financial instability
● Management turnover
Risk Avoidance
The oldest form of risk avoidance is probably insurance, purchasing some level of
financial security from an underwriter. This focuses on the financial aspects of risk,
and is reactive, providing some recovery after a negative experience. Insurance is not the only form of risk management used in supply chains. Delta Airlines' insurance premiums for terrorism increased from $2 million in 2001 to $152 million in 2002.9 Insurance focuses on financial risks; other major risks include the loss of customers due to supply chain disruption.
Supply chain risks can be buffered by a variety of methods. Purchasing is usu-
ally assigned the responsibility of controlling costs and assuring continuity of sup-
ply. Buffers in the form of inventories exist to provide some risk reduction, at a cost
of higher inventory holding cost. Giunipero and Aly Eltantawy compared traditional practices with newer risk management approaches.10 The traditional practice, relying upon extra inventory, multiple suppliers, expediting, and frequent supplier changes, suffered from high transaction costs, long purchase-fulfillment cycle times, and expensive rush orders. Risk management approaches, drawing upon practices such as supply chain alliances, e-procurement, just-in-time delivery, increased coordination, and other techniques, provide more visibility in supply chain operations. There may be higher prices for goods and increased security issues,
but methods have been developed to provide sound electronic business security.
Risk Mitigation
Tang provided four basic risk mitigation approaches for supply chains.11 These focus
on the sources of risk: management of uncertainty with respect to supply, to demand,
to product management, and information management. Furthermore, there are both
strategic and tactical aspects involved. Strategically, network design can enable better
control of supply risks. Strategies such as product pricing and rollovers can control
demand to a degree. Greater product variety can strategically protect against product
risks. And systems providing greater information visibility across supply chain mem-
bers can enable better coping with risks. Tactical decisions include supplier selection
and order allocation (including contractual arrangements); demand control over time,
markets, and products; product promotion; and information sharing, vendor managed
inventory systems, and collaborative planning, forecasting, and replenishment.
Supply Management
A variety of supplier relationships are possible, varying the degree of linkage between
vendor and core organizations. Different types of contracts and information exchange
are possible, and different schemes for pricing and coordinating schedules.
Operational risks in supply chain order allocation include uncertainties in demands, supply yields, lead times, and costs. Thus not only do specific suppliers need to be selected, but the quantities purchased from them also need to be determined on a recurring basis.
Supply chains provide many valuable benefits to their members, but also create
problems of coordination that manifest themselves in the “bullwhip” effect.14
Information system coordination can reduce some of the negative manifestations of
the bullwhip effect, but there still remains the issue of profit sharing. Decisions that
are optimal for one supply chain member often have negative impacts on the total profitability of the entire supply chain.15
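A small simulation can make the bullwhip effect tangible. The sketch below passes customer demand through two echelons that each forecast with a moving average and follow an order-up-to policy, then prints how order variance grows upstream; the policy, parameters, and demand process are illustrative assumptions, not a model of any particular supply chain.

```python
import numpy as np

rng = np.random.default_rng(0)
periods, lead_time, window = 2000, 2, 5
demand = rng.normal(100, 10, periods)

def order_up_to(incoming, lead_time, window):
    """Forecast incoming orders with a moving average and place order-up-to
    orders; returns the order stream this echelon sends to its supplier."""
    orders = np.zeros_like(incoming)
    prev_target = incoming[0] * (lead_time + 1)   # initialize to avoid a start-up spike
    for t in range(len(incoming)):
        history = incoming[max(0, t - window + 1): t + 1]
        forecast = history.mean()
        target = forecast * (lead_time + 1)       # order-up-to level
        orders[t] = max(0.0, incoming[t] + target - prev_target)
        prev_target = target
    return orders

retail_orders = order_up_to(demand, lead_time, window)
wholesale_orders = order_up_to(retail_orders, lead_time, window)

print("variance of customer demand  :", round(float(demand.var()), 1))
print("variance of retailer orders  :", round(float(retail_orders.var()), 1))
print("variance of wholesaler orders:", round(float(wholesale_orders.var()), 1))
```

The amplification of order variance from echelon to echelon is exactly the distortion that information sharing and coordination aim to dampen.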
Demand Management
Demand management involves shifting demand over time, across markets, or across products; it is of course one of the aims of advertising and other promotional activities. However, demand has long been noted as one of the most difficult things to predict over time.
Product Management
Golda and Philippi20 considered technical and business risk components of the sup-
ply chain. Technical risks relate to science and engineering, and deal with the
uncertainties of research output. Business risks relate to markets, human responses
to products and/or related services. At Intel, three risk mitigation strategies were
considered to deal with the risks associated with new technologies:
1. Partnerships, with associated decisions involving who to partner with, and at
what stage of product development
Outsourcing Risks
Other risks are related to partner selection, focusing specifically on the additional
risks associated with international trade. Risks in outsourcing can include:22
● Cost – unforeseen vendor selection, transition, or management
● Lead time – delay in production start-up, manufacturing process, or transportation
● Quality – minor or major finishing defects, component fitting, or structural
defects
Outsourcing has become endemic in the United States, especially information
technology to India and production to China.23 Risk factors include:
● Ability to retain control
● Potential for degradation of critical capability
● Risk of dependency
● Pooling risk (proprietary information, clients competing among themselves)
● Risk of hidden costs
Ecological Risks
Options
There are various levels of outsourcing that can be adopted. These range from simply outsourcing particular tasks (much like the idea of service-oriented architecture), through co-managing services with partners and hiring partners to manage services, to full outsourcing (in a contractual relationship). We will use these four outsourcing relationships, plus the fifth option of doing everything in-house, as our options.
Criteria
The next step of the SMART method is to score alternatives. This is an expression
by the decision maker (or associated experts) of how well each alternative performs
on each criterion. Scores range from 1.0 (ideal performance) to 0 (absolute worst
performance imaginable). This approach makes the scores independent of scale,
and independent of weight. Demonstration is given in Table 5.3:
Once weights and scores are obtained, value functions for each alternative are sim-
ply the sum products of weights times scores for each alternative. The closer to 1.0
(the maximum value function), the better. Table 5.4 shows value scores for the five
alternatives:
The outcome here is that in-house operations best satisfy the preference function of
the decision maker. Obviously, different weights and scores will yield different
outcomes. But the method enables decision makers to apply a sound but simple
analysis to aid their decision making.
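A minimal sketch of the SMART calculation described above, computing each alternative's value as the weighted sum of its scores; the criteria, weights, and scores below are hypothetical placeholders rather than the actual entries of Tables 5.3 and 5.4.

```python
# Hypothetical criteria weights (normalized to sum to 1) and alternative scores
# on a 0 (worst imaginable) to 1 (ideal) scale.
weights = {"cost": 0.35, "control": 0.30, "quality": 0.20, "flexibility": 0.15}

scores = {
    "in-house":         {"cost": 0.5, "control": 1.0, "quality": 0.9, "flexibility": 0.6},
    "task outsourcing": {"cost": 0.7, "control": 0.7, "quality": 0.7, "flexibility": 0.7},
    "co-managed":       {"cost": 0.6, "control": 0.6, "quality": 0.8, "flexibility": 0.8},
    "managed services": {"cost": 0.8, "control": 0.4, "quality": 0.6, "flexibility": 0.7},
    "full outsourcing": {"cost": 0.9, "control": 0.2, "quality": 0.5, "flexibility": 0.5},
}

# SMART value function: the weighted sum of scores for each alternative.
values = {alt: sum(weights[c] * s[c] for c in weights) for alt, s in scores.items()}

for alt, v in sorted(values.items(), key=lambda kv: -kv[1]):
    print(f"{alt:17s} value = {v:.3f}")
```

With these illustrative numbers the in-house option ranks highest, mirroring the outcome reported in the text; changing the weights or scores naturally changes the ranking.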
Conclusions
Supply chains have become important elements in the conduct of global business.
There are too many efficiency factors available from global linkages to avoid. We
all gain from allowing broader participation by those with relative advantages.
Alliances can serve as safety nets by providing alternative sources, routes, or products for their members. Risk exposure within supply chains can be reduced by reduc-
ing lead times. A common means of accomplishing lead time reduction is by
collocation of suppliers at producer facilities.
This chapter has discussed some of the many risks associated with supply
chains. A rational process of dealing with these risks includes assessment of what
can go wrong, quantitative measurement to the degree possible of risk likelihood
and severity, qualitative planning to cover a broader set of important criteria, and
contingency planning. A wide variety of available supply chain risk-reduction strat-
egies were reviewed, with cases of real application.
While no supply chain network can expect to anticipate all future disruptions, it can set in place a process to reduce exposure and impact. Preplanned response
is expected to provide better organizational response in keeping with organizational
objectives.
End Notes
1. Ritchie, B., and Brindly, C. (2007). Supply chain risk management and performance: A guid-
ing framework for future development, International Journal of Operations and Production
Management 27:3, 303–322.
2. Mentzer, J.T, Dewitt, W., Keebler, J.S., Min, S., Nix, N.W., Smith, C.D., and Zacharia, Z.G.
(2001). Supply Chain Management. Thousand Oaks, CA: Sage.
3. Tang, C.S. (2006). Perspectives in supply chain risk management, International Journal of
Production Economics 103, 451–488.
4. Chapman, P., Christopher, M., Juttner, U., Peck, H., and Wilding, R. (2002). Identifying and
managing supply chain vulnerability, Logistics and Transportation Focus 4:4, 59–64.
5. Kunreuther, H. (1976). Limited knowledge and insurance protection, Public Policy 24,
227–261.
6. MacCrimmon, K.R., and Wehrung, D.A. (1986). Taking Risks: The Management of
Uncertainty. New York: Free Press.
7. March, J., and Shapira, Z. (1987). Managerial perspectives on risk and risk taking,
Management Science 33, 1404–1418.
8. Giunipero, L.C., and Aly Eltantawy, R. (2004). Securing the upstream supply chain: A risk
management approach, International Journal of Physical Distribution and Logistics
Management 34:9, 698–713.
9. Rice, B., and Caniato, F. (2003). Supply chain response to terrorism: Creating resilient and
secure supply chains, Supply Chain Response to Terrorism Project Interim Report. Cambridge,
MA: MIT Center for Transportation and Logistics.
10. Giunipero and Aly Eltantawy. (2004). op cit.
11. Tang (2006), op cit.
12. Dickson, G.W. (1966). An analysis of vendor selection systems and decisions, Journal of
Purchasing 2, 5–17.
13. Moskowitz, H., Tang, J., and Lam, P. (2000). Distribution of aggregate utility using stochastic
elements of additive multiattribute utility models, Decision Sciences 31, 327–360.
14. Sterman, J.D. (1989). Modeling managerial behavior: Misperceptions of feedback in a
dynamic decision making experiment, Management Science 35, 321–339.
15. Bresnahan, T.F., and Reiss, P.C. (1985). Dealer and manufacturer margins, Rand Journal of
Economics 16, 253–268.
16. Carr, S., and Lovejoy, W. (2000). The inverse newsvendor problem: Choosing an optimal
demand portfolio for capacitated resources, Management Science 47, 912–927.
17. Van Mieghem, J., and Dada, M. (2001). Price versus production postponement: Capacity and
competition, Management Science 45, 1631–1649.
18. Tang (2006), op cit.
19. Hendricks, K., and Singhal, V. (2005). An empirical analysis of the effect of supply chain dis-
ruptions on long-run stock price performance and equity risk of the firm, Production and
Operations Management 25–53.
20. Golda, J., Philippi, C. (2007). Managing new technology risk in the supply chain. Intel
Technology Journal 11:2, 95–104.
21. Dickson, G.W. (1966). op cit.; Weber, C.A., Current, J.R., and Benton, W.C. (1991). Vendor
selection criteria and methods, European Journal of Operational Research, 50, 2–18; Moskowitz,
H., et al. (2000). op cit.
The state of the art in the risk management process (RMP) has relied primarily on two main phases: (a) risk assessment and (b) risk response. Most studies place significant emphasis on risk assessment, while the subject of risk response has received only limited study. The main objective of this research is therefore to emphasize the indispensable shift of perspective now required towards a more "equilibrant" RMP that balances risk assessment and risk response. Based on this view, this chapter proposes a two-polar generic RMP framework for projects and introduces some new elements. It is concluded that the two-polar perspective proposed here can be used to manage project risks in a more effective and productive manner in real-world problems.
We need to manage the risks related to our projects. The need for project risk management has been widely recognized,1 but it is generally overlooked, from concept to completion. "Sadly, many organizations do not know much about risk management and do not even attempt to practice it."2 Project risk management has been defined as the art and science of identifying, assessing, and responding to project risk throughout the life of a project and in the best interests of its objectives.3
The main objective of this chapter is to emphasize the indispensable shift of perspective now required towards a more "equilibrant" RMP. For this purpose, after reviewing some RMPs in the state of the art, the chapter introduces the concept of "equilibrium" in RMP and proposes a two-polar generic framework for RMP.
Many studies have introduced risk management processes (RMPs), but more work is needed. Most studies proposing RMPs applied in the project environment belong to one of the contexts given below:
● Project management context
● Civil engineering context
There has been some discussion about the relative importance of the different phases of RMP. The assumption is that all phases support the overall goal of improving project performance equally, but in different ways.21 We define the critical success factor of "equilibrium" as due attention to all phases of the RMP, each of which is important in turn. There is a consensus that the RMP must comprise two main phases.22 The first phase is risk assessment, including risk identification and risk analysis, which is analytical in nature. The second phase is risk response, which is synthetic. The critical success factor of "equilibrium" expresses that the initial phases of RMP play a fundamental role while the later phases play a role throughout. Focusing on one and ignoring the other misleads the RMP. Indeed, one can regard risk assessment and risk response as the two poles of RMP, in which risk assessment is a decision-making tool and risk response is the decision made and put into practice. It should be noted that ignoring the concept of "equilibrium" causes problems in the design and/or implementation of RMP. One of the biggest problems with many RMPs is that one or more process steps are missing, weakly implemented, or out of order. "All RMP steps are equally important. If you do not do one or more steps, or you do them poorly, you will likely have an ineffective RMP."23
The primary phase in RMP is risk assessment, so any faults or defects in this phase extend to and accumulate in the subsequent phases; effective RMP therefore begins with effective risk assessment.24 In other words, one cannot manage risks if one does not characterize them well enough to know what they are, how likely they are, and what their impact might be.25 On the other hand, one can consider that the risk response phase plays a role throughout the RMP. Kliem and Ludin maintained that good risk management requires good decision-making.26 Some investigators assert that risk response is more important than risk assessment. They believe that it is risk response that really leads the RMP toward its final results. Hillson stated, "Identification and assessment will be worthless unless responses can be developed and implemented which really make a difference in addressing identified risks."27 Fisher also stated that all of the risk management activities are meaningless if they do not produce information based on which the decision maker makes decisions for the benefit of the program.28 Williams asserted that the purpose of risk analysis is always to provide input for an underlying decision problem.29
A Significant Gap
In the traditional view, the initial phases of RMP are more significant because they are more fundamental. Based on this view, Elkjaer and Felding stated, "If risks are not identified, they cannot be managed, thus giving greatest weight to the risk identification phase."30 This view has directed most risk management research toward risk assessment, which has created a significant gap in the literature. Undoubtedly, the main current gap in RMP research is in the subject of risk response. Many researchers stress this gap, as the following statements confirm:
– “Yet risk response development is perhaps the weakest part of RMP, and it is
here that many organizations fail to gain the full benefits of RMP.”31
– “Although there is wide agreement that the development of risk response plans
is an important element of project risk management, few solutions have been
proposed and there are no widely accepted processes, models or tools to support
the cost-effective selection of risk responses.”32
– “Risk response planning is far more likely to be inadequately dealt with, or
overlooked entirely, in the management of project risk.”33
– “A few specific tools have been suggested in the literature for determining risk
responses.”34
– “There are several systematic tools and techniques available to be promptly used
in risk identification; several quantitative and qualitative techniques also are
available for risk analysis; but, in risk response process, less systematic and
well-developed frameworks have been provided.”35
The above statements emphasize that existing RMPs are directed toward risk assessment and neglect risk response. Table 6.2, introduced by Pipattanapiwong, supports these statements.
Regarding the critical success factor of "equilibrium" in RMP, the two-polar perspective expresses that RMP has two main equivalent poles, or columns: risk and response. Here, we propose a two-polar RMP that is compatible with the project environment. This RMP commences with a "RMP start up" box and finishes with a "RMP shut down" box. Table 6.3 shows the breakdown of our proposed RMP.
The proposed RMP has the following main properties:
● The proposed RMP is designed based on a two-polar concept. Indeed, we have designed all elements of our RMP with respect to two main equivalent poles, or columns: risk and response.
● The proposed RMP is generic. This means that risk management analysts must generate an RMP to match the size and complexity of their project.
● Our RMP is integrated with the overall project plan.
● The proposed RMP can be applied at any given level of the project work breakdown structure (WBS). The project WBS is a top-down hierarchical chart of the tasks and subtasks required to complete the project.36
● The skeleton of our proposed RMP is based on the Plan-Do-Check-Act (PDCA) view37 (Kliem and Ludin 1997). The iteration loop consists of
Conceptual Framework
To establish a powerful RMP, the risk management analyst must define the project, its risks and its responses, and distinguish clear relationships among them. The conceptual model presented here is therefore structured around three pivotal elements, i.e., project, risks and responses. The key concepts are defined as follows:
Project measure: The project scope is split into three key success factors: project time, project quality and project cost (see Table 6.4). These factors can be called project measures. In principle, reaching the project scope requires meeting the targets for these three project measures.
Project ultimacy: The ultimate state of the project in terms of the project measures.
Risk event: A discrete occurrence that, if it occurs, has a positive (opportunity) or negative (threat) effect on the project measures (Simon et al. 2004).38 Indeed, risks affect the schedule, quality and cost of project work elements, and these effects propagate to the project scope.
Risk measure: Risks have several characteristics, which can be used to characterize risk events. We call these characteristics risk measures; they are described in Table 6.5.
Risk class: Risk class denotes a typology of risk. A risk, viewed from different perspectives, belongs to different classes.
Response action: A discrete activity that, when carried out, has a positive (ameliorator) or negative (deteriorator) effect on the risk measures.
Response measure: Similar to risks, responses have characteristics that can be used to characterize response actions. These can be called response measures; they are explained in Table 6.6.
Response class: Response class denotes a typology of response. A response, viewed from different perspectives, belongs to different classes.
The conceptual framework clarifying the relationships among the project, risks, responses and their measures contains five important scenarios, listed below (a small data-structure sketch of these elements follows the list):
– Implementing of response actions affects the risk measures
– Occurrence of risk events affects the project measures
– Response measures are used to characterize response actions
– Risk measures are used to characterize risk events
– Project measures are used to characterize project ultimacy
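To make the conceptual framework more concrete, the sketch below expresses the three pivotal elements and their measures as simple data structures; the field names are assumptions chosen to mirror Tables 6.4–6.6, not a schema prescribed by the chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectMeasures:
    """Project measures: the three key success factors of the project scope."""
    time_days: float
    cost: float
    quality: float             # e.g., fraction of acceptance criteria met

@dataclass
class RiskEvent:
    """A discrete occurrence with a positive or negative effect on the project measures."""
    name: str
    probability: float         # risk measure: chance of occurrence
    impact: ProjectMeasures    # risk measure: effect on the project measures
    risk_class: str = "unclassified"

@dataclass
class ResponseAction:
    """A discrete activity with a positive or negative effect on the risk measures."""
    name: str
    probability: float         # response measure: chance the action succeeds
    impact: float              # response measure: effect on the risk measures
    resources: float           # response measure: cost/effort of the action
    response_class: str = "unclassified"
    addresses: List[str] = field(default_factory=list)   # names of the target risks
```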
RMP Start up
Our proposed RMP begins with the start-up phase. In this phase, the organization/project management board decides to apply RMP to the project and appoints the risk management leader. The most important tasks are then establishing the organizational chart of risk management, constructing the risk management team, and training the RMP team and project members. Some critical success factors are as follows:
● Early start-up: Risk management researchers emphasize that RMP should start at a very early stage of the project process. Naturally, when risk management is started early it is more difficult but more useful.51
● Teamwork: Most authors in the risk literature consider risk management to be essentially a team effort.52 They also consider leadership to be key.53 Visible and continuous senior leadership of the RMP is recommended.54
● Training: An organizational focus on risk management training is essential, and project members must receive sufficient training in risk management to implement the RMP effectively.55
● Organizational position: Risk management must have a suitable position in the organizational chart of the project organization. One of the major choices is whether to have a centralized or decentralized risk management organization. A decentralized risk management organization is the recommended approach, and generally results in an efficient use of personnel resources.56
Actuation
This phase is designed as an extended form of the "risk management planning" phase in PMI (2000) or the "establish the context" phase in standard AS/NZS 4360 (2004). The major activities of this phase are presented in Table 6.7; some of them are explained in the next sections. It should be noted that the actuation phase is repeated in each round of the proposed RMP. Indeed, this phase is the core part of "Act" within the PDCA loop.
We believe that almost all of the techniques that can be used in risk identification can also be applied in response identification. Some of these techniques are brainstorming, brainwriting, interviewing, checklists, panel sessions, the Delphi technique, etc.57 In addition to these techniques, we recommend using risk/response classes and the project WBS. The outputs of risk identification and response identification are, respectively, serial lists of risks and responses.
Traditionally, most RMPs consider two risk measures, risk probability and risk impact, which is a two-dimensional notion.58 For example, Kerzner defines risk as f(likelihood, impact).59 These two risk measures are both descriptive of the risk event; other risk measures are not addressed at all.60 We believe that, to model risks more completely, the risk management analyst should consider not only these two measures but all of the pivotal risk measures of Table 6.5. Also, following the two-polar perspective, the risk management analyst can use response measures to model responses. Risk measures focus on the potential risk event itself, whereas response measures focus on the ability to carry out response actions. The next step in establishing the measurement system is to scale the above measures; some example scaled measures are presented in Tables 6.8–6.11.
Hillson (2002) states that risk identification often produces nothing more than a long list of risks, which can be hard to understand or manage.61 The list does not provide any insight into the class of risk. The best way to deal with a large amount of risk information is to classify the risks.
During risk measurement and risk classification, the risk management analyst may carry out some processing on the risks. The aim of risk processing is to improve risk analysis by decreasing complexity and size or by increasing accuracy and precision. Regarding risk measures and risk classes, one may perform one of the following processes:
– Risk screening: removing risks
– Risk bundling: combining several risks into one
– Risk adding: adding new risks
– Risk refracting: decomposing one risk into several risks
The risk management analyst can consider similar processes for responses: response screening, response bundling, response adding, and response refracting.
Risk level is an index that indicates risk magnitude and can be used to determine the priority of risks. For a given work element, a risk with a higher level is more critical. Traditionally, to determine risk level, risk management analysts use two risk measures, risk probability and risk impact, as in Fig. 6.1.
A requirement for using most measures is to project them on a one-dimensional scale.63 Therefore, the risk management analyst may establish a function for determining risk level. (Fig. 6.1 shows a grid of risk probability, from low to very high, against risk impact, with a medium-level risk marked near the centre of the grid.) For instance, according to Wideman (1992), the standard perception is that risk probability multiplied by risk impact results in risk level (1). Conrow (2003) has put more functions forward.64
Regarding the two-polar perspective, response level is an index that presents its
magnitude that could be applied to determine the priority of responses. In other
words, for an assumed risk, a response with higher level is better than a response
with lower level. Within a simple view, similar to the above risk level, we can determine
response level by response probability multiplied by response impact divided by
response resources, (see Fig. 6.2 and (2). The fraction of response impact divided
by response resources indicates efficiency of response.
In a comprehensive view, one could use more risk measures to establish the function for determining risk level. Likewise, based on the two-polar idea, a function that includes more response measures could be used to specify response level. Expressions (3) and (4), respectively, denote these functions.
[Fig. 6.2 Response level determined by response probability (low, medium, high, very high) and response impact]
According to Hillson (2001), there is no doubt that common usage of the word “risk” sees only the downside.65 This is reflected in the traditional definitions of the word, both in standard dictionaries and in some technical definitions (for example, the standard CAN/CSA-Q850-97 (1997) ).66 However, some professional bodies and standards organizations have gradually extended their definitions of “risk” to include both the upside and the downside (for example, the standard AS/NZS 4360 (2004) ). One can consider the concepts of downside risk (threat) and upside risk (opportunity) to be integrated in a risk spectrum. As mentioned previously, a risk has a positive or negative effect on the measures of a project. Also, as discussed in the previous section, this effect can be stated as a risk level. Therefore, by mapping the risk level onto the risk spectrum as in Fig. 6.3, one can determine whether a risk is downside or upside.
Under the two-polar view, we define the concepts of downside response (deteriorator) and upside response (ameliorator). As mentioned previously, a response has a positive or negative effect on the measures of risks. Thus a downside response is an action with a negative effect on risk measures and an upside response is an action with a positive effect on risk measures. By mapping the response level onto the response spectrum as in Fig. 6.4, one can determine whether a response is downside or upside. Naturally, downside responses are not favorable and must be crossed out from the responses list.
Regarding the two subjects of secondary risk and secondary response, one can again observe a two-polar concept. Secondary risks are created after implementing responses
and secondary responses are those that are planned for secondary risks. The risk management analyst may consider these items throughout the assessment phases.
[Fig. 6.3 Risk spectrum: from purely downside risk through a fuzzy area to purely upside risk]
[Fig. 6.4 Response spectrum: from purely downside through a fuzzy area to purely upside]
For an assumed round of the proposed RMP, at the end of the response assessment phase the planned responses should be executed. Implementation and control are therefore part of the “Do” step within PDCA. To implement and control risks, all risks and responses must have ownership. The task of risk ownership is risk control, which includes tracking the risk statement and monitoring it. The task of response ownership is response control, which includes tracking the response implementation and monitoring it. As a useful guideline for assigning risk/response ownership, the risk management analyst may consider the risks and responses previously classified in the risk/response analysis phase. However, it is very important that each person’s responsibility and authority regarding all the risks and responses be determined. Continuous application of control indicators, tools and forms is also a critical subject. To control risks and responses, different indexes, tools and techniques are developed; these have already been specified in the actuation phase and are put into practice in this phase. The essential conditions for beginning a new round of the RMP are also determined in the actuation phase. These conditions may be open-loop control (for example, a six-month period) or closed-loop control (for example, when an index has reached
a particular threshold). Before starting the next round, the success measurement indicators for the previous round must be calculated. It is also useful to record all “lessons learned”, which can be valuable in the next rounds. This constitutes the “Check” step within PDCA.
This final phase guarantees that the RMP completes its mission. It should be noted that the RMP is shut down after closing the project. In the shut-down phase, some major activities should be carried out, as follows. Firstly, it should be determined whether risk management has been successful or not; as mentioned before, the RMP success measurement indicators are established in the actuation phase. Secondly, all data, information, knowledge, experiences and “lessons learned” gained during the RMP rounds should be recorded. This is a very useful input to the next projects and can be a channel for integration with the organization’s knowledge management programs. Thirdly, using the Risk Maturity Model (RMM)67 (Hillson 1997) and the recently implemented RMP, the risk management analyst can determine the RMM level of the project or the organization and use it as a useful guideline for the next projects.
Comparison
In this section, to emphasize the two-polar concept of our proposed RMP, some related aspects are compared, as in Table 6.15. According to the proposed RMP, it is apparent that the importance of risk is equal to the importance of response. This points to “equilibrium” as a critical success factor for the RMP.
Conclusion
Our investigation showed that most risk management research, and consequently most conventional RMPs, places significant emphasis on risk assessment, whereas we found only limited study of risk response. To emphasize the indispensable shift of perspective that is now needed towards a more “equilibrant” RMP, covering both risk assessment and risk response, in this research we proposed a two-polar generic RMP framework for projects and introduced some response-related aspects such as response measures, response level, response spectrum, etc. We conclude that the two-polar perspective proposed in this research can be used to manage project risks in a more effective and productive manner in real-world problems. We hope that taking this perspective directs risk management researchers toward developing more methods, tools and techniques in the field of risk response.
Table 6.15 Some aspects of the two-polar concept of the proposed RMP
Risk related items Response related items
Risk Response
Risk event Response action
Risk measure Response measure
Risk class Response class
Risk level Response level
Risk priority Response priority
Risk event occurrence probability Response action success probability
Risk event impact Response action impact
Risk effect delay Response effect delay
Risk uncertainty Response uncertainty
Risk uniqueness Response uniqueness
Risk assessment Response assessment
Risk identification Response identification
Risk analysis Response analysis
Risk measurement Response measurement
Risk classification Response classification
Risk prioritization Response prioritization
Risk screening Response screening
Risk bundling Response bundling
Risk adding Response adding
Risk refracting Response refracting
Risk event taxonomy structure (ETS) Response action taxonomy structure (ATS)
Risk event structuring matrix (ESM) Response action structuring matrix (ASM)
Risk level function Response level function
Risk spectrum Response spectrum
Downside risk (threat) Downside response (Deteriorator)
Upside risk (opportunity) Upside response (Ameliorator)
Secondary risk Secondary response
Risk ownership Response ownership
Risk control Response control
Risk tracking Response tracking
Risk monitoring Response monitoring
Acknowledgement We are grateful to the chief and experts of the Project Management Research and Development Center for their assistance in executing the present study. This center is commissioned to accelerate the proceduralization of Iranian petrochemical projects (http://www.PMIR.com).
End Notes
1. Williams, T.M. (1995). A classified bibliography of recent research relating to project risk
management, European Journal of Operational Research, 85:1, 18–38.
2. Hulett, D.T. (2001). Key Characteristics of a Mature Risk Management Process, Fourth
European Project Management Conference, PMI Europe, London UK.
3. Wideman, R.M. (1992). Project and Program Risk Management: A Guide to Managing Project
Risks and Opportunities, Project Management Institute, Upper Darby, Pennsylvania, USA.
4. Saari, H.-L. (2004), Risk Management in Drug Development Projects, Helsinki University of
Technology, Laboratory of Industrial Management.
5. Al-Bahar, J., and Crandall, K.C. (1990). Systematic risk management approach for construction
projects, Journal of Construction Engineering and Management, 116:3 533–546.
6. Carter, B., Hancock, T., Marc Morin, J., and Robins, N. (1996). Introducing RISKMAN: The
European Project Risk Management Methodology, Blackwell, Cambridge, Massachusetts
02142, USA.
7. Institution of Civil Engineers, Faculty of Actuaries, Institute of Actuaries. (1998). Risk
Analysis and Management for Projects (RAMP), Thomas Telford, London, UK.
8. Rosenberg, L., Gallo, A., and Parolek, F. (1999). Continuous Risk Management (CRM)
Structure of Functions at NASA, AIAA 99-4455, American Institute of Aeronautics and
Astronautics.
9. U.S. DoD (Department of Defense), Defense Acquisition University, Defense Systems
Management College, (2000), Risk management guide for DoD Acquisition, Defense Systems
Management College Press, Fort Belvoir, Virginia, USA.
10. Humphrey, W.S. (1990). Managing the Software Process, Addison Wesley; Software
Engineering Institute (SEI), (2001), CMMI – Capability Maturity Model Integration, version
1.1 Pittsburgh, PA, Carnegie Mellon University. USA.
11. Kontio, J. (2001). Software Engineering Risk Management: A Method, Improvement Framework,
and Empirical Evaluation, Nokia Research Center, Helsinki University of Technology, Ph.D.
Dissertation.
12. Office of Government Commerce (OGC). (2002). Management of Risk (MOR): Guide for
Practitioners, London.
13. Haimes, Y.Y., Kaplan, S., and Lambert, J.H. (2002). Risk filtering, ranking and management
framework using hierarchical holographic modeling, Risk Analysis, 22:2, 381–395.
14. Del Cano, A., and De la Cruz, M.P. (2002). Integrated methodology for project risk management,
J. Construction Engineering and Management, 128:6, 473–485.
15. Chapman, C.B., and Ward, S.C. (2003). Project risk Management, Processes, Techniques and
Insights, 2nd edn., Wiley, Chichester, UK.
16. Project Management Institute (PMI). (2004). A guide to the project management body of
knowledge (PMBOK guide), Newtown Square, Pennsylvania, USA.
17. Simon, P., Hillson, D., and Newland, K. (2004). PRAM project risk analysis and management
guide, The Association for Project Management (APM), High Wycombe, UK.
18. Pipattanapiwong, J. (2004). Development of Multi-party Risk and Uncertainty management
process for an Infrastructure project, Dissertation submitted to Kochi University of Technology
for Degree of Ph.D.
19. AS/NZS 4360. (2004). Risk Management, Strathfield, Standards Associations of Australia,
www.standards.com.au.
20. Swabey, M. (2004). Project Risk Management, An Invaluable Weapon in any Project
Manager’s Armoury, White Paper, Aspen Enterprises Ltd.
21. Saari, H.-L. (2004). op cit.
22. Miler, J. (2005). A Method of Software Project Risk Identification and Analysis, Ph.D.
Thesis, Gdansk University of Technology, Faculty of Electronics, Telecommunications and
Informatics.
23. Conrow, E.H. (2003). Effective Risk Management: Some Keys to Success, 2nd edn. American
Institute of Aeronautics and Astronautics, Reston.
24. Rosenberg, et al. (1999). op cit.; U.S. DoE (Department of Energy). (2005). The Owner’s Role
in Project Risk Management, ISBN: 0-309-54754-7.
25. US DOE. (2005). op cit.
26. Kleim, R.L., and Ludin, S. (1997). Reducing Project Risk, Gower.
27. Hillson, D. (1999). Developing Effective Risk Response, Proceeding of the 30th Annual
Project Management Institute, Seminars and Symposium, Philadelphia, Pennsylvania,
USA.
28. Fisher, S. (2002). The SoCal Risk Management Symposium – It Made Me Think, Risk
Management Newsletter, 4:4.
29. Williams. (1995). op cit.
30. Elkjaer, M., and Felding, F. (1999). Applied Project Risk Management – Introducing the
Project Risk Management Loop of Control, Project Management, 5:1, 16–25.
31. Hillson. (1999). op cit.
32. Ben-David, I., and Raz, T. (2001). An integrated approach for risk response development in project
planning, Journal of the Operational Research Society, 52, 14–25.
33. Gillanders, C. (2003). When Risk Management turns into Crisis Management, AIPM National
Conference, Australia.
34. Saari. (2004). op cit.
35. Pipattanapiwong. (2004). op cit.
36. Olson, D.L. (2004). Introduction to Information Systems Project Management, McGraw-Hill.
37. Kleim and Ludin. (1997). op cit.
38. Simon, et al. (2004). op cit.
39. Santos, S.D.F.R., and Cabral, S. (2005). FMEA and PMBoK Applied To Project Risk
Management, International Conference on Management of Technology, Vienna.
40. Elkjaer and Felding. (1999). op cit.
41. Garvey, P.R. (2001). Implementing a Risk Management Process for a Large Scale Information
System Upgrade – A Case Study, Incose Insight, 4:1.
42. Sandy, M., Aven, T., and Ford, D. (2005). On Integrating Risk Perspectives in Project
Management, Risk Management: An International Journal, 7:4, 7–21.
43. Charette, R. (1989). Software Engineering Risk Analysis and Management, McGraw Hill.
44. Clayton, J. (2005). West Coast CDEM Group Operative Plan, Civil Defense & Emergency
Management Group for the West Coast.
45. Wideman. (1992). op cit.
46. Labuschagne, L. (2003). Measuring Project Risks: Beyond the Basics, Working paper, Rand
Afrikaans University, Johannesburg.
47. Swabey. (2004). op cit.
48. Labuschagne. (2003). op cit.
49. Clayton. (2005). op cit.
50. Hillson. (1999). op cit.
51. Saari. (2004). op cit.
52. U.S. Department of Defense. (2000). op cit.
53. Chadbourne, S.B.C. (1999). To the Heart of Risk Management: Teaching Project Teams to
Combat Risk, Proceedings of the 30th Annual Project Management Institute, Seminars and
Symposium, Philadelphia, Pennsylvania, USA.
54. Graham, A. (2003). Risk Management: Moving the Framework to Implementation: Keys to
a Successful Risk Management Implementation Strategy, A Report by the Graham and
Deloitte and Touche Site.
55. Chadbourne. (1999). op cit.; Graham. (2003). op cit.
56. U.S. Department of Defense. (2000). op cit.
57. Del Cano and de la Cruz. (2002). op cit.
58. Williams, T.M. (1996). The two-dimensionality of project risk, International Journal of
Project Management, 14:3.
59. Kerzner, H. (2003). Project Management: A Systems Approach to Planning, Scheduling, and
Controlling, 8th edn. Wiley.
60. Labuschagne. (2003). op cit.
61. Hillson, D. (2002). The Risk Breakdown Structure (RBS) as an Aid to Effective Risk
Management, Fifth European Project Management Conference, PMI Europe, Cannes,
France.
62. Dorofee, A.J., Walker, J.A., Alberts, C.J., Higuera, R.P., Murphy, R.L., and Williams, R.C.
(1996). Continuous Risk Management (CRM) Guidebook, Carnegie Mellon University
Software Engineering Institute (SEI), US.
63. Porthin, M. (2004). Advanced Case Studies in Risk Management, Thesis for Master of
Science in Technology, Helsinki University of Technology, Department of Engineering
Physics and Mathematics.
Canadian winters are extreme: cold and snow are a fact of everyday life. Canada spends over $1Bn every year removing snow. As one example, consider the city of Montreal. The city spends over $50M every year removing snow, about 3% of its total budget. It does so through a fixed-price contract with a third party, which starts on November 15 and ends on April 15 – the snow season. During this time, the city’s exposure to snow removal costs is, to a large degree, predictable. However, snow precipitation outside of this period can become very costly: it is outside of the contractual arrangement, and the city may incur expenses which may, on a relative basis, exceed those during the snow season. The city is exposed to snow financial risk. But snow financial risk also affects other corporations, such as ski resorts. For them, the snow financial risk is the opposite: low precipitation during the late part of the fall or early spring will yield operational losses compared to years when snowfall is ample early in the fall or late into the spring. They also face snow financial risk.
Some time ago, a proposal was launched to partially mitigate this: a snow swap. In it, the city would pay a premium to a dealer when snow is scarce outside the snow season, and receive a payment if snow appears. Similarly, a ski resort would receive payments if snow is scarce and would pay if snow is plentiful. The dealer arranges this, and collects a commission for its services. The dealer has no risk exposure to snow precipitation because it is exchanging offsetting payments between the two parties. The snow swap did not succeed, however, because there was no agreement as to where the snow precipitation measurements were to occur. The snow financial risk seemed to be solved by the snow swap, but the geographical spread risk could not be absorbed by anyone.
Let us consider the following hypothetical proposition: a group of investors (a fund) gets together, puts up some money upfront (merely as collateral), and decides to take the geographical spread risk. It will pay the city in case of out-of-season snowfall in the city, and will pay the ski resort in case of no out-of-season snowfall at the resort. Conversely, it will receive payments from both if the opposite occurs. With a nominal payment of $1M and a nominal fee of 10% ($100,000), the deal will look as follows (Table 7.1):
The difference from the previous, unsuccessful snow swap is that in this case both the city and the ski resort get to measure the snow precipitation at the place of their choice, with the fund taking the geographical risk. To move ahead with our example, let us assume the snow events in the two places are correlated at 50%, and that the fund will charge a $200,000 fee for its risk: this means that the cash flows for the fund will be as follows (Table 7.2):
To get an idea of the quality of these funds, note that the expected return on the $2M the fund had to invest to participate in the swap is $200,000, or 10%, comparable to an investment in the stock market. The standard deviation, however, is 50%, which is more or less comparable to a game of poker. From an investment viewpoint, this is not a very good proposition, as the risk is too high for the expected return. Things become more interesting if the fund decides to do similar swaps in other cities. If 100 independent swaps are considered, for a total of $200M invested, the expected return continues to be 10%, but the standard deviation, as a measure of risk, now drops to 5%. As an investment, this is now better than investing in the stock market, and the fund has a future.
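To make the diversification arithmetic concrete, here is a minimal sketch in Python (not part of the original text) that reproduces the figures quoted above, assuming the swaps are independent and identically distributed and the fund is equally weighted across them:

```python
import math

def portfolio_stats(expected_return, stdev, n_swaps):
    """Mean and standard deviation of an equally weighted portfolio of
    n independent, identically distributed swap positions."""
    # Diversification leaves the expected return unchanged...
    portfolio_mean = expected_return
    # ...but the standard deviation of independent positions scales as 1/sqrt(n).
    portfolio_stdev = stdev / math.sqrt(n_swaps)
    return portfolio_mean, portfolio_stdev

print(portfolio_stats(0.10, 0.50, 1))    # one snow swap: (0.10, 0.50)
print(portfolio_stats(0.10, 0.50, 100))  # one hundred independent swaps: (0.10, 0.05)
```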
But things are in fact slightly better. In our snow fund, we raised $200M to post as collateral for 100 different swap agreements. This gave rise to an expected return of 10% ($20M) for the period (6 months), with a standard deviation of 5%. Note that in calculating our cash flows we have neglected the fact that the collateral ($200M) was not to be used except as a guarantee to the counterparties – cities and ski resorts – that our fund would be able to honor its payment obligations even if all deals turned against the fund. In other words, the collateral is there just to give the fund the right credit rating for the deal; the fund would obtain a rating of AAA, the best possible. But there is no reason to hold the $200M in cash: one could easily invest it in T-Bills (short-term interest notes issued by the government of the United States), and hence earn LIBOR, the ongoing risk-free interest rate. In this way, our return will be LIBOR+10%, with a standard deviation almost unchanged.
Situations such as this one are becoming common at the beginning of the twenty-first century: a certain investment partnership takes on some risk in an effort to obtain a return. The risk is often the result of providing risk mitigation to a third party, but the fund absorbs residual risk, which is often hard to deal with
but may be diversifiable, as in our example. These funds, which often operate in areas where the traditional financial companies (banks, insurance companies, etc.) do not, and are sometimes based in domiciles which allow unregulated activities (Cayman, Bermuda, etc.), are generally called hedge funds.
But is this type of activity new? From an abstract point of view, financial activity is an affair of risk transfer. Stocks and bonds, the financial instruments of the nineteenth century, are designed to allow investors to participate in commercial enterprises; stockholders assume market risk, i.e., the risk that the firm does not meet profitability expectations; bond investors are not exposed to that market risk, and only assume default risk, i.e., the risk that the issuing entity cannot meet its financial obligations. This is also called credit risk, and losses can also occur without the company defaulting: a mere credit downgrade will lead to a decrease in the market value of the bond, and hence a loss, realized or not.
In the latter part of the twentieth century, market risk was traded massively through the derivatives market. Investors could buy price protection related to stocks, currencies, interest rates or commodities by purchasing options or other derivatives; some are standard, others are tailor-made and labelled “over-the-counter.” At the same time, default (or credit) risk was handled through ad hoc considerations, but was not part of a quantitative treatment, and hence the transfer of credit risk was not common. Towards the end of the twentieth century, events such as the Russian default, Enron and Worldcom, and the demise of Long Term Capital Management put credit risk at the forefront of financial institutions’ concerns, and credit transfer emerged.
Today, credit risk is regulated under BIS-II, the resolution of the Bank for International Settlements, but the credit market has only just started (although at the present time its volumes are very high). A host of new credit products are created every day. Later in the paper we will explore some of the newest ones, Collateralized Fund Obligations, or CFOs, designed to provide financing to investors in hedge funds. What is interesting, from a mathematical viewpoint, is that the arrival of new credit-sensitive products is accompanied by new risks, which need to be identified and priced.
We will review some of the earlier properties of financial risk, and we will focus
on the analysis of CFOs as a means to highlight some of the new paradigms that we
will likely face in the near future.
Pricing Risk
There are three types of risk: diversifiable risk, tradable risk (or hedgeable risk), and systemic risk. The first type is the one we considered in the snow swap: there was nothing we could do to mitigate it, but building a portfolio of independent risks allowed us to diversify it to the point that it was worth taking. The second type is tradable risk, best explained through the following example. The main difference with respect to our previous example is that, in this case, we will be able to price the risk accurately, as described below.
Imagine the following very simple hypothetical situation (see Fig. 7.1). There is an asset (a stock, a home, a currency, etc.) trading today at $1, which can only be worth $2 or $0.50 next year, with equal probability; interest rates are 0%, i.e., borrowing is free. Consider also an investor who may need to buy this asset next year and is therefore concerned about an increase in its value; for that reason the investor decides to buy insurance in the following form: if the asset rises to $2, the insurance policy will pay $1; if the asset drops in price, however, the policy pays nothing. This situation is summarized in Fig. 7.1. One would be tempted to price this insurance policy with a premium obtained through probabilistic considerations, and it would seem that $0.50 is the price that makes sense.
However, the following argument shows that this is not the case: if the investor paid $0.50, then the seller of the policy could implement the following investment strategy: she borrows an additional $0.10 and buys 60% of the stock. If the stock rises in value, after paying the $1 and returning the loan she makes a profit of $0.10. If, however, the stock drops in price, she makes a net profit of $0.20, as the policy pays nothing and she only needs to return the loan. In other words, $0.50 is too much, as the issuer of the option will always make a profit: this phenomenon is called arbitrage, and it is a fundamental assumption of pricing theories that arbitrage should not exist (market design assumes that any chance of making free money will be eliminated from the market by smart traders, affecting the price, which will immediately reach a no-arbitrage equilibrium). A simple calculation shows that the no-arbitrage price is exactly $1/3. As opposed to traditional insurance premiums, financial insurance for tradable risks is not based merely on probabilistic considerations.
This simple example (a “call option”) is the basis of the no-arbitrage pricing theory,1 and we can quickly learn a few things from it. First, the price of a contract that depends on market moves may be replicated with buy/sell strategies, which mimic the contract pay-out but can be carried out with fixed, pre-determined costs. Second, there is a probability of events which is implied by their price, and which is perhaps independent of historical events. In our example above, the implied probability of an up-move has to be 33%, and the probability of a down-move 67%, because with those probabilities we can price the contract by taking simple expectations.
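The replication argument can be checked mechanically. The following sketch (an illustration added here, using only the numbers of the example) solves for the replicating portfolio of the one-step example and recovers the $1/3 price and the implied risk-neutral probability:

```python
# One-step example: S0 = 1 today, worth Su = 2 or Sd = 0.5 next year, zero rates.
# The insurance policy pays 1 in the up state and 0 in the down state.
S0, Su, Sd, r = 1.0, 2.0, 0.5, 0.0
payoff_up, payoff_down = 1.0, 0.0

# Replicating portfolio: delta shares plus cash b must match the payoff in both states:
#   delta * Su + b * (1 + r) = payoff_up
#   delta * Sd + b * (1 + r) = payoff_down
delta = (payoff_up - payoff_down) / (Su - Sd)       # 2/3 of a share
b = (payoff_down - delta * Sd) / (1 + r)            # -1/3, i.e. a loan of $1/3
price = delta * S0 + b                              # cost of the replication: $1/3

# Risk-neutral up-move probability implied by no-arbitrage, and the resulting price.
q_up = ((1 + r) * S0 - Sd) / (Su - Sd)              # 1/3
expected = (q_up * payoff_up + (1 - q_up) * payoff_down) / (1 + r)

print(price, q_up, expected)   # 0.333..., 0.333..., 0.333...
```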
However, a closer look at the previous example will convince the reader with a background in diffusion processes that the same reasoning applies when the simple one-step model is replaced by the continuous-time diffusion
$$dS = \mu S\,dt + \sigma S\,dW^{P},$$
where S denotes the stock price, µ the drift, σ the volatility, and dW the infinitesimal Brownian increments. An option on a stock is a contract that
will pay a future value at expiration: the payoff depends on the value of the underly-
ing stock S, and will be denoted by f0(S). We denote by T the expiration time. Note
the similarity with our simple example above (in Fig. 7.1), the main difference
being that in our case now the stock trades continuously and we could therefore
replicate our option by trading the stock continuously. In this case, the Black–
Scholes–Merton theory shows that the price of the option contract is obtained by
solving the following backward parabolic Partial Differential Equation, or PDE, for
all times t<T prior to expiration:
$$\begin{cases} \dfrac{\partial f}{\partial t} + \dfrac{\sigma^{2}}{2}\,S^{2}\,\dfrac{\partial^{2} f}{\partial S^{2}} + r S\,\dfrac{\partial f}{\partial S} - r f = 0,\\[4pt] f(S,T) = f_{0}(S). \end{cases}$$
At first sight, this expression has two counterintuitive features: the absence of µ and the presence of the interest rate r in the PDE. A moment's reflection, however, will convince us that this is not entirely surprising: after all, in our example in Fig. 7.1 we already saw that the price of that option is independent of the probabilities of up and down moves of the stock, and only depends on the cost of borrowing. This was forced on us by our no-arbitrage assumption.
In more general terms, it turns out that option pricing can be established by tak-
ing expectations with respect to a “risk neutral” measure Q, which is perhaps dif-
ferent from the historical measure P. In our particular case, this implies that the
solution to the PDE is given by
$$f(S,t) = \frac{e^{-r(T-t)}}{\sqrt{2\pi (T-t)}\,\sigma} \int_{0}^{\infty} \frac{f_{0}(u)}{u}\, \exp\left\{ -\frac{\Bigl( \ln\bigl(\tfrac{u}{S}\bigr) - \bigl(r - \tfrac{\sigma^{2}}{2}\bigr)(T-t) \Bigr)^{2}}{2 (T-t)\,\sigma^{2}} \right\} du,$$
which is easily checked. From this perspective, pricing becomes equivalent to finding risk-neutral probabilities and their pay-off expectations, and the PDE above is nothing but the Feynman–Kac formula for this expectation.
The Black–Scholes–Merton theory also shows that one can replicate the option pay-off by continuously trading the stock so that we always hold ∂f/∂S units of it.
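As a numerical illustration (the parameter values below are arbitrary and not from the text), the risk-neutral integral above can be evaluated directly and compared with the familiar Black–Scholes closed form for a call pay-off:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Arbitrary illustration: spot, strike, rate, volatility, time to expiration.
S, K, r, sigma, tau = 100.0, 95.0, 0.03, 0.25, 1.0

def risk_neutral_price(payoff):
    """Discounted expectation of payoff(S_T) under the log-normal risk-neutral density."""
    def integrand(u):
        z = np.log(u / S) - (r - 0.5 * sigma**2) * tau
        density = np.exp(-z**2 / (2 * sigma**2 * tau)) / (u * sigma * np.sqrt(2 * np.pi * tau))
        return payoff(u) * density
    upper = S * np.exp((r - 0.5 * sigma**2) * tau + 8 * sigma * np.sqrt(tau))
    value, _ = quad(integrand, 1e-8, upper, limit=200)
    return np.exp(-r * tau) * value

def black_scholes_call(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

print(risk_neutral_price(lambda u: max(u - K, 0.0)))   # the two printed values agree
print(black_scholes_call(S, K, r, sigma, tau))
```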
This signified a tremendous revolution, one that won Scholes and Merton the Nobel Prize in Economics in 1997 (Black had died two years earlier), as it not only established a pricing mechanism for the booming options and derivatives markets, but also established certainty where there was risk: derivatives could be replicated by buy/sell strategies with predetermined costs.
Their discovery revolutionized market risk perspectives. But Merton, who had re-derived the pricing formalism using stochastic control theory, used this advance to start the modern theory of credit risk. His viewpoint, which we present below, was just as revolutionary.
Merton viewed a firm as consisting of shareholders and bond-holders. Bond-holders lend money to the firm, and the firm promises to pay back the loan with interest. Shareholders own the value of the assets of the firm minus the value of the debt (or liabilities); but firms have limited liability, which means that if the value of the assets falls below the value of the liabilities, in Merton's view, the firm defaults, the shareholders owe nothing and the bond-holders use the remaining value of the assets to recover a portion of their loan. In other words, the shareholders own a call option on the value of the assets of the firm, with a strike price given by the value of the liabilities at the given maturity of the loan. The timing of his theory, which dates back to 1974, was perfect, as the theory of option pricing had been developed just one year earlier, and this opened the ground for credit risk pricing and credit risk derivatives.
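A minimal sketch of Merton's structural view follows (the firm figures are hypothetical): equity is priced as a call on the firm's assets struck at the face value of the debt, and the risk-neutral default probability comes from the same log-normal assumption:

```python
import numpy as np
from scipy.stats import norm

def merton_equity_and_default(V, D, r, sigma_V, T):
    """Merton structural model: V = asset value today, D = face value of debt due
    at T, sigma_V = asset volatility, r = risk-free rate.  Equity is a call on
    the assets struck at D; default means assets below D at maturity."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    equity = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)
    debt = V - equity                      # the bond-holders own the rest of the assets
    default_probability = norm.cdf(-d2)    # risk-neutral P(V_T < D)
    return equity, debt, default_probability

# Hypothetical firm: assets worth 120, debt with face value 100 due in one year.
print(merton_equity_and_default(V=120.0, D=100.0, r=0.03, sigma_V=0.30, T=1.0))
```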
Strictly speaking, the Merton approach assumes that the liabilities of a firm (its debt) expire at a certain time, and that default can occur only at that time. Black and Cox conceptually refined Merton's proposal by allowing defaults to occur at any time within the life of the option, creating the “first passage default models.”4 The reason for this modification is that, according to Merton's model, the firm value could dwindle to nearly nothing without triggering a default until much later; all that mattered was its level at debt maturity, and this is clearly not in the interest of the bond holders. Bond indenture provisions therefore often include safety covenants providing the bond investors with the right to reorganize or foreclose on the firm if the asset value hits some lower threshold for the first time. This threshold could be chosen as the firm's liabilities.
But the largest event in the credit market still had to wait until 1998, when the default of Russia and the menace of the impeachment of President Clinton over the Monica Lewinsky affair threw financial markets into disarray; the Russian default, and worries about the political stability of the United States, created a credit crunch as bond investors fled from corporate debt to the more secure treasury bill market, introducing credit spread dislocations of historical proportions. This situation culminated with the collapse of Long Term Capital Management, a multi-billion dollar hedge fund that, anecdotally, had lured Scholes and Merton onto its board of directors.
The result of these massive historical events was the explosion of the credit
market. In it, financial players seek to buy and sell credit risk, either for insurance
[Fig. 7.2 A credit default swap: C insures the bond, receiving periodic payments from B; if A defaults, C pays B principal plus interest and can lose everything, while B loses nothing except the payments it made to C]
as counterparties to these types of deals; the hedge fund style that does this is called mortgage arbitrage (here the term arbitrage is abused, in the sense that there is no real arbitrage, just a statistical arbitrage, as the tranches pay more on average than other instruments with similar risk profiles).
The valuation of such structures is based on computing the probability distribution of the event “mth default.” This is technically difficult because it requires one to handle the multivariate distribution of defaults, and most credit models generally fail to capture multiple defaults reliably. There are basically two procedures for evaluating these basket derivatives: multifactor copula models5 and intensity models.6
Escobar and Seco present a partial differential equation (PDE) procedure for valuing a family of credit derivatives within the structural framework, where the default event is associated with whether the minimum value of a stochastic process (the firm's asset value) has reached a benchmark, usually the firm's liabilities.7 More precisely, they assume:
● The interest rate r is constant
● The value of the assets, Vi(t), follows an Ito process with constant drift r and volatility σi(t): dVi(t) = r Vi(t) dt + σi(t) Vi(t) dWi(t)
● Firm i defaults as soon as its asset value Vi(t) reaches the liabilities, denoted Di(t). This is the definition of default within the structural framework.8
Define X(t) = ln V(t) as the n-dimensional Brownian motion vector with drift µ = (µ1,…,µn), µi = r − σi²(t)/2, and covariances σi,j(t). The running minimum is defined as
$$\underline{X}_{i}(t) = \min_{0\le s\le t} X_{i}(s).$$
They show that the price is a function of the multivariate density p of the vector of joint Brownian motions and Brownian minimums (it can easily be extended to maximums):
$$P\bigl(X_{1}(t)\in dx_{1},\dots,X_{n}(t)\in dx_{n},\ \underline{X}_{1}(t) > m_{1},\dots,\underline{X}_{n}(t) > m_{n}\bigr) = p(x_{1},\dots,x_{n},t,m_{1},\dots,m_{n},\mu,\Sigma)\,dx_{1}\cdots dx_{n}.$$
For the case of more than two underlying components, p is the solution of a PDE with initial and absorbing boundary conditions (a Fokker–Planck equation) given by
$$\begin{cases} \dfrac{\partial p}{\partial t} = -\displaystyle\sum_{i=1}^{n} \mu_{i}(t)\,\dfrac{\partial p}{\partial x_{i}} + \sum_{i,j=1}^{n} \dfrac{\sigma_{ij}(t)}{2}\,\dfrac{\partial^{2} p}{\partial x_{i}\,\partial x_{j}},\\[4pt] p(x, t=0) = \prod_{i=1}^{n} \delta(x_{i}),\\[4pt] p(x_{1},\dots,x_{i}=m_{i},\dots,x_{n}, t) = 0, \quad i=1,\dots,n,\\[4pt] x_{i} > m_{i},\ m_{i} \le 0,\ i=1,\dots,n. \end{cases}$$
For a single firm (n = 1), the probability that the running minimum stays above the barrier is known in closed form:
$$P\bigl(\underline{X}_{1}(t) \ge m_{1}\bigr) = \Phi\!\left(\frac{\mu t - m_{1}}{\sigma\sqrt{t}}\right) - \exp\!\left\{\frac{2\mu m_{1}}{\sigma^{2}}\right\}\,\Phi\!\left(\frac{m_{1} + \mu t}{\sigma\sqrt{t}}\right).$$
He, Keirstead and Rebholz provided an explicit formula for the joint density for the case of two Brownian motions, or two underlying stocks.9 Formulas for the general n-dimensional case remain unknown.
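The one-dimensional formula can be checked by simulation. The sketch below (illustrative parameter values) compares the closed-form survival probability of the running minimum with a crude Monte Carlo estimate on a fine time grid:

```python
import numpy as np
from scipy.stats import norm

def survival_closed_form(m, mu, sigma, t):
    """P(min_{0<=s<=t} X(s) >= m) for X(s) = mu*s + sigma*W(s), X(0) = 0, m <= 0."""
    sq = sigma * np.sqrt(t)
    return norm.cdf((mu * t - m) / sq) - np.exp(2 * mu * m / sigma**2) * norm.cdf((m + mu * t) / sq)

def survival_monte_carlo(m, mu, sigma, t, n_paths=50_000, n_steps=1_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        alive &= (x >= m)
    return alive.mean()

# Illustrative parameters: a default barrier 20% below the initial log asset value.
m, mu, sigma, t = -0.20, 0.05, 0.30, 1.0
print(survival_closed_form(m, mu, sigma, t))
print(survival_monte_carlo(m, mu, sigma, t))   # close, up to the discrete-monitoring bias
```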
Let us consider now our next, and final, example, which brings together the hedge fund example of Sect. 1 and the credit derivatives of the previous section.
There are over 10,000 hedge funds in the world. Many of them try to obtain returns independent of market directions but, unlike our snow fund example, the majority of them try to do it through financial instruments which are traded on financial exchanges: equities, bonds, derivatives, futures, etc. They often try to extract return from situations of inefficiency: for example, they would buy a stock – also termed “taking a long position” – which they perceive is undervalued with respect to the value of its underlying assets, and would sell short – borrow, or “take a short position” – a stock they perceive is overvalued with respect to the value of its assets, expecting a convergence to their fair prices, hence obtaining return in the long term while not being subject to the direction of the stock markets, which will probably affect their long and short stock portfolios in the same way. Others may do the same with bonds: some bonds earn slightly higher interest than others simply because fewer of them exist, and they hence trade slightly cheaper than larger, more popular bond issues. Other funds will monitor mergers between companies and try to benefit from the convergence in equity value and bond value that takes place after a merger by taking long and short positions in the companies’ stocks and/or bonds. And we already mentioned those funds which try to benefit from the slightly higher interest-earning properties of tranches of mortgage pools with respect to borrowing interest rates.
All of this provides investors with a wide universe of investment choices. Let us imagine that each of those funds gives us returns similar to the snow fund: LIBOR+10% expected return, and 5% standard deviation. A portfolio of such investments will give us the same expected return, but the standard deviation is likely to decrease, because their return streams will be uncorrelated with each other. These investments, at least on paper, look extremely attractive. However, for the risk diversification to truly exist, one needs to invest in a sufficiently large number of them; there is always the possibility of fraud (these funds are largely unregulated and unsupervised), convergence-based trades may take a long time before they work, deviations from our mathematical
expectations may occur in the short term, etc. And, unlike stocks or mutual funds, these funds often require minimum investments of the order of $1M. That means that diversifying amongst them will require substantial amounts of money.
There are several ways to invest in hedge funds; the three most frequent ones are:
● Fund of funds. These are simple portfolios of hedge funds. The assets of the fund-of-funds are invested in a number of hedge funds (from 10 to 100). The chosen hedge funds are usually of a variety of different trading styles, to achieve maximum diversification.
● Leveraged products. Imagine an investor has $10M to invest in hedge funds. Instead of allocating $1M to each of ten different hedge funds, they may borrow an additional $30M from lenders and invest the total amount, $40M, in 40 different hedge funds. The investor pays interest to the lenders and keeps the remaining gains. We describe these types of investments in more detail below.
● Guaranteed products. These are term products, issued at maturities of 5 years, for example. The investor is guaranteed their money back after that period – 5 years – with no interest, of course. In lieu of interest, they will receive a variable amount, which will be linked to the performance of the hedge fund portfolio. If the performance is good, the payment may be very large. If not, they simply get their money back, without interest. They are issued by a high-quality institution, which will take the investor's assets, invest a portion in a bond that will guarantee the principal at maturity of the note, and invest the rest – the interest earnings that the investor gives up – in a leveraged product, to maximize the return of the investor's assets. They are very popular as retail products, as well as with institutions which can only invest in bonds, as these products can be structured as a bond (Fig. 7.3).
Leveraged products are attractive for the following reason. Back in our snow fund example, the expected return was LIBOR+10%. LIBOR is the base lending rate. With proper collateral, lending at LIBOR+1% is very feasible. That means that we can borrow at LIBOR+1% and invest at LIBOR+10%. In other words, for every dollar we borrow we will make 9 cents for free, after paying all fees. Therefore, investors should want to borrow as much as possible and invest all the borrowed amounts. If it were not for the standard deviation, that would indeed be fantastic. The standard deviation, as well as other risks, limits the borrowing capacity and appetite of investors.
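The leverage arithmetic can be written out explicitly. In the sketch below, the LIBOR+10% return, the 5% standard deviation and the LIBOR+1% borrowing rate come from the discussion above, while the 5% LIBOR level itself is an arbitrary assumption:

```python
def leveraged_equity_return(equity, borrowed, libor,
                            fund_excess=0.10, borrow_spread=0.01, fund_stdev=0.05):
    """Expected return and standard deviation on the investor's equity when
    equity + borrowed is invested at LIBOR + fund_excess and the loan costs
    LIBOR + borrow_spread.  All rates are annualized fractions."""
    total = equity + borrowed
    expected_gain = total * (libor + fund_excess) - borrowed * (libor + borrow_spread)
    expected_return = expected_gain / equity
    stdev = fund_stdev * total / equity          # risk scales with the leverage ratio
    return expected_return, stdev

# The $10M investor of the bullet above, borrowing an additional $30M.
# A 5% LIBOR level is assumed purely for illustration.
print(leveraged_equity_return(equity=10e6, borrowed=30e6, libor=0.05))
# -> roughly (0.42, 0.20): LIBOR + 37% expected, with a 20% standard deviation at 4x leverage.
```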
[Fig. 7.3 Structure of a guaranteed product: the secure debt component insures the principal, while the leveraged investment obtains the return]
Leveraged products are most often offered by banks: they lend to investors, the investors take the first risk that the funds do not perform as expected, but the banks face the secondary risk that the losses exceed the equity provided by the investors, in which case a portion of the lent amount may also be lost. Let us just mention that a number of safety measures are put in place by the banks to prevent this from happening, such as partial liquidation of the investments as the performance deviates from expectations. Recently, leveraged products have been organized by banks while the borrowed amount is raised from outside investors, through bond tranches very similar to the CDO structures we reviewed in our previous section. To explain how this works, we consider the case of the Diversified Strategies CFO SA, launched in 2002. Investors provided equity worth $66.3M, which supported an investment of $250M in hedge funds. The additional funds ($183.7M) were raised through three bond tranche issues, as follows:
● AAA tranche ($125M)
● A tranche ($32.5M)
● BBB tranche ($26.2M)
We are not going to go into the details of the transaction in great depth; we will simply mention that the tranche structure is similar to a CDO: the bond investors provide the capital and, upon maturity, get their principal and interest. In case the CFO structure fails to have enough assets to pay back its debts, the CFO will enter into default. In that scenario, the AAA-tranche investors are first in line to get their money back (principal plus interest); next in line will be the A tranche, and the BBB tranche will be last in line. In a default situation, the equity investors would have lost all their assets. Because of the difference in default risk, each of the bond investors receives different interest payments, highest for the BBB tranche and lowest for the AAA investors.
The interest payment their risk is worth – a credit spread – is a very interesting risk pricing problem. It is easier than the CDO pricing problem we described earlier, since here we only need to look at the performance of the entire fund, and we do not need to enter into individual default numbers. In fact, under the assumption that the fund returns are normally distributed, it is very easy to determine the credit spread: the probability of default will be given by the quantile of a normally distributed Ito process, which has a simple risk-neutral analog, and we just price that by taking the expectation under the risk-neutral measure. In the case of the Diversified Strategies CFO, the respective interest rates were as follows:
● AAA tranche: LIBOR+0.60%
● A tranche: LIBOR+1.60%
● BBB tranche: LIBOR+2.80%
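As a rough indication of why the senior tranche commands the smallest spread, the sketch below estimates the probability that the fund assets at the horizon fall short of each tranche's cumulative principal claim. The waterfall is deliberately simplified (principal only, interest ignored) and the assumed return distribution, 10% expected growth with a deliberately wide 15% standard deviation, is purely illustrative; only the tranche sizes and spreads are taken from the text:

```python
from scipy.stats import norm

# Diversified Strategies CFO: $250M of hedge fund assets funded by $66.3M of
# equity plus three bond tranches (sizes in $M, spreads over LIBOR as quoted above).
assets0 = 250.0
tranches = [("AAA", 125.0, 0.0060), ("A", 32.5, 0.0160), ("BBB", 26.2, 0.0280)]

# Purely illustrative distribution of the fund value at the horizon.
mean_assets = assets0 * 1.10
stdev_assets = assets0 * 0.15

cumulative_claim = 0.0
for name, size, spread in tranches:
    cumulative_claim += size
    # A tranche is impaired if the assets cannot cover all principal senior to
    # and including it (simplified waterfall).
    prob_impaired = norm.cdf((cumulative_claim - mean_assets) / stdev_assets)
    print(f"{name}: size {size}, spread {spread:.2%}, P(impaired) ~ {prob_impaired:.1e}")
# The ordering of the impairment probabilities mirrors the ordering of the spreads.
```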
Non-Gaussian Returns
Many of the mathematical theories that study financial problems make a fundamental assumption: returns are normally or log-normally distributed. It is a reasonable assumption that permits robust mathematical modeling. However, the non-Gaussian properties of real market data are a fact, and considerable effort goes into mathematical modeling that relaxes the Gaussian assumptions. In our context, the non-Gaussian nature of real markets exhibits itself in two main ways:
Non-Gaussian marginal distributions. The graph below depicts the monthly return
frequency of a hedge fund index, the CSFB fixed income arbitrage index (Fig. 7.4):
There are clear non-Gaussian features: for example, fat tails, also called kurtosis, which in this case we can trace back to the events of 1998, and asymmetry, also known as skewness. This second feature comes naturally for most series, as a return cannot go below −1 (the event of total loss) but can theoretically be arbitrarily large. This left-bounded range, together with the drive of companies to emphasize above-average growth, leads to asymmetric distributions for the returns. Other common but difficult-to-graph marginal features are time-dependent return volatilities and trends and cycles in the return's mean, just to mention a few.
Non-Gaussian dependence structures. If one tries to determine the dependence amongst several assets by fitting it to a correlation matrix, one often finds that, at certain times, the simultaneous occurrence of certain events does not correspond to the correlation measure.
This is a high-dimensional phenomenon, which is not so easy to describe graphically, but we will try to explain it with the following sets of pictures (Fig. 7.5).
In the first one, we see the correlation matrix of a hedge fund universe. The matrix is read from left to right and from bottom to top, and numbers close to +1 or −1 are represented by a dark pixel, whereas numbers close to 0 are represented by a light pixel. We see that correlations are mostly low, with few instances of high correlations. This is consistent with our view of hedge funds.
The second picture represents the correlations taking into account only months of unusual returns, say months where the returns exceed the Gaussian safety band of 2 standard deviations from the mean. We see a very different correlation struc-
ture, with increased high correlation numbers. We denote this as correlation risk, or
correlation breakdown phenomena (Fig. 7.6).
Given that correlation is one of the fundamental properties of hedge fund investing (remember our snow fund), correlation breakdown is a very damaging non-Gaussian effect for hedge fund portfolios and related structures.
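The correlation-breakdown effect can be reproduced with simulated data. The sketch below (purely illustrative, a two-regime Gaussian mixture of the kind discussed in the next paragraphs) compares the full-sample correlation of two return series with the correlation computed only over "unusual" months, those in which either series moves by more than two standard deviations:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixture(n_months, p_distress=0.10):
    """Two-regime Gaussian mixture for a pair of funds: weakly correlated,
    low-volatility calm months and highly correlated, volatile distressed months."""
    calm = 0.02**2 * np.array([[1.0, 0.1], [0.1, 1.0]])
    distressed = 0.04**2 * np.array([[1.0, 0.9], [0.9, 1.0]])
    is_distressed = rng.random(n_months) < p_distress
    out = np.empty((n_months, 2))
    out[~is_distressed] = rng.multivariate_normal([0.0, 0.0], calm, size=int((~is_distressed).sum()))
    out[is_distressed] = rng.multivariate_normal([0.0, 0.0], distressed, size=int(is_distressed.sum()))
    return out

returns = sample_mixture(5_000)
full_corr = np.corrcoef(returns.T)[0, 1]

# "Unusual" months: either fund moves by more than 2 (unconditional) standard deviations.
z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
extreme = (np.abs(z) > 2).any(axis=1)
extreme_corr = np.corrcoef(returns[extreme].T)[0, 1]

print(f"full-sample correlation:   {full_corr:.2f}")     # modest
print(f"extreme-month correlation: {extreme_corr:.2f}")   # markedly higher
```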
The presentation so far assumed correlation to be the right measure of dependence. The emphasis on correlation as “the” measure of dependence structures has been strongly challenged since the nineties by the mathematically more general notion of copulas, of which the Gaussian correlation is a particular case.10 This area of research is quite complex from a mathematical viewpoint, and at the same time it is difficult to provide a meaning and a reliable estimation framework for the various parameters that appear; it is therefore still very much under development.
These non-Gaussian dependence structure features have an important impact all over mathematical finance, leading to interesting results on apparently unrelated issues like portfolio theory and derivative pricing. On the former, Buckley, Saunders and Seco studied the implications for portfolio theory of assuming multidimensional Gaussian-mixture distributions for the underlying returns.11
The following figure shows contour plots of probability density functions when working with multidimensional Gaussian mixtures. The top row contains two bivariate Gaussian distributions, potentially for the tranquil (left) and distressed (right) regimes. The bottom row illustrates the composite Gaussian mixture distribution obtained by mixing the two distributions from the top row (left) and a bivariate normal distribution with the same means and variance/covariance matrix as the composite (right).
Figure 7.7 shows the investment opportunity sets for the tranquil and distressed regimes superimposed onto the same plot; the axes are the portfolio mean and variance. Typically, the optimal portfolio under the Gaussian mixture approach will be sub-optimal with respect to both the tranquil and the distressed mean-variance objectives.
On the latter, the effect on the default probabilities, and on the associated credit spreads for CFO tranches, has been studied in Ansejo et al.12 More precisely, it is shown there that the credit ratings of CFO tranches are sensitive to the correlation-breakdown probability, as summarized by Figs. 7.8 and 7.9. Figure 7.8 shows that the probabilities of default spread over a substantial range when the probability of a distressed month, 1−p (the market condition), is changed; for example, the mezzanine tranche probability of default could go from 2 to 9%. Figure 7.9 shows the sensitivities of the spread yield to the market condition parameter p, which exhibit a behavior similar to that of the probabilities of default.
There are important challenges ahead for academics and practitioners in the mathematics of risk transfer, some of which have been causing distress in financial markets everywhere since the very beginning of the century. A whole book would be the minimum required to explain the nature of such challenges in detail; here we aim at exposing them and at mentioning some of the recent work on these issues.
Exacerbating both the difficulty of proper estimation and the lack of data due to dimensionality is the richness of financial data features. One such feature, studied since the 1980s for discrete-time models and popular since the 1990s for continuous-time models, is stochastic volatility: a purely one-dimensional problem, but with enough complexity to keep generating publications for decades to come. Some of the difficulties come from the unobservable nature of the volatility, which implies not only estimating parameters but also filtering to recover the hidden process.
From the very beginning of the new century, a new breed of stochastic unobservable features has been nurtured by academics and backed up by evidence from practitioners. Among these, stochastic correlation among stock prices is currently the most popular, but notice that it involves a whole set of, roughly, n² hidden processes which require calibration and filtering. Some new stochastic features have been listed quite recently: stochastic covariation and correlation between volatilities, between stock prices and cross volatilities, and between stock prices and the correlations themselves. The next figure shows these features in the context of two well-known stock prices.
Each of these stochastic features has implications not only for risk management objectives but also for the pricing of risk-oriented derivatives such as those explained in this document. Failure to properly model stocks, and therefore to price financial products, inevitably leads to unexpected market adjustments with the corresponding chaos. This is one of the main reasons for the big losses in the credit market during the year 2007; at the core of these losses was the mismanagement of complex but popular products like CDOs and CFOs. These products depend on hundreds of companies, for which no model has been found that is both simple and capable of explaining their joint behavior.
End Notes
1. Hull, J., and White, A. (2004). Valuation of a CDO and nth to default CDS without Monte
Carlo simulation, Journal of Derivatives 12:2, 8–23.
2. Black, F., and Scholes, M.S. (1973). The pricing of options and corporate liabilities, Journal of
Political Economy, 81, 637–654.
3. Merton, R.C. (1974). On the pricing of corporate debt: The risk structure of interest rates,
Journal of Finance, 29, 449–470.
4. Black, F., and Cox, J.C. (1976). Valuing corporate securities: some effects of bond indenture
provisions, Journal of Finance, 31, 351–367.
5. Li, D.X. (2000). On default correlation: A copula approach, Journal of Fixed Income, 9, 43–
54; Laurent, J.P., and Gregory, J. (2003). Basket default swaps, CDO’s and factor copulas,
Working Paper, ISFA Actuarial School, University of Lyon.
6. Duffie, D., and Gârleanu, N. (2001). Risk and valuation of collateralized debt obligations, Financial
Analysts Journal, 57:1, 41–59.
7. Escobar, M., and Seco, L. (2006). A partial differential equation for credit derivatives pricing,
Centre de Recherches Mathematiques, 41, Winter.
8. Merton, R. (1974). On the pricing of corporate debt: the risk structure of interest rates, Journal
of Finance 29, 449–470; Black and Cox. (1976). op cit.; Giesecke, K. (2003). Default and
information, Working paper.
9. He, H., Keirstead, W., and Rebholz, J. (1998). Double lookbacks, Mathematical Finance, 8,
201–228.
10. Joe, H. (1997). Multivariate Models and Dependence Concepts, Chapman and Hall/CRC.
11. Buckley, I.R.C., Saunders, D., and Seco, L. (2008). Portfolio optimization when assets
have the Gaussian mixture distribution, European Journal of Operations Research, 185:3,
1434–1461.
12. Ansejo, U., Bergara, A., Escobar, M., and Seco, L. (2006). Correlation breakdown in the valu-
ation of collateralized debt obligations, Journal of Alternative Investments, Winter.
Chapter 8
Stable Models in Risk Management
P. Olivares
Introduction
It is a well known fact that the Gaussian assumption on market data is not supported by empirical evidence. In particular, the presence of skewness and a large kurtosis can dramatically affect the risk management analysis, especially the Value at Risk (VaR) calculation through quantile estimators.
In this context stable, generalized hyperbolic and Gaussian mixing distributions
have been used with considerable success in order to explain asymmetry and heavy
tail phenomena.
The presence of heavy tails also affects standard estimation and model testing
procedures, due to the frequent presence of “outliers,” calling for more robust
methods.
In the 1960s Mandelbrot and Fama1 applied α-stable laws to the modeling of financial data. The family of stable distributions not only describes heavy tails and asymmetric behavior; in addition, the dependence on four parameters allows more flexibility in fitting and testing stable models on empirical data.
Another nice property is that stable laws have domains of attraction, i.e., limits of sums of independent identically distributed random variables are, under mild assumptions, also stable after a suitable renormalization.
The stable distribution nevertheless has two major drawbacks: the probability density function has no explicit form except in the cases of the Cauchy and the Normal laws, so numerical methods are needed to compute it. Also, second and higher moments do not exist, which constitutes a challenge for most statistical methods.
In the next section the family of stable laws and its properties are introduced. The following section reviews some calibration and simulation methods for stable distributions. Next, a maximum likelihood approach (m.l.e.) is considered within the framework of ARMA processes driven by stable noises; asymptotic properties are studied and numerical methods are discussed. Finally, we present some simulation results for stable GARCH processes. The Value at Risk (VaR) for these stable models is calculated and compared with its Gaussian counterpart, revealing important differences between them. The procedure is also illustrated on real financial data.
In this section we introduce the stable distribution, some of its properties, different
parameterizations and simulation techniques.
A stable random variable X can be defined as follows: let a and b be two real positive numbers and let X1 and X2 be independent random variables distributed as X. Then there exist c ∈ ℝ+ and d ∈ ℝ such that aX1 + bX2 = cX + d in distribution. Equivalent characterizations are also possible.
A random variable X with stable distribution and parameters (α, β, σ, µ) is denoted by S(α, β, σ, µ).
The interpretation of the parameters is as follows: α is a tail parameter,2
β is a coefficient of skewness, σ is a scale parameter and µ is a location
parameter.
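For readers who wish to experiment with these parameters, SciPy exposes the stable family as levy_stable; a minimal sketch follows (the symmetric case β = 0 is used to sidestep parameterization subtleties, and the mapping of σ and µ onto SciPy's scale and loc arguments should be treated as indicative):

```python
import numpy as np
from scipy.stats import levy_stable

# S(alpha, beta, sigma, mu): alpha = tail index, beta = skewness,
# sigma = scale, mu = location.  Symmetric case here (beta = 0).
alpha, beta, sigma, mu = 1.5, 0.0, 1.0, 0.0

# Simulate a sample and evaluate the (numerically computed) density on a grid.
sample = levy_stable.rvs(alpha, beta, loc=mu, scale=sigma, size=10_000, random_state=0)
grid = np.linspace(-10.0, 10.0, 201)
density = levy_stable.pdf(grid, alpha, beta, loc=mu, scale=sigma)

# Heavy tails: a visible fraction of draws lands far outside the +/-10 window,
# which would be essentially impossible for a Gaussian with the same scale.
print(np.mean(np.abs(sample) > 10.0))
```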
The tail property is expressed, for α ∈ (0, 2), as
$$P(X > x) \approx K_{\alpha}\,(1+\beta)\,\sigma^{\alpha}\,x^{-\alpha} \quad \text{for large } x. \tag{1}$$
Fig. 8.1 The continuous line shows an empirical stable density with α = 1.5, obtained from simulated data using Weron's technique; the dashed line shows the approximate density function obtained using Nolan's approach
Tail estimation methods estimate the tail index α by using information about the behavior of extreme data.
A simple approach is to consider, taking (1) into account, the regression equation
$$\log P(X > x) = \log\!\bigl(K_{\alpha}(1+\beta)\sigma^{\alpha}\bigr) - \alpha \log x \tag{2}$$
for x large enough. The slope α is then estimated using a standard least squares technique. Expression (1) is true only for large values of x, hence, in practice, it is difficult to assess whether we are in the tail of the distribution; moreover, this depends on the value of the unknown parameter α. On the other hand, if we go farther into the tail, fewer points are available for the estimates. In this sense, empirical studies suggest starting from the 90% quantile. In simulation studies the method is reported to overestimate α for values larger than 1.5, especially when the data exhibit asymmetric behavior. The Hill estimator, which is based on the differences between logarithms of the order statistics, is also considered.6 Its asymptotic confidence interval is
known. A critical issue is the choice of the window size k. It is a compromise
between the position in the extreme of the tail and the variance of the estimator.
Indeed, the window size needs to be small enough to capture the tail position and
large enough to control the variance. Numerical studies report that large sample sizes are needed to achieve accurate results.
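To make these tail-based estimators concrete, the following Python sketch (not part of the original chapter) estimates α both by least squares on the empirical log-survival function and by the Hill estimator; the simulated sample, the 90% starting quantile and the window size k = 200 are illustrative choices.

import numpy as np
from scipy.stats import levy_stable

# Illustrative sample from a symmetric stable law (alpha = 1.5).
rng = np.random.default_rng(0)
x = levy_stable.rvs(1.5, 0.0, size=5000, random_state=rng)

# (a) Tail regression: log P(X > x) is roughly linear in log x with slope -alpha
#     for large x; start from the 90% quantile.
tail = np.sort(x[x > np.quantile(x, 0.90)])
surv = 1.0 - (np.arange(1, len(tail) + 1) - 0.5) / len(tail)   # empirical survival
slope, _ = np.polyfit(np.log(tail), np.log(surv), 1)
alpha_reg = -slope

# (b) Hill estimator based on the k largest order statistics.
k = 200
order = np.sort(x)[::-1]                       # descending order statistics
hill = np.mean(np.log(order[:k] / order[k]))   # average log-excess over the threshold
alpha_hill = 1.0 / hill

print(f"alpha (tail regression) = {alpha_reg:.2f}, alpha (Hill, k={k}) = {alpha_hill:.2f}")

In practice one would inspect the Hill estimate over a range of window sizes k before settling on a value, in line with the trade-off described above.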
Quantile Methods
The method is based on quantile estimation. The main idea is to use differences of quantiles, properly normalized in order to get rid of the dependence on the location and scale parameters. Two functions of α and β are then numerically calculated from the sample quantile values and inverted to obtain the corresponding parameter estimates.
An interpolation algorithm allows one to obtain more precise functional values. The idea goes back to McCulloch (1986).7
A critical point here is the procedure used to calculate the inverse function from the index set into the parametric space. Tables are available to this end.8 Proceeding by bilinear interpolation, the estimates are obtained. A simpler alternative to the DuMouchel tables is to construct a grid of 100 × 100 points with the values of the indices.
Once the sample index is calculated, the nearest tabulated index is taken and its corresponding parameter is chosen. For more precision, a more sophisticated inversion method is implemented, which consists of finding the solution by moving through segments of the grid of points (α, β). Precise tabulated values of ν require a large amount of computation, though this computation is performed only once.
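As an illustration of the quantile approach, the sketch below computes the two standard quantile indices that depend only on α and β; mapping them back to parameter estimates via the tabulated values and bilinear interpolation described above is omitted. The function name and the use of numpy are choices of this sketch, not the chapter's implementation.

import numpy as np

def mcculloch_indices(x):
    """Quantile-based indices used in McCulloch-type estimation.

    The indices depend only on alpha and beta (location and scale cancel);
    inverting them to (alpha, beta) requires tabulated values and
    interpolation, which this sketch omits.
    """
    q05, q25, q50, q75, q95 = np.quantile(x, [0.05, 0.25, 0.50, 0.75, 0.95])
    nu_alpha = (q95 - q05) / (q75 - q25)              # spread ratio, decreasing in alpha
    nu_beta = (q95 + q05 - 2.0 * q50) / (q95 - q05)   # standardized skewness measure
    return nu_alpha, nu_beta

# Usage: nu_a, nu_b = mcculloch_indices(data), followed by the table inversion step.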
Method of L-Moments

Maximum Likelihood Estimation

Classical m.l.e. has long been implemented for stable distributions.12 The main difficulty in the estimation is that a closed form of the density is unknown. The probability density function (p.d.f.) can be approximated by inverting the characteristic function via the Fast Fourier Transform. Another related method relies on Zolotarev's integral representation. Once the p.d.f. is calculated on a grid, a quasi-Newton method is implemented to maximize the likelihood.
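The density approximation by inversion of the characteristic function can be sketched as follows. This is a minimal Python illustration using direct numerical quadrature of the inversion integral (an FFT over a grid of x values, as mentioned above, is the faster variant); the standard S(α, β, σ, µ) form of the stable CF for α ≠ 1 and the truncation point are assumptions of the sketch.

import numpy as np

def stable_cf(t, alpha, beta=0.0, sigma=1.0, mu=0.0):
    # Characteristic function of S(alpha, beta, sigma, mu), alpha != 1.
    return np.exp(1j * mu * t
                  - (sigma * np.abs(t)) ** alpha
                  * (1.0 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2.0)))

def stable_pdf(x, alpha, beta=0.0, sigma=1.0, mu=0.0, t_max=50.0, n=20000):
    # Fourier inversion: f(x) = (1/pi) * int_0^inf Re[exp(-i t x) phi(t)] dt,
    # truncated at t_max and evaluated by a simple Riemann sum.
    t = np.linspace(1e-10, t_max, n)
    dt = t[1] - t[0]
    integrand = np.real(np.exp(-1j * t * x) * stable_cf(t, alpha, beta, sigma, mu))
    return integrand.sum() * dt / np.pi

# Example: density of a symmetric 1.5-stable law at the origin.
print(stable_pdf(0.0, alpha=1.5))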
An alternative to ascent methods is the Markov chain Monte Carlo (MCMC) simulated annealing approach. The main idea is to construct a grid on the parametric space and find the maximum by moving through neighboring points of the grid. The dynamics of moving from one point to another are as follows: starting from any point, one of its neighbors is chosen at random with equal probability and, if the likelihood evaluated at this point is greater than at the previous one, the system moves to it with a certain probability. Repeating the process, a reversible Markov chain is constructed whose stationary probability law is the desired one. This can be done using the Metropolis–Hastings algorithm. It turns out that the limiting probability law depends on a parameter called the temperature. In order to ensure that the limiting law puts mass only on the optimum points, the temperature is raised slowly to infinity. The maximum of the log-likelihood is then calculated over the set of points of the grid in the parametric space.
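A minimal sketch of such a grid-based annealing search is given below; the neighborhood moves, the cooling schedule (written here in the usual maximization convention) and the generic log-likelihood callable are illustrative choices of this sketch rather than the chapter's exact implementation.

import numpy as np

def anneal_on_grid(loglik, alphas, betas, n_iter=5000, t0=1.0, seed=0):
    """Maximize loglik(alpha, beta) over a rectangular grid by moving
    between neighboring grid points with Metropolis-type acceptance."""
    rng = np.random.default_rng(seed)
    i, j = len(alphas) // 2, len(betas) // 2            # start in the middle of the grid
    current = loglik(alphas[i], betas[j])
    best = (i, j, current)
    for it in range(1, n_iter + 1):
        temp = t0 / np.log(it + 1.0)                    # slow cooling schedule
        di, dj = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
        ni = min(max(i + di, 0), len(alphas) - 1)
        nj = min(max(j + dj, 0), len(betas) - 1)
        cand = loglik(alphas[ni], betas[nj])
        # Accept improvements always, deteriorations with probability exp(delta/temp).
        if cand >= current or rng.random() < np.exp((cand - current) / temp):
            i, j, current = ni, nj, cand
            if current > best[2]:
                best = (i, j, current)
    return alphas[best[0]], betas[best[1]], best[2]

The function expects any callable loglik(alpha, beta), for example a numerically evaluated stable log-likelihood on a fixed data set.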
Empirical Characteristic Function Methods

The main idea is to minimize the distance between the characteristic function (CF) and the empirical characteristic function (ECF) in an appropriate norm. While the minimization procedure implies a lot of calculation, some simpler variants, exploiting particular relations derived from the CF of stable laws, have been used. By the Law of Large Numbers, the ECF is a consistent estimator of the theoretical CF.
The method finds the minimum of the difference between both functions over the parametric space in a given weighted norm. The optimal selection of the discrete points t1, t2, …, tp has been discussed.13 A weighting function W(t) with density w(t) with respect to the Lebesgue measure, typically an exponential law, is selected. Another advantage of ECF methods is that they can be extended to non-i.i.d. cases, particularly to dynamic models with heteroscedastic volatility, by considering a multivariate or conditional CF instead. Asymptotic properties such as consistency and normality still hold in this general case.
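The ECF approach can be sketched as follows: the empirical CF is matched to the explicit stable CF at a small set of points t with exponential weights and a standard optimizer. The t-grid, weights, starting values and bounds (the lower bound on α avoids the α = 1 case of this CF form) are illustrative choices of this sketch.

import numpy as np
from scipy.optimize import minimize

def stable_cf(t, alpha, beta, sigma, mu):
    # Characteristic function of S(alpha, beta, sigma, mu) for alpha != 1.
    return np.exp(1j * mu * t - (sigma * np.abs(t)) ** alpha
                  * (1.0 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2.0)))

def fit_stable_ecf(x, t_points=None):
    t = np.linspace(0.1, 1.0, 10) if t_points is None else t_points
    w = np.exp(-t)                                        # exponential weighting function
    phi_hat = np.exp(1j * np.outer(t, x)).mean(axis=1)    # empirical CF at the points t

    def objective(p):
        alpha, beta, sigma, mu = p
        return np.sum(w * np.abs(phi_hat - stable_cf(t, alpha, beta, sigma, mu)) ** 2)

    res = minimize(objective, x0=[1.5, 0.0, 1.0, 0.0], method="L-BFGS-B",
                   bounds=[(1.1, 2.0), (-1.0, 1.0), (1e-3, None), (None, None)])
    return res.x   # estimated (alpha, beta, sigma, mu)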
Regression Method

The regression method14 estimates α and σ by fitting a first linear regression based on the CF, and then β and µ from the first ones by fitting a second linear regression. The first adjustment can be repeated a number of times to achieve better precision in the estimation. We applied a variant consisting of a recursive estimation of the parameters: once an estimation set is obtained, the data are standardized by subtracting the location parameter and dividing by the scale parameter. A first equation is obtained from the general expression of the stable CF.
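A sketch of the first regression step follows: for a stable law |φ(t)|² = exp(−2σ^α |t|^α), so log(−log|φ̂(t)|²) is linear in log|t| with slope α, and σ is recovered from the intercept. The t-grid is an illustrative choice of the sketch.

import numpy as np

def koutrouvelis_first_step(x, t=None):
    # |phi(t)|^2 = exp(-2 sigma^alpha |t|^alpha) for a stable law, hence
    # log(-log|phi_hat(t)|^2) = log(2 sigma^alpha) + alpha * log|t|.
    t = np.linspace(0.1, 1.0, 15) if t is None else t
    phi_hat = np.exp(1j * np.outer(t, x)).mean(axis=1)
    y = np.log(-np.log(np.abs(phi_hat) ** 2))
    alpha_hat, intercept = np.polyfit(np.log(t), y, 1)
    sigma_hat = (np.exp(intercept) / 2.0) ** (1.0 / alpha_hat)
    return alpha_hat, sigma_hat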
We now consider an ARMA process with stable noises. To simplify, let us first study an autoregressive process of order one (AR(1)) given by
X_t = a X_{t−1} + σ ε_t,    (3)
where (ε_t) are independent random variables with symmetric stable distribution S(α, 0, 1, 0). Its density is denoted by f_α. The likelihood function based on observations X_1, X_2, …, X_n, and assuming that the initial distribution does not depend on the parameter, is given by
L_n(a) = ∏_{k=2}^{n} f(X_k | X_{k−1}),
where f(X_k | X_{k−1}) = f_α((X_k − a X_{k−1})/σ)/σ is the conditional density.
First, we give some technical results about the uniform control of the density and its derivatives; their proofs have been given elsewhere.15
Lemma 1. For every x ∈ R,
(i) sup_{a ∈ [a_m, a_M]} | log f_a(x) | ≤ h_1(x),
(ii) sup_{a ∈ [a_m, a_M]} | ∂² log f_a(x) / ∂a² | ≤ h_2(x).
Weron's algorithm16 is used to generate stable random numbers and then stable ARMA data. In Fig. 8.2 some simulation results for given parameters are included. We use previously calculated values of the density for different parameter values, and bilinear interpolation is then applied to obtain the points needed in the optimization procedure. In this way we save a lot of computation time.
The maximization of the likelihood is implemented using a sequential quadratic quasi-Newton technique. The Hill estimator is used as the initial approximation.
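The simulation and estimation loop can be sketched as follows, using scipy's numerical stable density in place of the precomputed, interpolated density values and Weron's generator used in the chapter. The parameter values, the sample size and the scalar optimizer are illustrative, and the numerical stable density makes the evaluation slow.

import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import minimize_scalar

alpha, sigma, a_true, n = 1.5, 0.6, 0.9, 300
rng = np.random.default_rng(1)

# Simulate X_t = a X_{t-1} + sigma * eps_t with eps_t ~ S(alpha, 0, 1, 0).
eps = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + sigma * eps[t]

def neg_cond_loglik(a):
    # Conditional log-likelihood of X_2,...,X_n given X_1 (alpha, sigma treated as known).
    resid = (x[1:] - a * x[:-1]) / sigma
    return -np.sum(levy_stable.logpdf(resid, alpha, 0.0)) + (n - 1) * np.log(sigma)

res = minimize_scalar(neg_cond_loglik, bounds=(-0.999, 0.999), method="bounded")
print("estimated a:", res.x)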
We perform a simulation study with sample sizes 250, 500 and 1,000, different parameter sets (α ∈ {1.5, 1.7, 1.9}; σ ∈ {0.02, 1}; a ∈ {0.3, 1}) and 60 repetitions for every trajectory.
Fig. 8.2 A simulated trajectory of an AR(1) stable process with parameters α = 1.5, σ = 0.6, µ = β = 0 and a = 0.9
After the simulations, we calculate the mean and the standard deviation of the estimates and compare them with the original values. The bias and the standard deviation go to zero as the sample size increases, in accordance with Theorem 1.
The standard deviation is calculated for different sample sizes using an approximation of the Fisher information matrix.
We also compute the VaR for an autoregressive model of order one when stable and Gaussian noises are considered, and we compare them with empirical simulation data from a large number of observations. The results are given in Table 1.5. They show the risk of using Gaussian autoregressive models instead of stable ones for the VaR at the 5% and 10% levels. Similar results have been obtained for the independent and identically distributed case.
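The kind of comparison reported here can be sketched in a few lines: simulate heavy-tailed (stable) returns, then compare the empirical VaR with the VaR implied by a Gaussian model fitted by mean and standard deviation. The sample size, scale and tail index are illustrative, and the instability of the sample standard deviation under infinite variance is precisely part of the message.

import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(2)
returns = 0.01 * levy_stable.rvs(1.7, 0.0, size=20000, random_state=rng)

for level in (0.05, 0.10):
    var_emp = -np.quantile(returns, level)                          # empirical VaR
    var_norm = -(returns.mean() + returns.std() * norm.ppf(level))  # Gaussian VaR
    print(f"level {level:.0%}: empirical VaR = {var_emp:.4f}, "
          f"Gaussian VaR = {var_norm:.4f}")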
We also consider GARCH-type models driven by stable noises, under the condition ∑_i (a_i + b_i) < 1. The log-likelihood is
l_n(θ) = −∑_t log σ_t + ∑_t log f_ε( X_t / σ_t ).
From simulated stable GARCH(1,1) data with parameters α = 1.5, c0 = 0, k = 0.2, a1 = 0.1, b1 = 0.6 and sample sizes 250, 500, 1,000, 2,000 and 10,000, m.l.e. are obtained. The true parameters are recovered; moreover, the standard deviation of the estimators decreases as the sample size increases. The results can be seen in Fig. 8.3.
A comparison between the VaR under normal and stable noises is presented in Table 8.1 for parameters c0 = 0, k = 0.13, a1 = 0.08 and b1 = 0.57 with four different sample sizes. The results illustrate the danger of using an incorrect model from a risk management perspective. The parametric VaR under normal and stable laws differs considerably from the historical one generated from a stable GARCH(1,1) model.
Fig. 8.3 Standard deviation of the estimators for α = 1.5, c0 = 0, k = 0.2, a1 = 0.1 and b1 = 0.6 and several sample sizes
A statistical test rejects the hypothesis of normality of the returns. Another test regarding the variance rejects the hypothesis of homoscedasticity. For the Sterling and Canadian exchange rates, among others, the fitted models are respectively:
X_t = −0.0001 + σ_t ε_t,    σ_t² = 0.000002 + 0.023727 X_{t−1}² + 0.898493 σ_{t−1}²
X_t = −0.0002 + σ_t ε_t,    σ_t² = 0.000003 + 0.063388 X_{t−1}² + 0.904731 σ_{t−1}²
X_t = 0.00007 + σ_t ε_t,    σ_t² = 0.00005 + 0.16711 X_{t−1}² + 0.79553 σ_{t−1}²
Table 8.2 Value at risk under a normal GARCH and a stable GARCH for the daily Dow Jones index over the period 1996–2006
VaR             1%       5%       10%
Empirical       0.0203   0.0114   0.0082
GARCH           0.0435   0.0145   0.0100
GARCH normal    0.2357   0.0177   0.0120
End Notes
1. Mandelbrot, B.B. (1963). The variation of certain speculative prices. Journal of Business 36, 394–419; Fama, E., and Roll, R. (1971). Parameter estimates for symmetric stable distributions. Journal of the American Statistical Association 66, 331–339.
2. Samorodnitsky, G., and Taqqu, M.S. (1994). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman and Hall, London.
3. Zolotarev, V.M. (1986). On representation of stable laws by integrals. Selected Translations in Mathematical Statistics and Probability 6, 84–88.
4. Weron, R. (1996). On the Chambers–Mallows–Stuck method for simulating skewed stable random variables. Statistics and Probability Letters 28, 165–171.
5. Alvarez, A., and Olivares, P. (2005). Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique 146:4.
6. Hill, B.M. (1975). A simple general approach to inference about the tail of a distribution. Annals of Statistics 3:5, 1163–1174.
7. McCulloch, J.H. (1986). Simple consistent estimators of stable distribution parameters. Communications in Statistics – Simulation and Computation 15, 1109–1136.
8. DuMouchel, W.H. (1971). Stable Distributions in Statistical Inference. Ph.D. thesis, University of Ann Arbor, Ann Arbor, MI.
9. Hosking, J.R.M. (1990). L-moments: Analysis and estimation of distributions using linear combinations of order statistics. Journal of the Royal Statistical Society B 52, 105–124.
10. Maussel, H. (2001). Calculating quantile based risk analytics with L-estimators. Algo Research Quarterly 4:4, 45–62.
11. Carrillo, S., Escobar, M., Hernandez, N., Olivares, P., and Seco, L. (2007). A theoretical comparison between moments and L-moments. Working paper.
12. DuMouchel, W.H. (1973). On the asymptotic normality of the maximum likelihood estimate when sampling from a stable distribution. Annals of Statistics 1, 948–957.
13. Carrasco, M., and Florens, J. (2000). Generalization of GMM to a continuum of moment conditions. Econometric Theory 16, 767–834.
14. Koutrouvelis, I.A. (1980). Regression-type estimation of the parameters of stable laws. Journal of the American Statistical Association 75, 918–928.
15. Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31, 307–327.
16. Weron, R. (1996). op. cit.
17. Engle, R.F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica 50, 987–1008.
Chapter 9
Hybrid Calibration Procedures for Term
Structure Models
T. Schmidt
Introduction
Preliminaries
This section follows Schmidt (2007).4 We state the necessary results and give references for the proofs. We generalize the approach of Kennedy (1994) to credit risk. On one hand, the case of Gaussian random fields can be considered as a special case of the more general work in Schmidt (2006).5 On the other hand, this special case allows one to compute the drift conditions directly, without the need to consider stochastic differential equations on Hilbert spaces. The considered market contains riskless bonds denoted by B(t, T) and bonds issued by a company with default risk, denoted by B̄(t, T). (r_t) denotes the risk-free spot rate. We consider a finite time horizon T* and a maximum time-to-maturity T**.
The objective measure is denoted by P. Consider a measure Q which is equivalent to P. Our aim is to give conditions under which Q is also a martingale measure, so that the considered model is free of arbitrage. The dynamics of bonds subject to credit risk depend on two factors besides the risk-free interest rate. First, the creditworthiness of the bonds plays an important role; creditworthiness is represented by the probability of default or, equivalently, the default intensity. The second component is the price of the bond after default, called the recovery.
It is possible to consider different types of recoveries in this framework, but for ease of exposition we concentrate on fractional recovery of the par value. In this approach a bond may face several so-called credit events in its lifetime. Each credit event refers to a reduction of the face value and hence implies a downward jump of the bond price. To this end, we assume that the bond price itself is given in terms of forward rates, i.e.,

B̄(t, T) = ∏_{t_i ≤ t} (1 − L_{t_i}) exp( −∫_t^T f̄(t, u) du ),

where the loss process L takes values in (0, 1) and the times at which credit events occur, 0 < t_1 < t_2 < …, are the jump times of a Cox process with intensity (λ_t)_{t ≥ 0}. The intensity is assumed to be a nonnegative G-adapted process with ∫_0^{T*} λ_t dt < ∞ a.s., where the filtration G is given by

G_t := σ( B(s, T), X(s, T) : 0 ≤ s ≤ t, T ∈ [s, s + T**] ),    (1)
F_t := G_t ∨ σ( 1_{τ ≤ s} : 0 ≤ s ≤ t ).

A1 The defaultable forward rates are given by

f̄(t, T) := m̄(t, T) + X̄(t, T),    (2)

where m̄ is a deterministic function and (X̄(s, t))_{s,t ∈ [0, T̃]} is a zero-mean, continuous Gaussian random field with covariance function c̄(s_1 ∧ s_2, t_1, t_2).
In practice, this information is only available for a discrete tenor structure T1,.., Tn,
which is a basic motivation to consider market models. On the other hand, one can
either interpolate those using splines or some parametric families,7 or view the discrete
observations as partial information of the whole, but unknown term structure.
We take this last viewpoint and model the whole term structure. Later on, in the
calibration process, we account for the discrete observations by an approximation
argument.
The following result states the drift condition under which the market is free of arbitrage: if Assumption (A1) holds, then Q is an equivalent martingale measure iff for all t ∈ [0, T*]

f̄(t, t) = r_t + λ_t L_t.    (3)

A number of interesting special cases exist in the literature. For example, the Vasicek8 model is a special case. This is also the case for the intuitive four-factor implementation proposed in Schmid, Zagst and Antes.9
The main ingredients for efficient calibration procedures are pricing formulas that allow a fast implementation. In this section we provide numerous pricing formulas, all of which are explicit, so that the implementation is extremely fast. Proofs are available from the author on request.
Default Digitals
A basic derivative of an underlying which faces credit risk is the default digital put.
It promises a fixed payoff, say 1, if a default occurred before maturity, and zero
otherwise. We focus on the derivative where the payoff is settled at maturity.
It may be recalled that the default digital put with payoff at maturity is intrinsically related to the zero recovery bond, as

p^d(t, T) + B^0(t, T) = B(t, T).
A2 Assume that both risk-free and defaultable forward rates admit a representation via Gaussian random fields. For the defaultable forward rates this is specified in Assumption (A1), and we assume a similar structure for the risk-free bonds, with (X(s, t))_{s,t ∈ [0, T̃]} being a zero-mean, continuous Gaussian random field with covariance function c(s_1 ∧ s_2, t_1, t_2) and c(0, t_1, t_2) = 0. Furthermore, assume that the drift conditions as well as (3) are satisfied and the loss function (L_t) is deterministic. Besides this, we assume joint independent increments of X and X̄.
If Assumption (A2) holds, the market is free of arbitrage. Furthermore, we deduce from (3) that

λ_t = ( f̄(t, t) − f(t, t) ) / L_t.    (5)
Instead of defining the dynamics of f and λ and then deriving f̄, we want to propose the dynamics of f and f̄ and investigate the consequences for λ. This reflects the fact that λ is not observable in the market, while the forward rates are. Therefore, we use (5) as a starting point for this section.
A first consequence of this approach is that, because L is deterministic, λ turns out to be a Gaussian random field. The assumption that the recovery rate is deterministic is often used in practice but has serious drawbacks. However, random recovery can easily be introduced in the presented framework if it is assumed to be independent of the other processes.
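Relation (5) lends itself to a direct numerical illustration: given discretized risk-free and defaultable forward curves and a deterministic loss rate, the implied intensity is the rescaled forward spread. The curves, the grid and the loss rate in the following Python sketch are illustrative, and the intensity curve is treated as deterministic when the survival probability is computed.

import numpy as np

# Illustrative discretized curves on a quarterly grid (assumed inputs).
t = np.arange(0.0, 5.0, 0.25)
f_riskfree = 0.03 + 0.002 * t          # f(t, t): risk-free instantaneous forward
f_default = 0.05 + 0.003 * t           # f_bar(t, t): defaultable forward
loss = np.full_like(t, 0.6)            # deterministic loss rate L_t

# Relation (5): lambda_t = (f_bar(t, t) - f(t, t)) / L_t.
intensity = (f_default - f_riskfree) / loss

# Survival probability of the Cox process, P(tau > T) = E exp(-int_0^T lambda_u du);
# with the intensity curve treated as deterministic this is the exponential of the
# trapezoidal integral of the intensity.
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (intensity[1:] + intensity[:-1]) * np.diff(t))))
survival = np.exp(-cum)
print(survival[-1])   # survival probability to the end of the grid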
For ease of notation we write u instead of (u, u), and similarly m_u, m̄_u, X_u and X̄_u, and consider t = 0 as the current time.
We will need a measure for the correlation between the risk-free and the defaultable rate. To this end, define

ζ(s, t_1, t_2) := Cov( X(s, t_1), X̄(s, t_2) ).

Note that ζ(s, t_1, t_2) is not necessarily symmetric in t_1 and t_2. Furthermore, the assumption of joint independent increments immediately yields Cov( X(s_1, t_1), X̄(s_2, t_2) ) = ζ(s_1 ∧ s_2, t_1, t_2).
Proposition 1. Under (A2), the price of the zero recovery bond equals

B^0(t, T) = 1_{τ > t} B(t, T) exp{ −∫_t^T (1/L_u)( f̄(t, u) − f(t, u) ) du
  − ∫_t^T ∫_0^t [ λ_u ( c(v, v, u) − c(t, v, u) ) + (1/L_u)( c̄(v, v, u) − c̄(t, v, u) ) ] dv du
  + (1/2) ∫_t^T ∫_t^T [ λ_u λ_v ( c(u ∧ v, u, v) − c(t, v, u) ) + (2λ_u/L_v)( ζ(u ∧ v, v, u) − ζ(t, v, u) )
  + (1/L_u²)( c̄(u ∧ v, u, v) − c̄(t, v, u) ) ] dv du }.    (7)
Collecting the deterministic covariance terms into a factor g(t, T), this can be written as

B^0(t, T) = 1_{τ > t} B(t, T) exp[ −∫_t^T (1/L_u)( f̄(t, u) − f(t, u) ) du ] g(t, T)
          = 1_{τ > t} B(t, T)^{1 − 1/L} B̄(t, T)^{1/L} g(t, T),    (8)

where the last equality holds for a constant loss rate L.
Remark 2. If the price of the zero recovery bond is available, the following formula allows one to calibrate the loss rate. Denoting the forward rate of the zero recovery bond by f^0, we have

f̄(t, t) = r_t + λ_t L_t = r_t + ( f^0(t, t) − f(t, t) ) L_t
⇔ L_t = ( f̄(t, t) − f(t, t) ) / ( f^0(t, t) − f(t, t) ).
Default Put
It is also possible to price a default put with knock-out feature. The put is
knocked out if a default occurs before maturity of the contract, which means
that the promised payoff is paid only if there was no default until maturity of
the contract. Hence this put protects against market risk but not against the loss
in case of a default.
For the conditional expectation w.r.t. F_t we simply write E_t. Denoting the price of a (knock-out) default put with maturity T on a defaultable bond with maturity T′ by P^k(t, T, T′), the risk-neutral valuation principle yields, for 0 ≤ T ≤ T′ ≤ T̃,

P^k(t, T, T′) = E_t[ exp( −∫_t^T r_u du ) ( K − B̄(T, T′) )^+ 1_{τ > T} ],

which, under (A2), equals

P^k(t, T, T′) = B^0(t, T) K Φ(−d_2) − B^k(t, T, T′) Φ(−d_1).    (9)
Note that the price of a put without knock-out can be obtained using similar methods. The price of the knock-out bond equals

B^k(t, T, T′) = B^0(t, T) e^{ σ̃(t, T, T′)/2 − m̃(t, T, T′) } = B^0(t, T) ( B̄(t, T′) / B̄(t, T) ) g^k(t, T, T′).    (10)
The pricing of credit spread options can be done in a more or less similar fashion. To ease the notational burden, we consider the derivatives prices at time t = 0. A credit spread call with strike K offers the right to buy the underlying, i.e., the defaultable bond, at maturity for a price which corresponds to a yield spread K above the yield of an equivalent risk-free bond, where T denotes the maturity of the credit spread call and T′ the maturity of the underlying defaultable bond B̄. Typically these securities are traded with a knock-out feature, so that the derivative has zero value after default. Hence such credit derivatives protect against spread-widening risk, but not against default risk.
Proposition 3. Under assumption (A2), the price of the (knock-out) credit spread call with maturity T ∈ [0, T*] on a defaultable bond with maturity T′ ∈ [T, T*] is given in terms of the following deterministic quantities:

m_1 := −∫_T^{T′} [ m̄(0, u) − m(0, u) + ∫_0^u ( c̄(v ∧ T, v, u) − c(v ∧ T, v, u) ) dv ] du,

σ_1 := ∫_T^{T′} ∫_T^{T′} [ c̄(u ∧ v, u, v) − ζ(T, u, v) − ζ(T, v, u) + c(u ∧ v, u, v) ] dv du,

σ_2 := ∫_0^{T′} ∫_0^{T′} 1_1(u, T) 1_1(v, T) c(u ∧ v, u, v) dv du + ∫_0^T ∫_0^T c̄(u ∧ v, u, v) / (L_u L_v) dv du
      + 2 ∫_0^{T′} ∫_0^T ( 1_1(u, T) / L_v ) ζ(u ∧ v, v, u) dv du,

ρ := ∫_0^{T′} ∫_T^{T′} 1_1(u, T) [ ζ(u ∧ T, v, u) − c(u ∧ T, v, u) ] dv du
      + ∫_0^T ∫_T^{T′} (1 / L_u) [ c̄(u ∧ T, u, v) − ζ(u ∧ T, u, v) ] dv du,

d_2 := (m_1 − ln K) / σ_1 + ρ σ_2,    d_1 := d_2 + σ_1,

1_1(u, T) := 1_{u ≤ T} λ_u + 1_{u > T}.
The credit default swap spread at time T is given by

S(T) = ( B(T, T_n) − B̄(T, T_n) ) / ∑_{i=1}^{n} B^0(T, T_i).
The pricing of the credit default swap mainly relies on the pricing of the zero
recovery bond. Therefore, Proposition 1 immediately leads to a price of the credit
default swap and we obtain the following price of the CDS call
CS^k(0, T, T′) = E[ exp( −∫_0^T r_u du ) ( B(T, T_n) − B̄(T, T_n) − K ∑_{i=1}^{n} B^0(T, T_i) )^+ 1_{τ > T} ]
 = E[ exp( −∫_0^T (r_u + λ_u) du ) ( B(T, T_n) − exp( −∫_T^{T_n} f̄(T, u) du ) − K ∑_{i=1}^{n} exp( −∫_T^{T_i} f^0(T, u) du ) )^+ ].    (11)
Usually the final repayment, represented by B̄(T, T_n), dominates the coupon payments. This justifies the following assumption.
A3 For the considered maturity T ∈ [0, T**] and the tenor structure T < T_1 < … < T_n ≤ T̃, assume that the random variable

exp( −∫_T^{T_n} f̄(T, u) du ) + K ∑_{i=1}^{n} exp( −∫_T^{T_i} f^0(T, u) du )    (12)

can be approximated by a lognormal random variable.
Recall that m̃ and σ̃² have been computed in Lemma A.6. The following result gives the price of a call on a credit default swap which is knocked out at default.
Proposition 4. Under assumptions (A2) and (A3) the price of a call on a credit default swap with knock-out equals
SC(0, T, T_n) = BC(0, T, T_n) Φ(−d_2) − [ B^k(0, T, T_n) + K ∑_{i=1}^{n} B^0(0, T_i) ] Φ(−d_1),

with deterministic

m := m̃ + ln( B(0, T) / B(0, T_n) ) + ∫_T^{T_n} ∫_0^u c(v, u, v) dv du,

σ_1 := ln( σ̃² / m̃² + 1 ) + ∫_T^{T_n} ∫_T^{T_n} c̄(T, u, v) du dv − ( m̃ + σ̃²/2 )
      + ln[ ( B(0, T_n) / B(0, T) ) exp( −∫_T^{T_n} ∫_0^T c̄(v, u, v) dv du − ∫_T^{T_n} ∫_T^{T_n} ζ(T, u, v) dv du )
      + K ∑_{i=1}^{n} ( B^0(0, T_i) / B^0(0, T) ) exp( −∫_T^{T_i} ∫_0^T c^0(v, u, v) dv du
        − ∫_T^{T_i} ∫_T^{T_n} ( λ_T c(T, u, v) + ζ(T, u, v)/L_T ) dv du ) ],

σ_2 := ∫_0^{T_n} ∫_0^{T_n} 1_2(u, T) 1_2(v, T) c(u ∧ v, u, v) dv du + ∫_0^T ∫_0^T c̄(u ∧ v, u, v) / (L_u L_v) dv du
      + 2 ∫_0^{T_n} ∫_0^T ( 1_2(u, T) / L_v ) ζ(u ∧ v, v, u) dv du,

d_2 := (m − ln K) / σ_1 + ρ σ_2,    d_1 := d_2 + σ_1,

1_2(u, T) := −1_{u ≤ T} λ_u + 1_{u > T},

ρ := Cov[ ln( B̄(T, T_1, …, T_n) / B̄(T, T_n) ), −∫_0^T (r_u + λ_u) du − ln B̄(T, T_n) ].
If the swap is assumed to pay the “difference to par” on default, pricing formulas
are obtained in a similar way.
Remark 3. It is interesting that the above formulas immediately lead to hedging strategies for knock-out derivatives. We refer to Schmidt (2007) and Schmidt (2003) for full details.
The main goal of this section is to discuss a number of hybrid calibration procedures, beginning with a procedure based on Gaussian random fields and the formulas obtained above.
Fig. 9.1 Estimated covariance functions for Greek Treasury data (maturities of 3 to 24 years) for the periods Jun–Aug 2001, Jun–Aug 2002 and Mar–May 2003; the corresponding eigenvectors are given in Fig. 9.2
The calibration is based on a least-squares fit between the calculated prices and market prices. In this procedure, calculating model prices is done in two steps. First, determine c̄(s, t1, t2) and V(s, t1, t2) on the basis of ḡ(u, v) and g(u, v) for u, v ∈ {u1, …, um}, t1, t2 ∈ {T1, …, Tn} and every considered data time s ∈ {s1, …, sp}. In the second step, the prices of the considered derivatives are computed using the c̄(s, t1, t2) and V(s, t1, t2) determined in the first step.
Implementation
Consider the covariance function c̄. Then c̄ can be decomposed into

c̄(t_1, t_2) = ∑_{k ∈ N} λ_k e_k(t_1) e_k(t_2),

using any orthonormal basis {e_k : k ∈ N} of L²(µ), the Hilbert space of functions f: R → R which are square integrable w.r.t. a suitable measure µ. We are free to choose µ, which allows putting different weights onto different maturities, as suggested in Filipović (2001).13
Note that in order to determine the covariance function, one has to specify both the {e_k : k ∈ N} and the {λ_k : k ∈ N}. The idea is to retain the shape of the estimated covariance function by taking fixed eigenvectors and calibrating only the eigenvalues so as to obtain a good fit. The eigenvectors are obtained from a principal component analysis.
The first step is to estimate the eigenvectors using a set of historical data. Consider
a small time interval, so that stationarity of the considered random fields in this time
interval may be assumed. The historical data consist of observations of f̄(s, t) at a
set of time points T' : = {(si, tj) : 1 ≤ i ≤ n1, 1 ≤ j ≤ n2}. Hall et al. (1994) propose a
covariance estimator based on kernel methods in the case of real valued and stationary
processes.14 In the following, we apply their methodology to the random field case.
For points a, b ∈ [0, T*] × [0, T**] we define the covariance estimator by

ĉ(a, b) := [ ∑_{c_i, d_j ∈ T′} K( (a − c_i)/h, (b − d_j)/h ) · ( X(c_i) − X̄ )( X(d_j) − X̄ ) ] / [ ∑_{c_i, d_j ∈ T′} K( (a − c_i)/h, (b − d_j)/h ) ],
where K(c, d) is a symmetric kernel and X̄ denotes the sample mean of the observations. Observe that the sum is over all time points in T′, labeled c_i and d_j, respectively. Estimation of the covariance function c̄(s, t_1, t_2) is thus obtained by considering a_1 = b_1 = s.
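A minimal implementation of this estimator with a Gaussian product kernel might look as follows; the bandwidth h and the centering by the overall sample mean are illustrative choices of the sketch.

import numpy as np

def kernel_cov(obs_points, obs_values, a, b, h=0.5):
    """Kernel covariance estimator c_hat(a, b).

    obs_points : (N, 2) array of observation points (s_i, t_j) in T'
    obs_values : (N,)  array of the corresponding field values X(s_i, t_j)
    a, b       : points in [0, T*] x [0, T**] at which to estimate the covariance
    """
    centered = obs_values - obs_values.mean()
    # Gaussian product kernel K(u, v) = exp(-(|u|^2 + |v|^2) / 2).
    wa = np.exp(-0.5 * np.sum(((obs_points - np.asarray(a)) / h) ** 2, axis=1))
    wb = np.exp(-0.5 * np.sum(((obs_points - np.asarray(b)) / h) ** 2, axis=1))
    weights = np.outer(wa, wb)                        # K((a - c_i)/h, (b - d_j)/h)
    num = np.sum(weights * np.outer(centered, centered))
    return num / np.sum(weights)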
Remark 4. An additional step may ensure positive definiteness of the estimator. This second step is optional, but it ensures that the estimator is positive definite and thus a covariance function itself, which improves the performance of the eigenvector decomposition below. We invert the characteristic function of our estimator,

φ(λ) = ∫ cos(λ t) ρ(t) dt,

and recover a positive definite version via

ρ̂(t) = (1/(2π)²) ∫ cos(λ t) [φ(λ)]_+ dλ.

Fig. 9.2 Eigenvectors of the estimated covariance functions for the Greek Treasury data of Fig. 9.1
Figure 9.3 shows the result of the covariance estimation on a set of U.S. Treasury
data using historical data of four weeks. The implementation uses a Gaussian kernel
and the covariance estimator is plotted for maturities of three months to three years.
After obtaining an estimator for the covariance function, we can calculate its
eigenvectors up to a required precision. The eigenvector decomposition is done by
applying the Mises–Geiringer iteration procedure. Figure 9.3 also shows the calculated eigenvectors for the U.S. Treasury data.
Fig. 9.3 The upper graph shows the estimated covariance function for U.S. Treasury data (May 2002). The estimation uses a Gaussian kernel and shows maturities of 3, 6, …, 36 months. The lower graph shows the obtained eigenvectors. The first two eigenvectors correspond to the eigenvalues 3.4224 and 0.0569, respectively, while the others are of magnitude 10^{-15}
The first two eigenvectors correspond to significant eigenvalues (3.4224 and 0.0569), while the remaining eigenvalues are of much smaller magnitude. In this example, therefore, it turns out to be sufficient to use only the first two eigenvectors.
More generally, assume that we have already determined the first N eigenfunc-
tions. Then we use the following covariance function for the calibration:
ρ̂(λ_1, …, λ_N, t_1, t_2) := ∑_{k=1}^{N} λ_k e_k(t_1) e_k(t_2).
As before, a standard software package can be used to extract λ_1, …, λ_N from observable derivatives prices by a least-squares approach. Note that, in comparison with the previously presented model, a much smaller set of derivatives can be used for the calibration. The implementation of this last step using credit derivatives data is subject to future research.
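The estimation-plus-truncation step can be sketched as follows: estimate the covariance across maturities from a historical panel, extract the leading eigenpairs (here with numpy's eigh rather than the power-iteration procedure mentioned above), and rebuild the reduced covariance Σ_k λ_k e_k(t_1) e_k(t_2). Replacing the estimated eigenvalues by values calibrated to derivative prices would complete the procedure; the synthetic panel is illustrative.

import numpy as np

def reduced_covariance(rates, n_factors=2):
    """rates : (n_days, n_maturities) panel of observed forward rates.

    Returns the leading eigenpairs of the sample covariance across maturities and
    the rank-n_factors reconstruction rho_hat(t1, t2) = sum_k lam_k e_k(t1) e_k(t2).
    """
    cov = np.cov(rates, rowvar=False)              # maturities as variables
    eigval, eigvec = np.linalg.eigh(cov)           # ascending eigenvalues
    idx = np.argsort(eigval)[::-1][:n_factors]     # keep the largest ones
    lam, e = eigval[idx], eigvec[:, idx]
    rho_hat = (e * lam) @ e.T                      # sum_k lam_k e_k e_k^T
    return lam, e, rho_hat

# Illustrative usage with a synthetic panel of 8 maturities over 20 days:
rng = np.random.default_rng(3)
panel = rng.normal(size=(20, 8)).cumsum(axis=0) * 0.001 + 0.03
lam, e, rho_hat = reduced_covariance(panel)
print(lam)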
Nevertheless, we have already analyzed some bond data and estimated the covariance functions and the eigenvectors and eigenvalues. Take, for example, the data from Greek Treasury bonds. The estimation results may be found in Fig. 9.1. First, note that the variance for bonds with small maturities is higher than for bonds with large maturities. This is usually referred to as the "volatility hump." Second, for the period June to August 2001, negative correlations between bonds with small and bonds with large maturities were observed. This reflects the fact that, in this period, interest rates with short maturities moved in the opposite direction to long-maturity ones.
Taking a closer look at the eigenvectors reveals the components of the covariance
function. The first eigenvector generates more or less the shape of the covari-
ance functions. The already-mentioned effect that larger maturities relate to a
smaller variance may be observed here as well. The second eigenvector covers the
wriggly structure of the covariance function.
In the paper of Roncoroni and Guiotto (2000), two calibration procedures for infinite-dimensional term structures of interest rates (i.e., without credit risk) have been put forward. We give a short outline.
Historical Calibration
The first proposed procedure gives a way of using historical data to estimate the dynamics of the forward rates. To reduce the number of parameters to a finite number, it is assumed that the yield curve falls into a class of parametric families (e.g., polynomial or spline). Thus, an observed yield curve may be approximated well by F(a1, …, an) for a suitable n. The parameters themselves follow a diffusion in Rⁿ,
da(t) = b dt + Σ dW_t.
The goal is to estimate the parameters of this diffusion from historical data and
thereafter reduce the number of parameters by a principal component analysis for
a(t) = (a1(t),.., an(t) ). To this, historical data for yield curves are used. Every
observed yield curve leads (by suitably inverting F) to an observation of a, such that
b and Σ are easily estimated. Finally, a principal component analysis on a is used
to reduce the dimension of Σ to a suitably small n.
Historical-Implicit Calibration
The estimated dynamics of a imply a certain covariance functional of f(t, T), namely

Cov( f(t, t_1), f(t, t_2) ) = ∑_{k=1}^{n} λ_k φ_k(t_1) φ_k(t_2),

where λ_k and φ_k can be derived from F and the estimated dynamics of a. However, derivatives prices computed from these dynamics typically do not match observed market prices. The authors therefore suggest allowing λ_k to depend on time. These functions are obtained by calibrating the now time-dependent model to prices of derivatives.
Risk Measures
We assume that (A1) holds and that the defaultable forward rate follows (2) under the real-world measure P. Note that (3) gives the relation between f̄ and λ, the default intensity. We have the following result:
Proposition 5. The value at risk of a defaultable zero-recovery bond B^0(·, T) over a period ∆ is given by

VaR_α = exp( m_λ + σ_λ²/2 ) Φ( (ln(x + B^0(0, T)) − m)/σ − ρσ_λ ) + 1 − exp( −m_λ + σ_λ²/2 ),

while the expected shortfall equals

(1/(1 − α)) exp( m + m_λ + (σ² + σ_λ² + 2ρσσ_λ)/2 ) Φ( (−ln VaR_α + m)/σ + (ρσ_λσ + σ²)(σ_λ² + σ² + 2ρσ_λσ) ).
Proof. The proof relies heavily on expression (7), derived in Proposition 1. Note that this formula holds under P as well as under Q; only the dynamics of f and f̄, as well as the default intensity, differ. This gives that

 + (1/2) ∫_0^T ∫_0^T [ λ_u λ_v ( c(u ∧ v, u, v) − c(0, u, v) ) + (2λ_u/L_v)( ζ(u ∧ v, v, u) − ζ(0, v, u) )
 + (1/L_u²)( c̄(u ∧ v, u, v) − c̄(0, u, v) ) ] dv du }

as well as

 + 1_{0 ≤ x + B^0(0, T)} P(τ ≤ ∆).
First,

P(τ ≤ ∆) = 1 − E_P[ exp( −∫_0^∆ λ_u du ) ] = 1 − exp( −m_λ + σ_λ²/2 ),

where a small calculation gives

m_λ = ∫_0^∆ ( m̄(u, u) − m(u, u) ) / L_u du,

σ_λ² = ∫_0^∆ ∫_0^∆ ( c̄(u ∧ v, u, v) − 2ζ(u ∧ v, u, v) + c(u ∧ v, u, v) ) / (L_u L_v) du dv.
We use that E[ e^{ξ_1} 1_{ξ_2 ≤ c} ] = e^{m_1 + σ_1²/2} Φ( (c − m_2)/σ_2 − ρσ_1 ) if ξ_i are N(m_i, σ_i²) and the correlation is ρ. Hence the first term in (13) equals

E_P[ exp( −∫_0^∆ λ_u du ) 1_{ ξ ≤ (ln(x + B^0(0, T)) − m)/σ } ] = exp( m_λ + σ_λ²/2 ) Φ( (ln(x + B^0(0, T)) − m)/σ − ρσ_λ ).
For the expected shortfall we compute

(1/(1 − α)) E[ 1_{τ > ∆} exp(m + σξ) 1_{ 1_{τ > ∆} exp(m + σξ) > a } ]
= (1/(1 − α)) E[ 1_{τ > ∆} exp(m + σξ) 1_{ exp(m + σξ) > a } ]
= (1/(1 − α)) E[ exp( −∫_0^∆ λ_u du + m + σξ ) 1_{ exp(m + σξ) > a } ].
Conclusion
End Notes
1. Schmidt, T. (2003). Credit Risk Modeling with Random Fields, Ph.D. thesis, University of
Giessen.
2. Kennedy, D.P. (1994). The term structure of interest rates as a Gaussian random field,
Mathematical Finance 4, 247–258.
3. Roncoroni, A., and Guiotto, P. (2000). Theory and calibration of HJM with shape factors, in
Mathematical Finance – Bachelier Congress 2000, Springer, Berlin Heidelberg New York,
407–426.
4. Schmidt, T. (2007). Hybrid calibration for defaultable term structures with gaussian random
fields, in International Conference on Management Innovation, Shanghai, Vol. 1, Shanghai
University of Finance and Economics and Risk China Research Center, University of
Toronto.
5. Schmidt, T. (2006). An infinite factor model for credit risk, International Journal of
Theoretical and Applied Finance 9, 43–68.
6. Adler, R.J. (1981). The Geometry of Random Fields, Wiley, New York.
7. Filipović, D. (2001). Consistency Problems for Heath-Jarrow-Morton Interest Rate Models,
Vol. 1760 of Lecture Notes in Mathematics, Springer, Berlin Heidelberg New York.
8. Vasicek, O. (1977). An equilibrium characterization of the term structure, Journal of
Financial Economics 5, 177–188.
9. Schmid, B., Zagst, R., and Antes, S. (2008). Pricing of credit derivatives, submitted.
10. Pang, K. (1998). Calibration of Gaussian Heath, Jarrow and Morton and random field interest
rate term structure models, Review of Derivatives Research 4, 315–346.
11. Roncoroni, A., and Guiotto, P. (2000). op cit.
12. Kennedy, D.P. (1997). Characterizing Gaussian models of the term structure of interest rates,
Mathematical Finance 7, 107–118.
13. Filipovic, D. (2001). op cit.
14. Hall, P., Fisher, N.I., and Hoffmann, B. (1994). On the nonparametric estimation of covari-
ance functions, Annals of Statistics 2115–2134.
15. McNeil, A., Frey, R., and Embrechts, P. (2005), Quantitative Risk Management: Concepts,
Techniques and Tools, Princeton University Press.
Chapter 10
The Sarbanes-Oxley Act and the Production
Efficiency of Public Accounting Firms
Introduction
a The authors are grateful to the International Journal of Services Sciences for permission to reproduce this article from vol. 1, no. 1, 2008.
Because Section 201 of SOX prohibits auditors from providing certain consulting services to their clients, such a restriction could reduce public accounting firm revenues generated from MAS services and decrease their productive efficiency because of inappropriate staff compositions and sizes. On the other
hand, the Act (Section 404) requires business firm managements to assess the effec-
tiveness of their internal control systems and it requires auditors in their audit
reports to attest to management assessments. Furthermore, in response to SOX,
many companies also hire public accounting firms other than their auditors to docu-
ment and test their internal control systems. Thus, the mandated new attestation
services for audit clients, and the internal control systems documentation and test-
ing services for non-audit clients, can add to revenues generated from the custom-
ary accounting and audit services of public accounting firms and could possibly
also increase their production efficiency.
Given these opposing effects in different provisions of SOX, the question of
whether the efficiency of public accounting firms increased or decreased after the
passage of SOX becomes an interesting empirical research issue. A few studies using
client level data have looked at the effect of the Act on audit services and observed
improvements in auditor independence3 and an increase in audit fees charged by the
Big 4 in 2002.4 To the best of our knowledge, there is little empirical evidence on how SOX
affects the efficiency of public accounting firms. In this study we therefore seek to
document empirically the effect of the Act, as a regulatory intervention by the Federal
Government, on the productive efficiency of public accounting firms.
We employ two different techniques based on two different estimating princi-
ples. Data Envelopment Analysis (DEA), which is one of the techniques we
employ, is non-parametric and oriented to frontier rather than central tendency esti-
mates.5 We also use the central tendency and parametric methods that are involved
in OLS regressions. In this way we protect against the “methodological bias” that
can occur when only one method of analysis is used.6
The first of these two methods is designed to evaluate productive efficiencies
which we use to evaluate the performances of public accounting firms using annual
operations data from 58 of the 100 largest accounting firms in the U.S. over the
period 2000–2004. We then use both DEA-based and conventional test procedures
to test for production efficiency differences between pre- and post-SOX periods.
Our statistical test results indicate that the production efficiency of public account-
ing firms increased after the passage of SOX. Moreover, our results are robust even
after controlling for service mix, the number of public clients, and the operating
size of public accounting firms.
The nature and extent of leading public accounting firm involvements in numerous
accounting scandals at high profile companies in the late 1990s and early 2000s led
to reforms of public accounting through attempted improvements in the independ-
ence of auditors and the quality of audit services. Section 201 of SOX prohibits
auditors from providing eight types of services to their clients: bookkeeping, finan-
cial information systems design and implementation, appraisals or valuation servi-
ces, actuarial services, outsourcing internal audit services, management and human
resources services, broker/dealer and investment banking services, and legal or
expert services unrelated to audit services. In addition, auditors cannot offer any
service that the PCAOB determines to be impermissible. For non-audit services
other than those listed above, such as tax services, an approval by the audit com-
mittee is required.
These new rules and regulations are aimed at limiting certain “lucrative” servi-
ces of public accounting firms that might compromise their independence. If public
accounting firms are forced to give up revenues from these lucrative services for
which they are already organized and staffed, their production efficiency is likely
to be decreased. This possibility is reinforced because prior studies report a
positive relation between service fees and the joint provision of audit and non-audit
services.7 By offering joint services, an accounting firm may benefit from potential
knowledge spillover across services. These synergies may then result in cost sav-
ings or revenue augmentations that increase production efficiency. Since public
accounting firms can no longer provide non-audit services to their audit clients,
their production efficiency is therefore likely to decrease.
Section 404 of SOX moves in the opposite direction. It requires auditors in their
audit reports to attest to management assessments of the internal control systems.
The new requirements offer opportunities for public accounting firms to generate
extra revenues from both additional audit procedures and accounting services.
Specifically, on the audit services side, auditors likely pass on the costs of additional
audit steps to their clients with a resulting increase in audit service revenues. On the
accounting services side, many firms hire other public accounting firms to docu-
ment, update and test their internal control systems as required by Section 404. This
provides public accounting firms an opportunity to generate revenues from addi-
tional accounting services. A recent survey conducted by Financial Executives
International on 217 firms with average revenues of $5 billion or more reports that firms in their sample spent an average of $4.36 million to comply with Section 404 in 2004. An average of $1.34 million was spent internally and $1.72 million on
external accounting/consulting and software fees to comply with the provisions of
Section 404. The remaining $1.3 million was spent on additional audit fees for
attestations of the system, with a resulting average increase of 57% over the regular
financial statement audit fees.8
Research Hypothesis
In recent decades many public accounting firms offered MAS or consulting practices
in which they employed specialists in fields as varied as information systems and
human resources management. For many firms, the MAS part of the practice was the
fastest growing segment. Unlike traditional auditing or tax practices, MAS services
offer opportunities for specialized services and potential for higher markup of fees
over costs. Non-audit services are lucrative businesses that yield higher margins than
do audit fees.9 MAS services are more efficient than A&A and TAX services in gen-
erating revenues from the same level of human resource inputs since the provision
of joint audit and non-audit services creates synergies. Therefore, Section 201 of the
Act, which constrains public accounting firms from offering certain consulting serv-
ices to their public clients, can both take away the synergy and reduce efficiency.
However, these consulting businesses remain available for serving non-audit or pri-
vate clients, so the provisions of Section 201 may not lead to a substantial reduction
in revenues generated from MAS services. Hence this section of the Act need not
significantly reduce the production efficiency of public accounting firms.
Section 404 requires management evaluation of internal control systems and
strengthens audit requirements. These provisions increase potential revenues to
accounting firms from additional audit services. Some evidence indicates that firms
with revenues of at least a billion dollars experience, on average, a 57% increase in
their audit fees in order to comply with SOX.10 Further, as described earlier, in
response to Section 404, many publicly traded companies hire auditors other than their
own to document and test their internal control systems. With large-scale implementation
of Section 404, we could expect public accounting firms to improve their efficiency
in the post SOX period because of increases in revenues from Section 404 compli-
ance services. This is especially true for the initial years (e.g., 2003 and 2004)
because accounting firms may have flexibility to charge a premium for accounting
and auditing services related to compliance partly because PCAOB has not yet set
up a standard of compliance. Therefore, we state our hypothesis in both null and
alternate forms as follows:
H0 (null): SOX has had no effect on the production efficiency of public
accounting firms.
HA (alternate): SOX has had a positive effect on the production efficiency of public
accounting firms.
Research Design
Our objective in this study is to evaluate the effect of SOX on the efficiency of public
accounting firms. Toward this end, we conduct our research in two stages. Stage 1 is
a univariate analysis which involves two steps. In the first step, we use Data
Envelopment Analysis (DEA) to estimate an efficiency score for each of our sample
of public accounting firms during the period 2000–2004. We then employ both DEA-
based and conventional test procedures in the second step to test for efficiency differ-
ences of these firms between the pre- and post-SOX periods. Stage 2 is a multivariate
analysis in which we specify and estimate two fixed-effects regression models to
assess the effect of SOX on the efficiency of public accounting firms after controlling
for potential confounding effects of explicitly identified contextual variables.
θ*_j = max θ
subject to
∑_{k=1}^{n} y_{rk} λ_k ≥ θ y_{rj},   r = 1, …, s
∑_{k=1}^{n} x_{ik} λ_k ≤ x_{ij},   i = 1, …, m    (1)
∑_{k=1}^{n} λ_k = 1
θ, λ_k ≥ 0   for all k
T_exp = ∑_{j ∈ N1} (θ̂_j − 1) / ∑_{j ∈ N2} (θ̂_j − 1),    (2)

T_hn = ∑_{j ∈ N1} (θ̂_j − 1)² / ∑_{j ∈ N2} (θ̂_j − 1)².    (3)
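For concreteness, the output-oriented DEA program (1) can be solved for each firm as a linear program. The following Python sketch uses scipy's linprog with one output (revenues) and three inputs (partners, professionals, other employees) as in the study, reports efficiency as 1/θ*, and uses a small illustrative data set rather than the survey data.

import numpy as np
from scipy.optimize import linprog

def dea_efficiency(Y, X, j):
    """Output-oriented DEA score for unit j. Y: (s, n) outputs, X: (m, n) inputs.
    Solves  max theta  s.t.  Y lam >= theta * Y[:, j],  X lam <= X[:, j],
            sum(lam) = 1,  lam >= 0.   Reported efficiency is 1 / theta*.
    """
    s, n = Y.shape
    m = X.shape[0]
    c = np.concatenate(([-1.0], np.zeros(n)))          # minimize -theta
    A_out = np.hstack([Y[:, [j]], -Y])                 # theta*y_rj - sum_k lam_k y_rk <= 0
    A_in = np.hstack([np.zeros((m, 1)), X])            # sum_k lam_k x_ik <= x_ij
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([np.zeros(s), X[:, j]])
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)   # sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 1.0 / res.x[0]

# Illustrative data: revenues (output) and three staffing inputs for 5 firms.
Y = np.array([[25.0, 40.0, 30.0, 60.0, 15.0]])                   # revenues ($M)
X = np.array([[30, 35, 28, 60, 20],                              # partners
              [140, 150, 120, 300, 90],                          # professionals
              [65, 70, 55, 130, 45]], dtype=float)               # other employees
print([round(dea_efficiency(Y, X, j), 3) for j in range(Y.shape[1])])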
Regression Analysis
As was discussed above, MAS services have been found to be more efficient than
traditional A&A and TAX services in generating revenues for the same level of
human resource inputs. The SOX Act impacts all three types of professional serv-
ices as offered by public accounting firms. Hence public accounting firms might
have adjusted their service mix in response to the regulatory intervention of SOX.
As a result, their efficiency could change due to changes in their service mix.
Therefore, we include two service mix variables, A&A% and MAS% in our regres-
sion model to examine the effect of SOX on the production efficiency of public
accounting firms. We do not include TAX% as the sum of A&A%, TAX% and
MAS% equals one.
Prior research on audit effort has demonstrated that human resource inputs for
clients with public ownership are significantly greater than that for clients with pri-
vate ownership.20 Publicly owned firms tend to be larger than private firms and have
to comply with listing requirements of exchanges when they are listed; thus, audits
of public clients are expected to require more inputs than those of private ones.
Audits of publicly owned clients can also expose an auditor to the risk of class
action lawsuits. This leads to higher insurance costs so a higher service fee will
generally be charged for public clients. These factors could all lead to a gain in
production efficiency. Thus, we include a dummy variable to control for the poten-
tial effect of public ownership of the firms being serviced.
Following Banker, Chang and Cunningham, we also include the number of
branch offices of the accounting firm as a control variable.21 Finding that the pro-
ductivity of accounting firms is negatively correlated with the number of offices
an accounting firm has, Banker, Chang and Cunningham argue that, as the
number of offices increases, the given human resources are spread over a larger
number of offices and this increases control and communication problems and
related expenses.
Prior studies have documented that the Big 4 accounting firms charge a premium
for their audit services.22 The Big 4 are also likely to charge a premium for other
services they provide. Clients are willing to pay the premium, in part, for Big 4
reputation. Further, the production correspondence at the scale levels achieved
by Big 4 firms may be different from the production performance possibilities of
non-Big 4 firms.23 To control for potential effects of a Big 4 price premium on pro-
duction efficiency, we add a dummy variable to our regression models when the Big
4 firms are included in our estimation.
Regression Models
In models (4a) and (4b), lnφ is the logarithm of the efficiency estimated from the DEA model in (1); YEAR0t = 1 for t = 1, 3 and 4, and zero otherwise; A&A% represents the proportion of revenues generated from A&A services; MAS% denotes the proportion of revenues generated from MAS services; lnSEC_CLIENT represents the logarithm of the number of public clients, while lnOFFICES denotes the logarithm of the number of branch offices; and BIG4 is a dummy variable taking on a value of one if the firm is one of the Big 4 firms and zero otherwise. We take the logarithm of the estimated production efficiency to reduce heteroscedasticity.
Note that YEAR01 is included to capture the difference in efficiency between the two years of the pre-SOX period, 2000 and 2001. YEAR03 and YEAR04 are used to capture the efficiency differences between 2000 and the two years after the passage of SOX, 2003 and 2004. These three dummies enable us to evaluate whether there is a significant difference in the production efficiency of public accounting firms between the pre- and post-SOX periods.
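A second-stage regression in the spirit of model (4a) can be sketched as follows with statsmodels. Since the exact specification of (4a) and (4b) is given in the original article, the pooled-OLS form, the robust standard errors, the formula-friendly column names (A&A% renamed AA_pct, etc.) and the synthetic panel below are all assumptions of this sketch.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year panel standing in for the 58 x 4 observations.
rng = np.random.default_rng(4)
n = 232
df = pd.DataFrame({
    "lnphi": rng.normal(-0.5, 0.2, n),          # log DEA efficiency
    "YEAR01": rng.integers(0, 2, n),
    "YEAR03": rng.integers(0, 2, n),
    "YEAR04": rng.integers(0, 2, n),
    "AA_pct": rng.uniform(0.3, 0.7, n),
    "MAS_pct": rng.uniform(0.1, 0.4, n),
    "lnSEC_CLIENT": rng.normal(2.0, 1.0, n),
    "lnOFFICES": rng.normal(1.5, 0.7, n),
    "BIG4": rng.integers(0, 2, n),
})

res = smf.ols("lnphi ~ YEAR01 + YEAR03 + YEAR04 + AA_pct + MAS_pct"
              " + lnSEC_CLIENT + lnOFFICES + BIG4",
              data=df).fit(cov_type="HC1")      # heteroscedasticity-robust errors
print(res.params)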
Our research design on the use of the two-stage approach represented in (4a) and
(4b) by first estimating production efficiencies and then seeking to correlate these
efficiencies with various contextual variables is motivated by prior research. For
instance, Ray regressed DEA scores on a variety of socio-economic factors to iden-
tify key performance drivers in school districts.24 Banker, Chang and Kao employed
the two-stage DEA method to evaluate the impact of IT investment on public
accounting firm productivity.25 Recently, Banker and Natarajan have provided theo-
retical justification for the use of the two-stage models in DEA to evaluate contex-
tual variables affecting DEA efficiency ratios.26
The sample of public accounting firms that is included in this study is obtained
from Accounting Today’s annual survey of the top 100 accounting firms in the US
for the period 2000–2004. All data reported in these annual surveys are for domestic
U.S. operations and exclude foreign holdings. This annual survey of the profes-
sion’s largest firms has become one of the most often cited sources in the field.27
We confine our sample to these top 100 accounting firms because the revenue
information of other accounting firms is not publicly available. As the main objec-
tive of this study is to evaluate the impact of SOX on the production efficiency of
public accounting firms, we also eliminate any non-CPA firms (e.g., H&R Block,
Century Business Services, American Express, etc.) from the sample. Section 201
of SOX restricts the MAS services auditors can provide to their clients and Section
404 requires the evaluation and attestation of auditors to management evaluations
of the internal control system. The effect of both sections is likely to be minimal on
non-CPA firms. Observations in the year 2002 are excluded from our analysis
since nearly half of this year was in the pre-Act period (up until July 30, 2002)
while the other half was in the post-Act period. Our data do not allow us to
differentiate between these two periods in 2002. To minimize the problem of
misclassification, we focus our study on the sample after excluding observations
from 2002. Our final sample consists of 58 firms for which data are available for
the four-year period beginning 2000 and ending 2004 (excluding 2002), providing
us with a total of 232 (=58 × 4) firm-year observations for analyses.
We focus on production correspondences between total service revenues gener-
ated and human resources employed by public accounting firms. The total revenues,
measured in millions of dollars of revenues, include revenues from accounting and
auditing services (A&A), taxation services (TAX), and management advisory services
(MAS). The three human resource input variables considered are the number of
partners (PARTNERS), the number of other professionals (PROFESSIONALS) and
the number of other employees (OTHERS).
Personnel costs constitute a significant fraction of total costs for public account-
ing firms. A recent national survey indicates that employee costs and partner com-
pensation account for about 75% of the revenues, while capital costs are less than
7%, for accounting practices with revenues in excess of one million dollars.28 While
data on the total service revenue is obtained from the annual survey of Accounting
Today, the number of each of the three professional staff levels was hand collected
from annual reports of accounting firms that were filed with the American Institute
of Certified Public Accountants (AICPA). After the enactment of the SOX, any
public accounting firm that audits financial statements of public companies has to
register with the Public Company Accounting Oversight Board (PCAOB). One of
the requirements for such registration is the participation of the firm in the peer
review program. Hence, in the post-SOX period, all auditors of public firms must
have their annual reports filed with AICPA.
Table 10.1 provides descriptive statistics for total revenues and the three human resource variables for all four years. To facilitate comparison, the total revenues are inflation-adjusted to year-2000 dollars. The large standard deviations for all of the variables suggest that the firms in the sample vary significantly in size and
Table 10.1 Descriptive statistics on outputs and inputs of public accounting firms
Variables Mean Std Dev Median
Year: 2000 (No. of obs. = 58)
REVENUES $475.6M $1,610.1M $25.5M
PARTNERS 187.9 509.2 29.5
PROFESSIONALS 1,524.9 5,347.6 135
OTHERS 582.8 1,756.8 65
Year: 2001 (No. of obs. = 58)
REVENUES $431.9M $1,431.5M $28.2M
PARTNERS 194.8 514.9 32
PROFESSIONALS 1,547.7 5,193.9 136
OTHERS 539.7 1,565.5 67
Year: 2003 (No. of obs. = 58)
REVENUE $397.3M $1,230.4M $32.2M
PARTNERS 196.5 493.5 33.5
PROFESSIONALS 1,315.5 3,817.3 143.5
OTHERS 486.9 1,383.7 67.5
Year: 2004 (No. of obs. = 58)
REVENUE $415.3M $1,270.6M $38.0M
PARTNERS 194.7 477.3 34
PROFESSIONALS 1,348.9 3,802.8 160.5
OTHERS 516.8 1,496.8 68
REVENUES, Total revenues expressed in million (M) dollars deflated to 2000. PARTNERS, Number
of partners. PROFESSIONALS, Number of professionals. OTHERS, Number of other employees
composition. Median values for all variables are much smaller than the means indi-
cating large disparities between the smallest and largest firms in the sample. The
mean total revenues dropped from 2000 to 2003 by about 16%, but increased in
2004 by about 5%. The mix of different types of employees (partners, professionals
and others) in 2001 changed slightly from that in 2000 showing a small increase in
the proportion of professionals with a corresponding decrease in the proportion of
other employees. However, the mix changed again in both 2003 and 2004, showing
a small increase in the proportion of partners with a corresponding decrease in the
proportion of professionals.
Table 10.3 shows the correlation matrix of the contextual variables. Since the
sample is skewed, we focus our attention on the Spearman rank correlation. As
expected, A&A% is negatively correlated with both TAX% and MAS%.
The number of SEC clients is positively correlated with the percentage of reve-
nues from A&A services (a correlation of 0.1637). The number of SEC clients has
a significantly negative correlation with the percentage of revenues from TAX serv-
ices (a correlation of −0.1344). The number of offices is significantly positively cor-
related with the number of public clients. This is consistent with the assumption that
public accounting firms set up offices locally in order to better serve their clients.
In the estimation of the efficiency of public accounting firms, we treat the total
revenues as the single output variable and the number of partners, the number of
professionals, and the number of other employees as three input variables. Using one
Table 10.3 Correlation matrix for contextual variables and BIG4 variable
                A&A%      TAX%      MAS%      lnSEC_CLIENT  lnOFFICES   BIG4
A&A%            1.0000   −0.1781   −0.7365      0.1637      −0.0693     0.0940
                  –      (0.007)   (0.001)     (0.012)      (0.293)    (0.153)
TAX%           −0.2507    1.0000   −0.4405     −0.1344       0.0237    −0.0841
               (0.001)      –      (0.001)     (0.041)      (0.719)    (0.202)
MAS%           −0.7552   −0.4449    1.0000     −0.0880       0.0373    −0.0709
               (0.001)   (0.001)      –        (0.181)      (0.572)    (0.282)
lnSEC_CLIENT    0.2900   −0.1821   −0.1452      1.0000       0.6592     0.4397
               (0.000)   (0.005)   (0.027)        –         (0.000)    (0.000)
lnOFFICES      −0.0080   −0.0163    0.0178      0.6654       1.0000     0.4351
               (0.923)   (0.805)   (0.787)     (0.000)         –       (0.001)
BIG4            0.1649   −0.0922   −0.0902      0.5264       0.5525     1.0000
               (0.011)   (0.162)   (0.171)     (0.000)      (0.000)       –
P-values in parentheses. Pearson correlations are below the diagonal, and Spearman correlations are above the diagonal. Variable definitions appear in Table 10.2.
output and three inputs, we estimate the production efficiency using the DEA model
specified in (1). We summarize the mean estimated DEA efficiencies in Table 10.4. As
we observe from Table 10.4, the efficiency of public accounting firms increases by
about 10% from 0.626 in the pre SOX period to 0.699 in post SOX when the Big4
are excluded from the estimation. Similarly, the efficiency also increases by about
10% after the passage of SOX when the Big4 are included in the estimations.
As described earlier, we use two types of test procedures to test the null hypothesis that SOX has had no impact on the production efficiency of public accounting firms. We present the statistical test results for the efficiency differences in Table 10.5. The DEA-based statistical tests all lead to rejection of the null hypothesis that SOX has had no effect on production efficiencies. The test statistics are all positive, which favors the alternative hypothesis of a positive effect, with P-values that are all significant at better than 5%, except when the Big 4 are included, where the P-value for the exponential distribution is less than 10%. Similarly, results of the three non-DEA-based statistical tests indicate that the mean difference in production efficiency between the pre- and post-SOX periods is statistically significant at the 1% level, except when the Big 4 are included, where the P-value for the Welch two-sample test is less than 5%. These results indicate that the production efficiency of public accounting firms increased after the passage of SOX.
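The three non-DEA-based tests in Table 10.5 can be reproduced with standard library routines; the sketch below applies them to simulated efficiency scores (the firm-level scores themselves are not reported in the chapter), so the printed statistics are illustrative only.

# Sketch of the three non-DEA-based tests in Table 10.5, applied to two
# illustrative vectors of estimated efficiency scores (simulated here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_sox = np.clip(rng.normal(0.626, 0.146, 120), 0, 1)   # 2000-01 scores
post_sox = np.clip(rng.normal(0.699, 0.159, 120), 0, 1)  # 2003-04 scores

t_stat, t_p = stats.ttest_ind(post_sox, pre_sox, equal_var=False)  # Welch
w_stat, w_p = stats.ranksums(post_sox, pre_sox)                    # Wilcoxon
ks_stat, ks_p = stats.ks_2samp(post_sox, pre_sox)                  # K-S

print(f"Welch t = {t_stat:.2f} (p = {t_p:.3f})")
print(f"Wilcoxon z = {w_stat:.2f} (p = {w_p:.3f})")
print(f"Kolmogorov-Smirnov D = {ks_stat:.2f} (p = {ks_p:.3f})")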
Table 10.4 Means and standard deviations of estimated production efficiencies for public accounting firms

                               Relative efficiencies^a
                               Excluding Big 4 firms       Including Big 4 firms
Sample periods                 Mean       Std. Dev.        Mean       Std. Dev.
Pre-SOX period (2000&01)       0.626      0.146            0.518      0.158
Post-SOX period (2003&04)      0.699      0.159            0.571      0.147

^a Production efficiencies are estimated from the DEA model in (1)
Table 10.5 Statistical test results of equality of production efficiencies between Pre-SOX (2000&01) and Post-SOX (2003&04) periods for public accounting firms

                                      Excluding Big 4 firms       Including Big 4 firms
                                      Test-stat.   P-values       Test-stat.   P-values
DEA-based test T_EXP^a                1.29         0.006          1.19         0.093
DEA-based test T_HN^b                 1.52         0.015          1.41         0.032
Welch two-sample test                 3.57         0.000          2.37         0.018
Wilcoxon two-sample test              3.70         0.000          3.45         0.000
Kolmogorov–Smirnov two-sample test    2.04         0.001          2.03         0.001

^a Test statistic when the inefficiency is exponentially distributed
^b Test statistic when the inefficiency is half-normally distributed
Regression Results
The OLS regression results of the fixed-effect models presented in Table 10.6
allow us to further refine and check our findings.b Columns 3 and 4 report results
when Big 4 firms were excluded and columns 5 and 6 report results when Big 4
firms were included. Consistent with, and extending, our previous findings, the coefficients of YEAR03 and YEAR04 (see column 3) are both positive and statistically significant for the model without interaction terms, as in (4a). Furthermore, the differences between the YEAR03 and YEAR01 indicator coefficients and between the YEAR04 and YEAR01 indicator coefficients are both positive and significant, suggesting that public accounting firms, on average, improved their production efficiency after the passage of SOX. Finally, the coefficient of lnSEC_CLIENT is significantly positive, as expected.
For the model with interaction terms (4b), the impact of SOX on efficiency can
be evaluated by inserting the sample means of MAS% and A&A% into the following
equations:
b
Estimation results with Tobit regressions (Tobin 1958) are similar and so are not reported.
The statistical test results reported in Table 10.6 (see column 4) show efficiency increases in the post-SOX period at high statistical significance levels. Specifically, when Big 4 firms were excluded, production efficiency increased from 2000 to 2003 and to 2004 by about 11% and 15%, respectively, and also increased from 2001 to 2003 and 2004 by about 8% and 12%, respectively. The results when Big 4 firms were included (see column 6) are very similar. Consequently, our hypothesis regarding the impact of SOX on the production efficiency of large public accounting firms is confirmed. That is, the production efficiency of large public accounting firms increases after the passage of SOX, even after controlling for the identified contextual variables.
Sensitivity Checks
Conclusion
their consulting units well before the passage of SOX. Second is that SOX created
new opportunities for public accounting firms to provide additional accounting
services to their non-audit or private clients (e.g., internal control systems updates
and tests). Alternatively, it is possible that these accounting firms had adjusted their
human resource inputs in anticipation of the Act, thereby eliminating or ameliorat-
ing potential negative effects. Our results are also robust not only with respect to
outliers but also after controlling for the service mix, the number of public clients,
and the operating size of public accounting firms.
End Notes
model approach to the economic consequences of OSHA Cotton Dust Regulation, Australian
Journal of Management 26, 69–89; Abad, C., Banker, R., and Mashruwala, R. (2005).
Relative efficiency as a lead indicator of profit, Working paper, Washington University in St.
Louis.
16. Dopuch, N., Gupta, M., Simunic, D., and Stein, M. (2003). Production efficiency and the
pricing of audit services. Contemporary Accounting Research 20, 79–115; Feroz, E., Kim, S.,
and Raab, R. (2005). Analytical procedures: A data envelopment analysis approach. Journal
of Emerging Technologies in Accounting 2, 17–31.
17. Banker, R.D. (1993). Maximum likelihood, consistency and data envelopment analysis: A
statistical foundation, Management Science 39, 1265–1273; Banker, R.D., and Slaughter.
(1997). A field study of scale economies in software maintenance, Management Science,
43:12, 1709–1725; Banker, R.D., and Natarajan, R. (2004). Statistical tests based on DEA
efficiency scores, in Cooper, W.W., Seiford, L.M., and Zhu, J. Handbook on Data Envelopment
Analysis (Norwalk, CT: Kluwer); Simar, L., and Wilson, P.W. (2004) Performance of the
bootstrap for DEA estimations and iterating the principle in Cooper, W.W., Seiford, L.M., and
Zhu, J. (eds.). Handbook on Data Envelopment Analysis (Norwalk, CT: Kluwer).
18. Cooper, W.W., and Ray, S. (2008). A response to M. Stone: How not to measure the efficiency
of public services (and how one might). Journal of the Royal Statistical Society, Series A.
171:2, 433–448.
19. Cooper, Seiford, and Tone. (2006). op cit., chapter 8.
20. Palmrose, Z. 1989. The relation of audit contract type to audit fees and hours. The Accounting
Review 64: 488–499; Hackenbrack, K., and Knechel, W. 1997. Resource allocation decisions
in audit engagements. Contemporary Accounting Research 14, 481–499.
21. Banker, R.D., Chang, H., and Cunningham, R. (2003). The public accounting industry pro-
duction function, Journal of Accounting and Economics 35:2, 255–282.
22. Francis, J., 1984. The effect of audit firm size on audit prices: A study of the Australian mar-
ket. Journal of Accounting and Economics 6, 133–151; Craswell, A., Francis, J., and Taylor,
S. (1995). Auditor brand name reputations and industry specialization. Journal of Accounting
and Economics 20: 297–322.
23. Banker, Chang, and Natarajan. (2005). op cit.
24. Ray, S. (1991). Resource-use efficiency in public schools: A study of Connecticut Data.
Management Science, 1620–1628.
25. Banker, R.D., Chang, H., and Kao, Y. (2002). Impact of information technology on public
accounting firm productivity. Journal of Information Systems, 209–222.
26. Banker, R.D., and Natarajan, R. (2005). Evaluating contextual variables affecting productivity
using DEA. Working paper, Temple University.
27. Jerris, S., and Pearson, T. (1997). Benchmarking CPA firms for productivity and effi-
ciency: An update, The CPA Journal. March, 58–62; Banker, Chang and Cunningham
(2003), op cit.
28. Texas Society of Certified Public Accountants. 2005. Management of Accounting Practice
Survey. Dallas, Texas.
29. White, H. 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test
for heteroskedasticity. Econometrica 48, 817–838.
30. Belsley, D.A., Kuh, E., and Welsch, R.E. (1980). Regression Diagnostics. Wiley, New York.
31. Banker, Das, and Datar. (1989). op cit.
Chapter 11
Credit Risk Evaluation Using Neural Networks
Introduction
Credit risk evaluation and credit default prediction attract a natural interest from
both practitioners and regulators in the financial industry. The Bank for
International Settlements has been reporting a continuous increase in corporate
borrowing activities.1 In the first quarter of 2006 alone, syndicated lending for merg-
ers and acquisitions sharply exceeded the 2005 levels. In the euro area for exam-
ple, corporate demand for credit rose from 56% of international claims on all
non-bank borrowers at the end of December, 2005, to 59% at the end of March,
2006. These heightened borrowing activities naturally imply increased risk
related to credit default. A study by the Office of the Superintendent of Bankruptcy Canada and Statistics Canada2 reveals that while the number of Canadian firms going bankrupt has declined, the average size of losses has risen significantly. In 2005 only 0.7% of businesses failed, a sharp decline from the 1992 rate of 1.54%. However, over the last quarter-century, net liabilities from business failures increased dramatically. In 1980 the losses represented 0.32% of Canada's net assets, while in 2005 they rose to 0.52%. Both trends, the acceleration in corporate borrowing and the related risk of credit defaults, call for a reliable and effective risk management system on the part of financial institutions in
order to improve their lending activities. Moreover, the new international stand-
ard on capital adequacy outlined in Basel II,3 a regulatory requirement for finan-
cial services institutions, promotes the active involvement of banks in assessing
the probability of defaults. Therefore, the accuracy of any predictive models con-
stituting the foundation of a risk management system is clearly essential. Any
significant improvement in their predictive capabilities will be worth billions of
dollars and therefore deserves serious attention.
Academic theoretical models have contributed greatly to the improvement in
credit risk assessment. This study, an application of Backpropagation Neural
Networks (BPNN) and Probabilistic Neural Networks to form a bankruptcy prediction
model, constitutes yet another attempt at enhancing the measurement of default
risk. As powerful data modeling tools, neural networks are able to capture and
represent complex input and output relationships. The true power and advantage of
neural networks lie in their ability to represent both linear and non-linear relationships
and learn these relationships directly from the data being modeled. By contrast, traditional linear models cannot capture non-linear characteristics.
Empirical Approach
The empirical approach models the probability of default by learning the relation-
ships among the object variables from the data. The following methods include sta-
tistical and intelligent techniques that have been employed for the purposes of
classification.
K-nearest neighbor is one of the simplest approaches for classifying objects. The
purpose of this algorithm is to classify a new object based on attributes and training
samples. The objects are represented as points in a feature space, and a new object is assigned the class most common amongst its K nearest neighbors in the training set. Assume that we have training data (x1, y1), …, (xn, yn), where xi is a training point and yi is its class label for each 1 ≤ i ≤ n. In credit risk evaluation, xi might represent a financial institution and yi the credit rating of that institution. We wish to classify a new test point x. We first compute the dissimilarity between the test point x and the training points and find the K training points closest to the test point under a given distance measure. The most popular distance is the Euclidean distance,

d(x, xi) = √( Σj (xj − xij)² ).

Secondly, we set the classification or label y for the test point to the most common class among the K nearest neighbors.
In spite of its simple algorithm, KNN shows a superior performance in pattern
recognition and classification tasks. Ripley demonstrated that the KNN error rate
was no greater than twice the Bayesian error rate.5 However, KNN’s significant
limitation is the lack of any probabilistic semantics when making predictions of
class membership in a consistent manner.6
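A minimal sketch of the KNN classifier described above, using hypothetical financial-ratio data; scikit-learn's KNeighborsClassifier defaults to Euclidean distance and majority voting, matching the description.

# Minimal KNN sketch: each x_i is a vector of financial ratios for an
# institution and y_i its (hypothetical) rating class.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: [leverage, liquidity ratio], rating class.
X_train = np.array([[0.2, 1.8], [0.3, 1.5], [0.8, 0.6],
                    [0.7, 0.7], [0.4, 1.2], [0.9, 0.4]])
y_train = np.array(["A", "A", "C", "C", "B", "C"])

knn = KNeighborsClassifier(n_neighbors=3)   # K = 3 nearest neighbours
knn.fit(X_train, y_train)

x_test = np.array([[0.5, 1.0]])             # a new institution to classify
print(knn.predict(x_test))                  # most common class among the K neighbours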
Cluster Analysis
Cluster analysis is a set of algorithms and methods for grouping objects of similar
type into respective categories, and specifically, for partitioning of a dataset into sub-
sets (clusters) sharing common traits. Lim and Sohn adapted the clustering methods
to develop a cluster-based dynamic scoring model which dynamically accommodated
changes in the borrowers’ characteristics at the early stages of loan.7 For this purpose,
the dataset was partitioned into a number of clusters and the observation horizon was divided into sub-periods in order to obtain different models based on different observation windows. The empirical tests showed that the model's misclassification rate was lower than that of the classical single-rule model. However, the limited data sample used for testing does not render this model fully validated.
Logit or logistic regression lends itself well to analyses where outcomes fall into one of two discrete alternatives, which is why it has been a commonly used model for bankruptcy prediction. It provides a crisp (as opposed to fuzzy) relationship between explanatory and response variables based on the given data. We denote

logit(p) = log( p / (1 − p) ) = log(p) − log(1 − p),

where p can represent the probability of loan default or a parameter used to measure the credit rating of a financial institution or insurance company. Then we can fit the following model through regression:

logit(p) = f(x1, x2, …, xn)
ing bankrupt objects, logit yielded better results in cases where a higher accuracy in
classifying non-bankrupt firms was required.15 Currently, logit is being used in com-
bination with other models as hybrid techniques. One such application was proposed
by Tseng and Lin in the form of a quadratic interval logistic regression model based
on quadratic programming.16 The goal was to have a quadratic interval logit model
support the logit model to discriminate between groups in cases of a limited number
of firms for default prediction. The classification accuracy achieved was 78%. More
recently, Hua et al. used logistic regression analysis to enhance Support Vector
Machine (SVM) performance, and specifically, to decrease its empirical risk of mis-
classification.17 The model, Integrated Binary Discriminant Rule (IBDR), reduced
the misclassification risk of SVM outputs by interpreting and modifying the outputs
of the SVM classifiers according to the outcomes of logistic regression analysis. The
experiments showed that IBDR outperformed SVM in predictive capabilities.
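A minimal sketch of fitting logit(p) = f(x1, …, xn) for default prediction; the features, labels, and 0.5 cutoff below are hypothetical and purely illustrative.

# Sketch of a logit default model: fit on labelled firms and read off
# default probabilities for a new observation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical firm features: [working capital / TA, retained earnings / TA],
# label 1 = defaulted, 0 = healthy.
X = np.array([[0.05, -0.10], [0.30, 0.25], [0.02, -0.20],
              [0.25, 0.15], [0.10, 0.05], [-0.05, -0.30]])
y = np.array([1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)
p_default = model.predict_proba(np.array([[0.08, -0.02]]))[:, 1]
print(f"estimated default probability: {p_default[0]:.3f}")
print("classified as default" if p_default[0] > 0.5 else "classified as healthy")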
Bayesian Methods
Posch et al. propose a Bayesian methodology that enables banks to improve their
credit scoring models by imposing prior information.18
As prior information, they use coefficients from credit scoring models estimated on other data sets. Through simulations, they explore the default prediction power of three Bayesian estimators in three different scenarios and find that these perform better than standard maximum likelihood estimates.
Structural Approach
The structural approach refers to modeling the driving forces of interest rates and
firm characteristics and subsequently deriving the probability of failure. Several
methods have emerged aiming to assess the likelihood of default; these will be
briefly reviewed below.
The CreditMetrics framework developed by J.P. Morgan uses Monte Carlo simula-
tion to create a portfolio loss distribution at the time horizon and is based on
modeling changes in the credit quality ratings.19 Each obligor is assigned a credit
rating, and a transition matrix based on the “rating migrations” determines the
probabilities that the obligor’s credit rating will be upgraded or downgraded, or that
the obligation defaults. The portfolio value is calculated by randomly simulating
the credit quality of each obligor. The credit instruments are then repriced under
each simulated outcome, and the portfolio value is simply the aggregation of these
prices. Using the diversification benefits of a portfolio framework, the aggregate
risk of stand-alone transactions is reduced. Correlated credit movements of obligors (such as several downgrades occurring simultaneously) are addressed, and concentrations of correlated borrowers in the portfolio result in increased capital requirements.
CreditPortfolioView was developed by Tom Wilson, formerly of McKinsey, as
a credit portfolio model by taking into account the current macroeconomic environ-
ment.20 This method uses default probabilities conditional on the current state of the
economy, rather than using historical default rate averages calculated from past
data. The portfolio loss distribution is conditioned by the current state of the econ-
omy for each country and industry segment.
One of the earlier and more popular models is the asset-based approach originally proposed by Merton.21
KMV views a firm's equity as an option on the firm (held by the shareholders) to either repay the debt of the firm when due, or abandon the firm without paying the obligations. The Merton model rests on two assumptions. The first is that the total value of the firm is assumed to follow a geometric Brownian motion,

dV / V = μ·dt + σ·dW,

where V is the total value of the firm, μ is the expected continuously compounded return on V, σ is the volatility of firm value, and dW is a standard Wiener process.
The second critical assumption of the Merton model is that the firm has issued
just one discount bond maturing in T periods.
Under these assumptions, the equity of the firm is a call option on the underlying value
of the firm with a strike price equal to the face value of the firm’s debt and a time-to-
maturity of T. Then the KMV-Merton model will give very accurate default forecasts.
The probability of default is derived by modeling the market value of the firm as
a geometric Brownian motion. The superiority of this model lies in its reliance on the equity market as an indicator, since it can be argued that the market capitalization of the firm (together with the firm's liabilities) reflects the solvency of the firm.
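Under the two Merton assumptions above, the probability that the firm value falls below the face value of the single discount bond at maturity can be computed directly; the sketch below does this for hypothetical inputs and is not the KMV calibration itself (which backs out V and σ from equity data).

# Default probability implied by the Merton assumptions: V follows geometric
# Brownian motion and default occurs if V_T < F, the face value of the debt.
from math import log, sqrt
from scipy.stats import norm

def merton_default_probability(V, F, mu, sigma, T):
    """Probability that firm value falls below the debt face value at T."""
    d = (log(V / F) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm.cdf(-d)   # P(V_T < F)

# Hypothetical firm: value 120, debt 100 due in one year, 8% drift, 25% volatility.
print(f"PD = {merton_default_probability(120.0, 100.0, 0.08, 0.25, 1.0):.3%}")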
Another approach, by Jarrow and Turnbull, introduced the basic structure of a con-
stant default intensity model.22 It models default as a point process, where the time-
varying hazard function for each credit class is estimated from the credit spreads.
Consider a frictionless economy with a trading horizon [0, t]. Let v1(t, T) denote the time-t value of the XYZ zero-coupon bond promising a dollar at time T ≥ t. After we model the process of v1(t, T), we can price derivatives written on this process. Conversely, we can calibrate the parameters of the process v1(t, T) if we can observe the value of one derivative written on it. Jarrow and Turnbull assume a discrete-time binomial process selected to approximate a continuous-time Poisson bankruptcy process for v1(t, T). The process defaults with pseudo-probability λμi at time ti, and with pseudo-probability 1 − λμi default does not occur.
Hull and White's reduced-form models focus on the risk-neutral hazard rate, h(t).23
This is defined so that h(t)dt is the probability of default between times t and t + dt
as seen at time t assuming no earlier defaults. These models can incorporate corre-
lations between defaults by allowing hazard rates to be stochastic and correlated
with macroeconomic variables.
Hull and White developed a model to value credit default swaps when the
payoff is contingent on default by a single reference entity and there is no coun-
terparty default risk. This model uses a hazard rate h(t) for the default probability
to incorporate a default density concept, which is the unconditional cumulative
default probability within one period regardless of other periods. The model
assumes an expected recovery rate and generates default densities recursively
based on a set of zero-coupon corporate bond prices and a set of zero-coupon
treasury bond prices. Then the premium of a credit default swap contract is cal-
culated using the default density term-structure. The two sets of zero-coupon
bond prices can be bootstrapped from corporate coupon bond prices and treasury
coupon bond prices.
data requirements make the model easy to implement, and the analytical calculation
of the portfolio loss distribution is very fast.
The above sampling of research considers only a single default time. Schönbucher and Schubert proposed a feasible model, based on the reduced-form approach, for the multivariate distribution of default times.25 The basis of the analysis of multivariate dependence with copula functions is the following theorem of Sklar.26 Let X1, …, XN be random variables with marginal distribution functions F1, …, FN and joint distribution function F. Then there exists an N-dimensional copula C such that for all x ∈ RN,

F(x) = C(F1(x1), …, FN(xN)).
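A sketch of how Sklar's theorem is used in practice: exponential marginal default-time distributions are tied together with a Gaussian copula and correlated default times are simulated. The intensities and correlation matrix are hypothetical, and this generic construction stands in for, rather than reproduces, the Schönbucher–Schubert model.

# Simulate dependent default times: Gaussian copula over exponential marginals.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
lambdas = np.array([0.02, 0.03, 0.05])      # hazard rates of three obligors
corr = np.array([[1.0, 0.4, 0.4],
                 [0.4, 1.0, 0.4],
                 [0.4, 0.4, 1.0]])

n_sims = 100_000
z = rng.multivariate_normal(np.zeros(3), corr, size=n_sims)
u = norm.cdf(z)                              # uniforms with Gaussian dependence
default_times = -np.log(1.0 - u) / lambdas   # invert the exponential marginals

# Joint probability that all three obligors default within 5 years.
print((default_times < 5.0).all(axis=1).mean())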
Neural networks provide a new way of performing feature extraction (using hidden layers) and classification (e.g., multilayer perceptrons). In addition, existing feature extraction and classification algorithms can be mapped onto neural network architectures for efficient implementation in hardware. In this section, we discuss two neural network methods applied to credit risk evaluation in our research.
BPNN is the most widely used neural network technique for classification and prediction.29
Figure 11.1 provides the structure of the backpropagation neural network.
With backpropagation, the related input data are repeatedly presented to the
neural network. For each iteration the output of the neural network is compared to
the desired output and an error is calculated. This error is then backpropagated to
the neural network and used to adjust the weights so that the error decreases with
each iteration and the neural model gets progressively closer to producing the
desired output. This process is known as “training”.
Fig. 11.1 Structure of the backpropagation neural network
When the neural networks are trained, three problems should be taken into consid-
eration. First, it is very difficult to select the learning rate for a nonlinear network.
If the learning rate is too large, it leads to unstable learning. Conversely, if the learning
rate is too small, it results in exceedingly long training iterations. Secondly, settling
in a local minimum may be beneficial or detrimental depending on how close the
local minimum is to the global minimum and how small an error is required.
In either case, backpropagation may not always find the weights of the optimum solution, and one may reinitialize the network several times to improve the chance of reaching a good solution. Finally, the network is sensitive to the number of neurons in its hidden layers. Too few neurons can lead to underfitting; too many can cause overfitting: although all training points are well fit, the fitted curve oscillates wildly between these points.31
In order to solve these problems, we preprocess the data before training. The nor-
malization function used to bound the data values by −1 and +1 is as follows:
Y = (yij)m×n = (xij − xijmin) / (xijmax − xijmin),
where X = (xij) is the input matrix, Y is the normalized matrix and Xijmax, Xijmin
are the associated maximum and minimum elements, respectively. The weights are
initialized with random decimal fractions ranging from −1 to 1. In addition, there
are about twelve training algorithms for BPNN. After preliminary analyses and tri-
als, we chose the fastest training algorithm, the Levenberg–Marquardt algorithm,
which can be considered as a trust-region modification of the Gauss–Newton method.
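A sketch of the preprocessing and one-hidden-layer network described above. The Levenberg–Marquardt algorithm used in the chapter is not available in scikit-learn, so the L-BFGS solver stands in for it here, and the financial ratios and "3"–"5" targets are hypothetical.

# Min-max normalization followed by a one-hidden-layer network with a sigmoid
# hidden layer and a linear output layer; targets 3 (bankrupt) and 5 (healthy).
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.array([[0.1, -0.3, 0.02], [0.4, 0.2, 0.10], [0.05, -0.5, 0.01],
              [0.5, 0.3, 0.12], [0.2, 0.0, 0.05], [0.45, 0.25, 0.09]])
y = np.array([3, 5, 3, 5, 3, 5])            # "3"-"5" target coding, cutoff 4

# Column-wise min-max normalization, as in the formula above.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

net = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_norm, y)

pred = net.predict(X_norm)
labels = np.where(pred < 4, "bankrupt", "healthy")   # cutoff point 4
print(list(zip(pred.round(2), labels)))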
Computational Results
We apply the proposed methodologies to one example discussed in the literature. The
data of this example are referred to in Paradi et al.,34 which include two groups of
data. One is the 1995 data for the both the companies that were to go bankrupt during
1996 and the healthy companies. The other group is the 1996 data for the 1997 bank-
ruptcies. All the companies were from the manufacturing sector. Each company is
described by ten attributes, which includes total assets (TA), working capital (WC),
earnings before income, tax, depreciation and amortization (EBITDA), retained earn-
ings (RE), shareholders equity (EQ), total current liabilities (CL), interest expense
(IN), cash flow from operations (CF), stability of earnings (SE) and total liabilities
(TL). The 1995 data include 17 failed companies and 160 healthy companies, and the 1996 data include 11 failed companies and 115 healthy companies. The only crite-
rion for the healthy companies was that they did not go bankrupt before 1998. We use
the 1995 data for training and the 1996 data for prediction.
BPNN Results
In order to test the robustness of the neural networks, two network target codings are used, each of which denotes the two kinds of credit conditions. In detail, we let two numbers (3 and 5, or 3 and 7) represent the two credit conditions (3 for bankruptcy and 5 or 7 for non-bankruptcy). The cutoff points for target setting are 4 and 5, respectively. The performance goal for the former coding is 0.001 and for the latter 0.0005. It is expected that the "3"–"5" model can be trained faster than the "3"–"7" model, since the diagnostic interval of the former is smaller than that of the latter, though the precision settings differ. This is verified by our results.
After the input and output patterns have been determined, some network parameters
need to be carefully chosen in order to yield a good network structure. Through our
experiments and experience, a one-hidden-layer structure is selected, with five neurons in the hidden layer, a sigmoid transfer function for the hidden layer, and a purelin (linear) transfer function for the output layer. The program has been written in C and Matlab using the neural network add-in. Next, the network training module is executed and the weight matrices determining the net structure are obtained. For the "3"–"5" coding, the first-layer and second-layer weight matrices and biases of the BPNN are W1, W2 and B:
Figures 11.3 and 11.4 illustrate the training process of BPNN model.
In order to test the performance of the trained network, we implement the simula-
tion of the network response to inputs of the training sample. The results compiled
in Table 11.1 explain the accomplishment of training for BPNN networks.
After the training data have been successfully classified, we proceed to develop the prediction models. The examination sample includes the 1996 data for
126 companies. Our model, using 5 as the cut-off point, successfully identified all
the healthy companies and misclassified five bankrupt companies. Table 11.2 illustrates
the prediction results for the bankrupt companies.
PNN Results
Our probabilistic neural network (PNN) creates a two-layer network. The first layer
has radial basis transfer function neurons, and calculates its weighted inputs using
the Euclidean distance weight function, and its net input using the product net input
function. The second layer has competitive transfer function neurons, and calculates
its weighted input using the dot product weight function and its net inputs using the
sum net input function. Only the first layer has biases and the biases are all set to
0.8326/spread. The second layer weights are set to the target.35 177 companies are
assigned in the pattern layer and two units in the class layer. This configuration rep-
resents 177 companies applied to each training session and a total of two classes
allowed for two kinds of credit conditions. The network targets are the same as the
targets for BPNN described in Sect. 4.1. For the training data, PNN identifies all
the healthy companies and misclassifies one bankrupt company. Table 11.3 shows
the details of the classification. Next, we applied our prediction PNN model to the
1996 data. We found that the model misclassified five bankrupt companies and four
healthy companies as shown in Tables 11.4 and 11.5. In relative terms, the model
produced 54.55% bankruptcy and 96.52% non-bankruptcy prediction accuracies.
Classification and prediction accuracies of two networks are shown in Table 11.6.
Fig. 11.3 Illustration of training process by BPNN model with two groups denoted by Number “3” and “5”
Fig. 11.4 Illustration of training process by BPNN model with two groups denoted by Number
“3” and “7”
Paradi et al. combined layered worst practice and normal DEA models and obtained
results of 100% out-of-sample classification accuracy for the bankrupt companies
and 67% for the healthy companies.36 Their method constitutes an excellent predictor
of company bankruptcy. In contrast, our study produces impressive non-bankruptcy
classification accuracies. Specifically, the BPNN approach identifies all healthy companies and provides 100% non-bankruptcy classification accuracy. PNN misidentifies only four healthy companies, which gives 96.52% non-bankruptcy classi-
fication accuracies. Therefore, if we combine the DEA approach and the neural
network approach, the new model will likely result in exciting prediction accura-
cies, which would translate into substantial savings for financial institutions and
consequently warrants serious attention.
This chapter reviews selected credit risk detection techniques and then evaluates the
credit risk using two neural network models. Both models yield an impressive
100% bankruptcy and 100% non-bankruptcy classification accuracy in simulating
the training data set. BPNN provides 54.55% bankruptcy and 100% non-bankruptcy prediction accuracies.
End Notes
22. Jarrow, R., and Turnbull, S. (1995). The pricing and hedging of options on financial securities
subject to credit risk, Journal of Finance, 50, 53–85.
23. Hull, J., and White, A. (2000). Valuing credit default swaps I: No counterparty default risk,
Journal of Derivatives, 8:1, 29–40.
24. http://www.csfb.com/institutional/research/assets/creditrisk.pdf; http://www.bica.com.ar/
Archivos_MRM/CreditRisk+byFFT_versionJuly2004.pdf.
25. Schönbucher, P.J., and Schubert, D., Copula-dependent default risk in intensity models.
Technical report, Department of Statistics, Bonn University.
26. Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges, Publications de
l’Institut de Statistique de L’Université de Paris 8, 229–231.
27. Hull, J., and White, A. (2004). Valuation of a CDO and nth to default CDS without Monte
Carlo simulation, Journal of Derivatives, 12:2, 8–23.
28. Nelsen, R.B. (1999). An introduction to copulas, 139 of Lectures Notes in Statistics. Springer,
Berlin Heidelberg New York; Li, D.X. (2000). On default correlation: A copula function
approach. Journal of Fixed Income 9, 43–54, 2000.
29. Hecht Nielsen, R. (1990). Neural computing, Addison Wesley, 124–133.
30. Specht, D.F. (1988). Probabilistic neural networks for classification, mapping, or associative
memory, IEEE International Conference on Neural Networks, San Diego, CA, USA, 1, 525–532.
31. Lai, K.K., Yu, L., and Wang, S. (2006). Neural network metalearning for credit scoring,
Lecture Notes in Computer Science 4113 LNCS – I 403; Xiong, Z.B., Li, R.J. (2005). Credit
risk evaluation with fuzzy neural networks on listed corporations of China, Proceedings of the
2005 IEEE International Workshop on VLSI Design and Video Technology, 479–484.
32. Chen, H.-H., Manry, M.T., and Chandrasekaran, H. (1999). A neural network training algo-
rithm utilizing multiple sets of linear equations, Neurocomputing, 25, 55–72; Liang, L., and
Wu, D. (2005). An application of pattern recognition on scoring Chinese corporations finan-
cial conditions based on backpropagation neural network. Computers and Operations
Research 32, 1115–1129.
33. Kohonen, T. (1989). Self-organization and associative memory. Springer, Berlin Heidelberg
New York; Yu, L.Y., Li, H.L., and Duan, Z.G. (2002). A neural network model in credit risk
assessment based on new risk measurement criterion, Proceedings of the Joint Conference on
Information Sciences 6, 1102–1105.
34. Paradi, J.C., Asmild, M., and Simak, P.C. (2004). Using DEA and worst practice DEA in
credit risk evaluation, Journal of Productivity Analysis 21, 153–166.
35. Wasserman, P.D. (1993). Advanced Methods in Neural Computing, Van Nostrand Reinhold,
New York.
36. Paredi, A., and Simak. (2004). op cit.
Chapter 12
Applying the Real Option Approach to Vendor
Selection in IT Outsourcing
Information technology (IT) outsourcing is one of the major issues facing organizations
in today's rapidly changing business environment. Given the inherent uncertainty of IT outsourcing, it is critical for companies to manage and mitigate the high risks associated with outsourcing practices, including the task of vendor selection. In this study,
we explore the two-stage vendor selection approach in IT outsourcing using real
options analysis. In the first stage, the client engages a vendor for a pilot project and
observes the outcome. Using this observation, the client decides either to continue the
project to the second stage based upon pre-specified terms or to terminate the project.
A case example of outsourcing the development of supply chain management informa-
tion systems for a logistics firm is also presented in the paper. Our findings suggest that
real options analysis is a viable project valuation technique for IT outsourcing.
What began as a means of having routine processes completed by those external
to the firm has exploded into an industry that is on the frontier of product design
and innovation. We are speaking, of course, of outsourcing, the reason for many
corporate restructurings thus far in the twenty-first century. There does not appear to be any abatement of this trend. Outsourcing offers firms the ability, in the face of limited resources, to attract specialized talent to rapidly solve a business issue. And, by outsourcing to several firms simultaneously, corporations are able to mitigate the exposure to project failure that comes with in-sourcing or relying on a single outsourcing vendor.1
Outsourcing offers a firm flexibility.2 By purchasing specialized knowledge
through outsourcing agreements, firms no longer have to deploy internal resources
to solve an array of problems. As circumstances change, firms that outsource
have the ability to adjust and pursue different opportunities rapidly. In essence,
outsourcing is a real option the firm acquires and exercises as warranted.
Information technology is in the forefront of the outsourcing phenomenon. For
instance, Lacity and Willcocks report that IT outsourcing contracts alone were
expected to reach US$ 156 billion by 2004.3 It is also estimated that more than 50%
of companies in the United States outsourced their IT functions in 2006.4
Real options is an alternative valuation method for capturing managerial flexi-
bility that is inherent in IT projects.5 In this study, we explore the multi-stage ven-
dor selection issue in information technology outsourcing using real options
analysis. We use the example of outsourcing the development of supply chain man-
agement information systems for a logistics firm. We find real options to be a viable
project valuation technique for IT outsourcing.
Review of Literature
The past decade has seen an explosion in information technology (IT) outsourcing
for building basic computer applications, systems maintenance and support, routine
process automation, and even strategic systems.6 Estimates suggest that this trend
was likely to continue with projections of IT outsourcing contracts reaching $160
billion in 2005, up from $101 billion in 2000.7
In transferring IT activities to outside suppliers, firms expect to reap various
benefits, from cost savings to increased flexibility, and from improved quality of
services to better access to state-of-the-art technology.8 However, various undesir-
able results have also been associated with IT outsourcing including: service deg-
radation,9 the absence of cost reduction,10 and disagreement between the parties.11
In light of the high IT outsourcing failure rate, several researchers have argued for
adopting a risk management approach to studying and managing IT outsourcing
based on transaction cost theory.12 However, they neglect the vendor selection
issue in managing the IT outsourcing risk.
Vendor Selection
Because IT is an intangible product that can be heavily customized for each com-
pany, it might be very difficult to accurately assess vendor quality during the bid-
ding process. Moreover, even for situations where many aspects of performance can
be measured, not all aspects of IT project outcome may be measurable to a degree
where an outside party (vendor) can certify compliance.13 As such, the vendor
selection problem with non-verifiable outcomes is an important issue in practice
and has attracted attention in the IT outsourcing literature.14
We use a two-stage vendor selection process in IT outsourcing. In the first stage, the
client engages vendors for pilot projects and observes the outcome, while in
the second stage, the client can offer a contract only to high-quality vendor(s).
There are several characteristics of IT projects that make pilot projects particularly
attractive.15 IT projects are unique in that they involve both heterogeneity in vendor
quality and nonverifiable outcomes.
A number of factors aggravate the vendor selection difficulties for IT projects. First,
the unprecedented rate of technological change in IT makes it difficult at the outset to
lock project specifications into an enforceable contract that can be externally monitored.
Real Options
Firms consider the risk of new investments prior to undertaking a new project. The
firm accounts for risk through the capital budgeting function. In capital budgeting
decision-making, the goal is to identify those investment opportunities whose net
value to the firm is positive. Discounted cash flow (DCF) analysis is the traditional
capital budgeting decision model used.18 It involves discounting the expected, time
dependent cash flows for the time value of money and for risk via the calculation
of a net present value (NPV).
NPV = −IO + Σ (t = 1 to n) CFt / (1 + r)^t     (1)
where IO equals the initial cash outlay for the project, CF is the cash flow, and r is
the discount rate.
The NPV represents the expected change in the value of the firm which will
occur if the project is accepted. The decision rule is straightforward: accept all posi-
tive NPV projects and reject all negative NPV projects. A firm is indifferent to a
zero NPV project as no change in current wealth is expected.
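A minimal sketch of Eq. (1): each expected cash flow is discounted at r and the initial outlay subtracted; the figures are hypothetical.

# NPV as in Eq. (1); accept if positive, reject if negative, indifferent at zero.
def npv(initial_outlay, cash_flows, r):
    return -initial_outlay + sum(cf / (1 + r) ** t
                                 for t, cf in enumerate(cash_flows, start=1))

# Example: 2.0 outlay, five years of 0.6 expected cash flow, 10% discount rate.
print(f"NPV = {npv(2.0, [0.6] * 5, 0.10):.3f}")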
Today, most academic researchers, financial practitioners, corporate managers,
and strategists realize that, when market conditions are highly uncertain, expenditures
are at least partially “irreversible,” and decision flexibility is present, the static,
traditional DCF methodology alone fails to provide an adequate decision-making
framework.19 It has been suggested that current corporate investment practices have
been characterized as myopic due, in large part, to their reliance on the traditional
stand-alone DCF analysis.20 An alternative project valuation method is real options
analysis (ROA).
Real options are a type of option where the underlying asset is a real asset, not
a financial asset.21 In general, real options exist when management has the
opportunity, but not the requirement, to alter the existing strategic or the current
operating investment strategy. Real option analysis allows firms to more accurately
evaluate projects by explicitly valuing managerial flexibility.22 Managerial flexibil-
ity is valuable since it allows managers to continually gather information concern-
ing uncertain project and market outcomes, and change the firm’s course of action
invest in EXE is $2.960 million. In the prototype stage, CLI engages each company
for a pilot project and observes the outcome. Based on the outcome of the pilot
projects, CLI decides whether to continue the project with one of these two compa-
nies to the second stage or to terminate the project.
Real option analysis (ROA) is chosen by CLI as the methodology for the vendor
selection process. Using ROA, CLI is able to decide not only which vendor to select but also to determine the optimal level of investment at each stage. We will
provide a step-by-step demonstration of how CLI successfully utilizes the ROA
framework to render a viable decision in its vendor selection process.
The generally accepted methodology for valuing a financial call option is the
Black–Scholes formula.30 The difficulty with using this closed-form solution for valuing real options is that it is difficult to explain, applies only in very specific situations, and limits the modeler's flexibility. On the other hand, the binomial lattice model,
when used to price the movement in the asset value through time, is highly flexible.
It is important to note the results are similar for the closed form Black–Scholes
model and the binomial lattice approach. The more steps added to the binomial
model, the better the approximation.
The binomial asset pricing model is based on a replicating portfolio that combines
borrowing with ownership of the underlying asset to create a cash flow stream equiva-
lent to that of the option. The model is created period by period with the asset value
moving to one of two possible probabilistic outcomes each period. The asset has an
initial value and within the first time period, either moves up to Su or down to Sd. In
the second time period, the asset value can be any of the following: Su2, Sud, Sd2. The
shorter the time interval, the smoother the distribution of outcomes will be.a
The inputs for the binomial lattice model are equivalent to the inputs for the
Black–Scholes model; namely, we need the present value of the underlying asset
(S), the cost of exercising the option (X), the volatility of the cash flows (σ), the
time until expiration (T), the risk free interest rate (rf), and the dividend payout per-
centage (b). We use these inputs to calculate the up (u) and down (d) factors and
the risk neutral probabilities (p).
u = e^(σ√dt)     (3)

d = e^(−σ√dt) = 1/u     (4)

p = (e^((rf − b)·dt) − d) / (u − d)     (5)
a
For a thorough explanation of binomial lattice models, see Mun (2002).
where dt is the length of a time step and p is the risk-neutral probability of an up move.
Initial research indicates that the volatility of SSA's cash flows is 15% annually, and each cell movement in the binomial lattice represents 1.0 time period. In SSA's case, therefore, u = e^(0.15√1) = 1.1618 and d = 1/u = 1/1.1618 = 0.8607. Given a risk-free rate of 7% and no dividends,

p = (e^((0.07 − 0)(1)) − 0.8607) / (1.1618 − 0.8607) = 0.7034.^b
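The lattice parameters of Eqs. (3)–(5) can be checked directly; the sketch below reproduces the SSA figures above and the EXE figures quoted in footnote b.

# Lattice parameters per Eqs. (3)-(5) for SSA (sigma = 15%) and EXE (sigma = 34%),
# with rf = 7%, no dividends, and dt = 1.
from math import exp, sqrt

def lattice_parameters(sigma, rf, b, dt):
    u = exp(sigma * sqrt(dt))                 # up factor, Eq. (3)
    d = 1.0 / u                               # down factor, Eq. (4)
    p = (exp((rf - b) * dt) - d) / (u - d)    # risk-neutral probability, Eq. (5)
    return u, d, p

for name, sigma in [("SSA", 0.15), ("EXE", 0.34)]:
    u, d, p = lattice_parameters(sigma, rf=0.07, b=0.0, dt=1.0)
    print(f"{name}: u = {u:.4f}, d = {d:.4f}, p = {p:.4f}")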
The binomial lattice option model appears as in Fig. 12.1:
Results
By outsourcing SCMS, CLI expects to increase its asset value by $3.764 million regard-
less of which company it chooses to use for outsourcing. The underlying asset value for
CLI if it chooses to outsource to SSA or EXE is as follows (Figs. 12.1–12.5):
Fig. 12.1 Binomial lattice for the underlying asset value (So, with up factor u and down factor d)
Period 0: So
Period 1: Sou, Sod
Period 2: Sou^2, Soud, Sod^2
Period 3: Sou^3, Sou^2d, Soud^2, Sod^3
Period 4: Sou^4, Sou^3d, Sou^2d^2, Soud^3, Sod^4
Period 5: Sou^5, Sou^4d, Sou^3d^2, Sou^2d^3, Soud^4, Sod^5
b EXE has a volatility of 34%. As a result, for EXE, u = 1.4049, d = 0.7118 and p = 0.5204.
Fig. 12.2 Underlying asset lattice for CLI and SSA (000s)
Period 0: 3764.00
Period 1: 4373.14, 3239.70
Period 2: 5080.87, 3764.00, 2788.44
Period 3: 5903.13, 4373.14, 3239.70, 2400.03
Period 4: 6858.46, 5080.87, 3764.00, 2788.44, 2065.73
Period 5: 7968.39, 5903.13, 4373.14, 3239.70, 2400.03, 1777.99
Fig. 12.3 Underlying asset value for CLI and EXE (000s)
Period 0: 3764.00
Period 1: 5288.22, 2679.10
Period 2: 7429.68, 3764.00, 1906.91
Period 3: 10438.31, 5288.22, 2679.10, 1357.28
Period 4: 14665.27, 7429.68, 3764.00, 1906.91, 966.07
Period 5: 20603.94, 10438.31, 5288.22, 2679.10, 1357.28, 687.62
Fig. 12.5 Intermediate-stage option value lattice for CLI and SSA (000s)
Period 0: 1728.40
Period 1: 2180.55, 1078.78
Period 2: 2726.12, 1419.94, 533.54
Period 3: 3377.64, 1847.66, 752.85, 143.95
Period 4: 4149.85, 2372.26, 1055.40, 219.50, 0.00
Period 5: 5063.39, 2998.13, 1468.14, 334.70, 0.00, 0.00
The binomial tree indicates the IT project value will vary from $7.968 million
to $1.77 million at the end of time period five for SSA outsourcing; for EXE, the
project value will vary between $20.603 million and $687 thousand.
Next, CLI calculates the value of the second option. This is done because the value of the compound option is dependent upon the value of the second option. At each node, CLI assesses the project cash flow and compares it to zero; CLI's goal is to maximize its returns at each node. The formula is as follows:

Max(Benefits − Costs, [p·(up value) + (1 − p)·(down value)]·e^(−rf·dt))     (6)

With this formula in mind, the values of the second, or intermediate-stage, option for EXE and SSA are computed node by node, as illustrated below.
For instance, in Fig. 12.5, the node 4,149.85 is calculated by looking at the value of that same node in the underlying asset lattice in Fig. 12.2, 6,858.46, and subtracting the cost of outsourcing, 2,905. We compare this value to the probability of an up event, 0.7034, times the up node value of 5,063.39, plus (1 − 0.7034) times the lower node value of 2,998.13, with this sum discounted back one period at the risk-free rate. The implementation of the formula is as follows:

MAX(6,858.46 − 2,905, [0.7034(5,063.39) + 0.2966(2,998.13)]e^(−0.07(1))) = 4,149.85.
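A sketch of the backward induction implied by Eq. (6), applied to the SSA underlying lattice with the 2,905 outsourcing cost used in the worked example; it reproduces intermediate-stage node values such as 5,063.39 and, up to rounding, 4,149.85.

# Backward induction per Eq. (6) on the SSA lattice: at each node take the
# larger of immediate exercise (asset value minus the 2,905 cost) and the
# discounted risk-neutral expectation of the two successor nodes.
from math import exp, sqrt

S0, sigma, rf, dt, steps, cost = 3764.0, 0.15, 0.07, 1.0, 5, 2905.0
u = exp(sigma * sqrt(dt))
d = 1.0 / u
p = (exp(rf * dt) - d) / (u - d)
disc = exp(-rf * dt)

# Underlying asset lattice: level t has t + 1 nodes (as in Fig. 12.2).
asset = [[S0 * u ** (t - j) * d ** j for j in range(t + 1)] for t in range(steps + 1)]

# Option value lattice rolled back from expiry.
option = [[max(s - cost, 0.0) for s in asset[steps]]]
for t in range(steps - 1, -1, -1):
    nxt = option[0]
    option.insert(0, [max(asset[t][j] - cost,
                          disc * (p * nxt[j] + (1 - p) * nxt[j + 1]))
                      for j in range(t + 1)])

print(round(option[steps][0], 2))   # top terminal node, about 5063.39
print(round(option[4][0], 2))       # about 4,149.85, the node worked above
print(round(option[0][0], 2))       # value at the root of the lattice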
After working our way through the intermediate option values, we move to the option value for the first-stage option. The first-phase option value is dependent upon the intermediate-stage option value. For instance, in Fig. 12.6, 3,277.64 is calculated as follows:

MAX(Intermediate Stage Option Value − Option Cost, [p(previous up node value) + (1 − p)(previous down node value)]e^(−r·dt))
= MAX(3,377.64 − 2,905, [0.7034(4,149.85) + (1 − 0.7034)(2,372.26)]e^(−0.07(1)))
= 3,277.64.
Clearly, CLI should outsource (see Figs. 12.6 and 12.7). Both projects create
value for the firm which far exceeds the development costs. The results show the
Fig. 12.6 First-phase option value lattice for CLI and SSA (000s)
Period 0: 1647.34
Period 1: 2093.61, 991.84
Period 2: 2632.88, 1326.70, 440.30
Period 3: 3277.64, 1747.66, 652.85, 43.95
Fig. 12.7 First-phase option value lattice for CLI and EXE (000s)
Period 0: 1861.09
Period 1: 3076.78, 823.28
Period 2: 4961.93, 1496.33, 217.42
Period 3: 7764.66, 2670.82, 448.08, 0.00
End Notes
1. DiRomualdo, A., and Gurbaxani, V. (1998). Strategic intent for IT outsourcing, Sloan
Management Review, 39:4, 67–80.
2. Lee, J.N., and Kim, Y.G. (1999). Effect of partnership quality on IS outsourcing success:
Conceptual framework and empirical validation, Journal of Management Information
Systems, 15:4, 29–62.
3. Lacity, M.C., and Willcocks, L.P. (1998). An empirical investigation of information technol-
ogy sourcing practices: Lessons from experience, MIS Quarterly, 22:3, 363–408.
4. Brooks, J. (1987). No silver bullet: Essence and accidents in software engineering, IEEE
Computer, 20, 10–19.
5. Herath, H.S.B., and Bremser, W.G. (2005). Real option valuation of research and development
investments: Implications for performance measurement, Managerial Auditing Journal, 20:1,
55–73.
6. King, W.R. (2004). Outsourcing and the future of IT, Information Systems Management, 21:4,
83–84.
7. Fichman, R.G. (2004). Real options and IT platform adoption: Implications for theory and
practice, Information Systems Research, 15:2, 132–154.
8. Pinches, G. (1982). Myopia, capital budgeting and decision-making, Financial Management,
11:3, 6–20.
9. Moad, J. (1989). Contracting with integrators, using outside information system integrators on
an information systems project, Datamation, 35:10, 18.
10. Raynor, M.E., and Leroux, X. (2004). Strategic Flexibility in R&D, Research Technology
Management, 47:3, 27–33.
11. Earl, M.J. (1996). The risks of outsourcing IT, Sloan Management Review, 37:3, 26–32.
12. Lacity, M.C., and Hirschheim, R. (1993). Information Systems Outsourcing, Myths,
Metaphors, and Realities. Chichester, England: Wiley.
13. Grover, V., Cheon, M.J., and Teng, J.T.C. (1996). The effect of service quality and partner-
ship on the outsourcing of information systems functions, Journal of Management Information
Systems, 12:4, 89–116.
14. Violino, J.B., and Caldwell, B. (1998). Analyzing the integrators-Systems integration and
outsourcing is a $300 billion business, but are customers really getting their money’s worth?
Here’s what IT managers really think about their hired guns, Informationweek, 709, 45–113;
Porter, M. (1992). Capital Disadvantage: America’s Failing Capital Investment System,
Harvard Business Review. Boston, Sep/Oct; Willcocks L., Lacity M., and Kern T. (1999).
Risk mitigation in IT outsourcing strategy revisited: longitudinal case research at LISA,
Journal of Strategic Information Systems, 8, 285–314.
15. Vijayan, J. (2002). The outsourcing boom, Computerworld, 36:12, 42–43.
16. Kern, T., Willcocks, L., and van Heck, E. (2002). The winner’s curse in IT outsourcing: strate-
gies for avoiding relational trauma, California Management Review, 44:2, 47–69.
17. Mun, J. (2002). Real Options Analysis: Tools and Techniques for Valuing Strategic
Investments and Decisions, New Jersey: Wiley.
18. Copeland, T., and Antikarov, V. (2001). Real Options – A Practitioner’s Guide, New York,
Texere LLC.
19. Lacity and Hirschheim. (1993). op. cit.
20. Dixit, A., and Pindyck, R. (1994). Investment Under Uncertainty: Keeping One’s Options
Open, Journal of Economic Literature, 32:4, 1816–1831.
21. McFarlan, F.W., and Nolan, R.L. (1995). How to manage an IT outsourcing alliance, Sloan
Management Review, 36:2, 9–23.
22. Amram, M., and Kulatilaka, N. (1999). Real Options: Managing Strategic Investment in an
Uncertain World. Boston: Harvard Business School Press; Newton, D.P., Paxson, D.A., and
Widdicks, M. (2004). Real R&D Options 1, International Journal of Management Reviews,
5:2, 113.
23. Lewis, N., Enke, D., and Spurlock, D. (2004). Valuation for the strategic management of
research and development Projects: The deferral option, Engineering Management Journal,
16:4, 36–49.
24. Scheier, R.L. (1996). Outsourcing’s fine print, Computerworld, 30:3, 70.
25. Lacity and Willcocks. (1998). op. cit.
26. Alessandri, T., Ford, D., Lander, D., Leggio, K., and Taylor, M. (2004). Managing Risk and
Uncertainty in Complex Capital Projects, Quarterly Review of Economics and Finance, 44:4,
751–767.
27. Taudes, A. (1998). Software growth options, Journal of Management Information Systems
15:1, 165–185.
28. Schwartz, E.S., and Zozaya-Gorostiza, C. (2003). Investment under uncertainty in informa-
tion technology: Acquisition and development projects, Management Science, 49:1, 57–70.
29. Fichman. (2004). op. cit.
30. Black, F., and Scholes, M. (1973). The pricing of options and corporate liabilities,
Journal of Political Economy, 81, 637–659.
Part IV
Applications of ERM in China
Chapter 13
Assessment of Banking Operational Risk
The main risks in banking management are credit risk, market risk and operational
risk (oprisk). The British Bankers' Association (BBA) and Coopers and Lybrand surveyed BBA's 45 member banks in 1997, and the report showed that more than 67% of banks considered oprisk to be of more concern than market risk and credit risk; 24% of banks had suffered losses of more than £100 million during the three years prior to the survey.1 The worldwide survey on oprisk by the Basel Committee (2002) showed that respondent banks had reported 47,029 oprisk cases with losses of over €1 million, with each bank experiencing 528 oprisk cases on average.2 Over the past decade, financial institutions have suffered several large operational loss events leading to banking failures. Memorable examples include the Barings bankruptcy in 1995 and the $691 million trading loss at Allfirst Financial.
Obviously, oprisk is a very serious problem in the banking system at present. These
events have led regulators and the banking industry to recognize the importance of
oprisk in shaping the risk profiles of financial institutions.
Unlike credit risk and market risk, oprisk has no universally agreed definition. There are three viewpoints on the definition of oprisk:3 a generalized concept regards all kinds of risk except market risk and credit risk as oprisk; a narrowed concept regards only the risks related to the operational departments of financial institutions as oprisk. Obviously, the generalized concept makes it difficult for managers to measure oprisk accurately, and the narrowed concept cannot cover all the oprisks that cause banks to suffer unexpected losses. Therefore, we prefer the third definition – a concept between the generalized and the narrowed one. This concept first divides bank events into two types, controllable and non-controllable, and then regards the risks arising from controllable events as oprisk. The definitions from the Basel Committee and the BBA are the most representative ones belonging to this third conception. In Basel Accord II, the Basel Committee has incorporated into its proposed capital framework an explicit capital requirement for oprisk, defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.4 The BBA indicated in its famous 1997 survey that it is difficult to control oprisk on a coherent basis if a bank does not have a proper risk management framework. The BBA, according to its own banking practice, directly defined oprisk as "the risk of direct or indirect loss caused by the imperfections or errors of internal procedures, personnel and systems or external events."5
The Basel Committee proposed three distinct options for the calculation of the capital charge for oprisk. The use of these approaches of increasing risk sensitivity is determined by the risk management systems of the banks. Basel II was intended to improve risk management by allowing the use of different methods to measure credit risk and oprisk, and by allowing banks and supervisors to select one or more methods most in accord with their banking operations and financial market status. For all types of risk, the Basel Committee encourages banks to use their own methods for assessing their risk exposure. Indeed, the absence of reliable and sufficiently large internal operational loss databases in many banks has hindered their progress in modeling their operational losses. The three approaches for measuring oprisk (see Table 13.1) proposed by Basel Accord II are the Basic Indicator Approach (BIA), the Standardized Approach (SA) and the Advanced Measurement Approach (AMA).6 In AMA the oprisk capital requirement can be described as

Σi Σj γ(i, j) × EI(i, j) × PE(i, j) × LGE(i, j),

where i denotes the operation type, j denotes the risk type, and γ(i, j) is the operator converting the expected loss EL into a capital requirement; the parameter γ is set by the supervisory department according to the operational loss data of the whole banking industry. EI(i, j) denotes the oprisk exposure of (i, j); PE(i, j) denotes the probability of loss events occurring on (i, j); and LGE(i, j) denotes the loss severity when events occur on (i, j). These three parameters – EI(i, j), PE(i, j), and LGE(i, j) – are estimated internally by banks. However, the parameter γ mainly reflects the risk distribution of the whole banking industry, which is not always associated with the risk distribution of a specific institution or a specific operation. Meanwhile, AMA faces some obstacles in practice: most banks lack the internal historical data needed to estimate oprisk, the external data do not match the potential losses of the bank, and so on. The Loss Distribution Approach (LDA) is based on hypothesized distributions of oprisk occurrence frequency and loss severity. LDA estimates the empirical probability distributions of these two factors using techniques such as Monte Carlo simulation. However, LDA has been implemented in only a few large banks, because of the lack of comparable internal data across banks with which to estimate the various hypothesized distributions.7 Oprisk-VaR models for financial institutions have also been proposed.8 Oprisk-VaR regards the various internal control points in the related operational flows as reference points, and then estimates the maximal loss (ML) at every reference point if control of the system is lost, together with the probability (P) of losing control; the VaR of a reference point is then ML × P. There is great difficulty in applying VaR with historical simulation because of the lack of historical data on oprisk losses. At the same time, oprisk events have a low probability but a huge loss.
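As an illustration of the Loss Distribution Approach just described, the sketch below assumes a Poisson frequency and a lognormal severity for a single business-line/risk-type cell, simulates the aggregate annual loss by Monte Carlo, and reads off a high quantile; all parameters are hypothetical rather than calibrated to any bank.

# LDA sketch: Poisson frequency, lognormal severity, Monte Carlo aggregation.
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000
freq_lambda = 25                 # expected number of loss events per year
sev_mu, sev_sigma = 10.0, 2.0    # lognormal severity parameters

counts = rng.poisson(freq_lambda, size=n_years)
annual_loss = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                        for n in counts])

expected_loss = annual_loss.mean()
var_999 = np.quantile(annual_loss, 0.999)    # 99.9% quantile of aggregate loss
print(f"expected annual loss: {expected_loss:,.0f}")
print(f"99.9% aggregate loss quantile: {var_999:,.0f}")
print(f"unexpected loss (capital proxy): {var_999 - expected_loss:,.0f}")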
There are many disputes between supervisors and bankers about the definition, measurement and control of oprisk, because of the lack of practical experience in banks. Basel Accord II has not provided risk-sensitive tools for banks to measure and manage oprisk exposure. Owing to the difficulties in obtaining rating data, oprisk has long been controlled through operation handbooks or risk checklists. The potential losses from oprisk, market risk and credit risk are different. The probabilities of credit and market risk follow the normal distribution and can be described and quantified by the probability distribution, so banks can take effective risk measurements using their historical data. Unexpected oprisk, by contrast, has a lower frequency but more serious consequences. Other research has focused on measurement elements and management frameworks,10 and on introducing fuzzy mathematics and dynamic models into this field.11
The economic advantages of the more advanced methods are most obvious for the larger banks. To use the more complex methods, a number of requirements must be fulfilled: banks must be able to quantify their risk according to the basic principles of Basel II, and a number of procedural requirements must also be met. First of all, banks need a strong framework that provides technical and decision-making support for operational risk management. We define this framework as consisting of three aspects: the oprisk stratagem established by the bank's directorate; the policies implemented by an independent oprisk management department within the bank; and the risk supervising process (see Fig. 13.1).
The unpredictability of operational risk over time makes purely statistical methods unreliable. In addition, the incomplete internal conditions of China's commercial banks and the immaturity of its capital market mean that the assumptions of operational risk models developed for mature markets cannot be satisfied.
Fig. 13.1 Operational risk management framework: the directorate establishes the oprisk stratagem, an independent oprisk management department implements the oprisk management policy, and a supervising process provides identification and sustained amendment
A basic probability assignment (BPA) m over the frame of discernment Θ satisfies

m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1 \qquad (1)

Suppose Bel1 and Bel2 are belief functions over the same frame Θ, with BPAs m1 and m2 and focal elements A1, …, Ak and B1, …, Bl, respectively. If Bel1 ⊕ Bel2 exists and its basic probability assignment is m, then the function m: 2^Θ → [0, 1] is defined by

m(A) = \begin{cases} K \sum_{A_i \cap B_j = A} m_1(A_i)\, m_2(B_j), & A \neq \emptyset \\ 0, & A = \emptyset \end{cases} \qquad (2)

where

K = \left(1 - \sum_{A_i \cap B_j = \emptyset} m_1(A_i)\, m_2(B_j)\right)^{-1}, \qquad \forall A \subseteq \Theta,\; A_i, B_j \subseteq \Theta. \qquad (3)
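A minimal Python sketch of the combination rule in formulas (2) and (3), assuming BPAs are represented as dictionaries mapping focal elements (frozensets of grades) to masses; the frame and the masses below are hypothetical:

    # Minimal sketch of Dempster's rule (formulas (2)-(3)) for two BPAs over a
    # frame Theta; focal elements are frozensets, masses are floats.

    def dempster_combine(m1, m2):
        conflict = 0.0        # total mass assigned to empty intersections
        combined = {}
        for A, mA in m1.items():
            for B, mB in m2.items():
                C = A & B
                if C:
                    combined[C] = combined.get(C, 0.0) + mA * mB
                else:
                    conflict += mA * mB
        K = 1.0 / (1.0 - conflict)      # normalization factor of formula (3)
        return {C: K * mass for C, mass in combined.items()}

    # Hypothetical example on a frame of three grades.
    theta = frozenset({"good", "neutral", "worse"})
    m1 = {frozenset({"good"}): 0.6, frozenset({"neutral"}): 0.3, theta: 0.1}
    m2 = {frozenset({"good"}): 0.5, frozenset({"worse"}): 0.4, theta: 0.1}
    print(dempster_combine(m1, m2))

The factor K simply redistributes the mass that falls on conflicting (empty-intersection) pairs, exactly as in formula (3).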
The weights in the combining formula (2) are equal, whereas evidence weights in banking operational risk assessment are normally unequal; we therefore weight-average the basic probability assignments by estimating the distance between pieces of evidence. Following Bayesian probability theory, the smaller the distance between pieces of evidence, the more comparable and reliable they are, and the influence of distance on evidence reliability is positively correlated with the number of evidence sources.
If Θ is a frame of discernment comprising different propositions, and m_i and m_j are BPAs over Θ, then the distance between m_i and m_j can be described by
\mathrm{dis}(m_i, m_j) = \sqrt{\tfrac{1}{2}\left(\|\vec{m}_i\|^2 + \|\vec{m}_j\|^2 - 2\,\langle \vec{m}_i, \vec{m}_j \rangle\right)} \qquad (4)

\langle \vec{m}_i, \vec{m}_j \rangle = \sum_i \sum_j m_i(A_i)\, m_j(B_j)\, \frac{|A_i \cap B_j|}{|A_i \cup B_j|}, \qquad A_i, B_j \in P(\Theta) \qquad (5)
The greater the distance between two pieces of evidence, the smaller their similarity. We can therefore define the similarity of m_i and m_j, the supporting degree of evidence m_i within the system, and the credibility weight as:22

\mathrm{Sim}(m_i, m_j) = 1 - \mathrm{dis}(m_i, m_j) \qquad (6)

\mathrm{Sup}(m_i) = \sum_{j=1,\, j \neq i}^{n} \mathrm{Sim}(m_i, m_j) \qquad (7)

w_i = \mathrm{Crd}(m_i) = \frac{\mathrm{Sup}(m_i)}{\sum_{i=1}^{n} \mathrm{Sup}(m_i)}, \qquad \sum_i w_i = 1 \qquad (8)
We thus obtain the weight w_i of each piece of evidence from the same group of experts, and combine the evidence using the weighted-average form of Dempster's rule of combination.
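A sketch of the distance-based weights of formulas (4)-(8), under the simplifying assumption that every BPA assigns mass to singleton grades only (as the matrices reported later do), so that the ratio |A_i ∩ B_j| / |A_i ∪ B_j| in formula (5) reduces to 1 on matching grades and 0 otherwise; the example reuses the first BPA matrix m1 reported for bank A:

    import math

    # Sketch of formulas (4)-(8) for BPAs given as lists of masses over the
    # five singleton grades.

    def inner(mi, mj):                    # formula (5), singleton focal elements
        return sum(a * b for a, b in zip(mi, mj))

    def dis(mi, mj):                      # formula (4)
        gap = inner(mi, mi) + inner(mj, mj) - 2 * inner(mi, mj)
        return math.sqrt(max(0.0, 0.5 * gap))

    def credibility_weights(bpas):        # formulas (6)-(8)
        n = len(bpas)
        sup = [sum(1 - dis(bpas[i], bpas[j]) for j in range(n) if j != i)
               for i in range(n)]
        total = sum(sup)
        return [s / total for s in sup]   # weights w_i, summing to 1

    def weighted_average_bpa(bpas, w):
        return [sum(w[i] * bpas[i][k] for i in range(len(bpas)))
                for k in range(len(bpas[0]))]

    # First BPA matrix m1 from the chapter (three expert groups, bank A).
    m1 = [[0.04, 0.24, 0.44, 0.24, 0.04],
          [0.024, 0.461, 0.236, 0.255, 0.024],
          [0.026, 0.683, 0.239, 0.026, 0.026]]
    w = credibility_weights(m1)
    print(w, weighted_average_bpa(m1, w))

The averaged BPA would then be combined with Dempster's rule, following the weighted-average combination described above.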
To establish a uniform and standardized rating system for commercial banks, the CBRC issued the Internal Guidelines of Supervisory and Rating for Commercial Banking (IGSRCB) in January 2006. The guidelines are based on the CAMEL rating23 and adapted to the actual situation of commercial banks in China. This chapter focuses on the "management" rating of commercial banks in the IGSRCB. Based on the operational risk management framework (Fig. 13.1), we used a designed questionnaire to gather relevant operational risk assessment knowledge from experts. We designed the operational risk rating indicator system around the following four aspects:
Strategic plan indicators mainly concern careful planning based on environmental analysis, concrete measures, and verifiable inspection to ensure that the plan is fulfilled. The allocation of resources should be consistent with strategic plans, and risk management should be integrated with planning and decision-making.
Service quality indicators involve extensive communication between the bank and its clients; efforts by managers to improve client relationships, understand potential client needs, and reduce credit risk; the competitiveness of interest rates; and the rationality of pricing of banking services.
These indicators assess the bank's risk analysis process, policies, and oversight relative to the size and complexity of the institution, the type and volume of e-commerce services, and technological investment risk. The bank should also have a tested contingency plan in place for the possible failure of its computer systems.
Staff should be thoroughly trained in specific operations as well as in the banking industry's philosophy. A training program should be in place, and cross-training programs for office staff should be available. Absence of key personnel and interruptions in the labor force must be avoided.
This aspect was evaluated in terms of compliance with all applicable laws and regulations; reputation; legal and public obligations; the rationality of compensation policies for senior management; avoidance of conflicts of interest; responsiveness to audit suggestions and requirements; and professional ethics and behavior.
The resulting operational risk indicator system is shown in Table 13.2. F = {f1, f2, f3, f4} is the set of top-level operational risk indicators and q_i is the weight of f_i (i = 1, 2, 3, 4); we set the indicator weights q = (0.25, 0.25, 0.3, 0.2) according to the CAMEL system's analysis of banking operational management. The top-level indicator strategic plan (f1) has four sub-indicators F1 = {f11, f12, f13, f14}, and we likewise obtain the sub-indicator sets F2, F3, F4 of f2, f3, f4.
We selected experts with more than 10 years of work experience and more than 5 years of management experience in banks as our survey population, and grouped them into three types by position and specialty – outside managers (E1), technologists (E2), and internal operators (E3) – denoting the expert set E = {E1, E2, E3}. For indicator f11 we therefore have the grade set F11 = {E1(f11); E2(f11); E3(f11)} from the three expert groups.
The weights w_i of the pieces of evidence within the same group were estimated using the distance-based credibility of formula (8).
The weights of evidence from different groups were estimated from the experts' specialties and backgrounds. By expert meeting, we gave E1 higher weights on f1 and f4, E2 a higher weight on f3, and E3 a higher weight on f2. According to the experts' judgments we obtained the initial weights of the different groups on the four top indicators f1, f2, f3, f4, namely e_ij with \sum_{j=1}^{4} e_{ij} = 1, where e_ij denotes the weight of group i on indicator j. Normalizing the group weights for each indicator then gives e*_1j = (0.4, 0.267, 0.267, 0.4), e*_2j = (0.3, 0.3, 0.433, 0.3), and e*_3j = (0.3, 0.433, 0.3, 0.3).
We designed the survey questionnaire to cover the four top-level indicators F = {f1, f2, f3, f4} described above. The experts were asked to estimate the risk exposure, the probability, and the loss of risk events. Here we use the experts' operational risk judgments for three state-controlled banks. The set of evaluation grades is H = {1, 2, 3, 4, 5} = {excellent, good, neutral, worse, worst}.
Step 1: Obtain the distributed assessments (belief degrees) and the weights of experts within the same group:

E_i = \{(H_n, \beta_{n,i}),\; n = 1, \ldots, 5;\; i = 1, 2, 3\}, \qquad 0 \le \beta_{n,i} \le 1

Here, \beta_{H,i} = 1 - \sum_{n=1}^{5} \beta_{n,i}, \quad i = 1, 2, 3.

The BPAs of the four main indicators from each group of experts, using bank A as an example, are given by the initial BPA matrices m1, m2, m3, and m4 below.
m_1 = \begin{bmatrix} 0.04 & 0.24 & 0.44 & 0.24 & 0.04 \\ 0.024 & 0.461 & 0.236 & 0.255 & 0.024 \\ 0.026 & 0.683 & 0.239 & 0.026 & 0.026 \end{bmatrix}, \qquad
m_2 = \begin{bmatrix} 0.04 & 0.36 & 0.52 & 0.04 & 0.04 \\ 0.022 & 0.732 & 0.202 & 0.022 & 0.022 \\ 0.02 & 0.92 & 0.02 & 0.02 & 0.02 \end{bmatrix},

m_3 = \begin{bmatrix} 0.04 & 0.54 & 0.34 & 0.04 & 0.04 \\ 0.133 & 0.686 & 0.133 & 0.024 & 0.024 \\ 0.028 & 0.673 & 0.243 & 0.028 & 0.0028 \end{bmatrix}, \qquad
m_4 = \begin{bmatrix} 0.04 & 0.64 & 0.04 & 0.24 & 0.04 \\ 0.681 & 0.244 & 0.025 & 0.025 & 0.025 \\ 0.015 & 0.94 & 0.015 & 0.015 & 0.015 \end{bmatrix}
Step 2: Compute the distances dis(m_i, m_j) for the rating data of the four indicators of bank A, and calculate Crd(m_i) as the within-group expert weights w_ij (i = 1, 2, 3, 4 indexes the four main indicators, j = 1, 2, 3 the three expert groups) of the basic probability assignments of the evidence, using formulas (6)-(8). Then amend the BPAs of bank A according to w_ij and derive the combined operational risk rating using the adjusted combination rule.
Step 3: Normalize the weights of experts from different groups and assign the basic probability mass (attribute / expert group i). Combining the probabilities of each level over the four indicators, we obtain the general score of operational risk management in bank A using the integrated grade F = w_j · E′ · q_i.
Finally, we obtain the general scores of operational risk management in banks B and C by the same method and compare the results of the three banks (Table 13.4).
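As a sketch of this final scoring step, the subentry grade of each indicator is the grade value weighted by its combined belief degree, and the general score weights these grades by the indicator weights q_i; the belief degrees below are the bank A figures reported in Table 13.3:

    # Sketch of the final scoring step using the bank A distributions of Table 13.3.
    levels = [5, 4, 3, 2, 1]                  # assessment grade values
    q = [0.25, 0.25, 0.30, 0.20]              # indicator weights q_i
    combined = [                              # combined belief degrees per level
        [0.030, 0.463, 0.304, 0.173, 0.030],  # f1
        [0.027, 0.674, 0.245, 0.027, 0.027],  # f2
        [0.067, 0.633, 0.238, 0.031, 0.031],  # f3
        [0.249, 0.606, 0.027, 0.092, 0.027],  # f4
    ]
    grades = [sum(l * b for l, b in zip(levels, probs)) for probs in combined]
    general = sum(g * w for g, w in zip(grades, q))
    print([round(g, 2) for g in grades], round(general, 3))
    # Reproduces the subentry grades 3.29, 3.65, 3.67, 3.96 and the general
    # score of about 3.63 reported for bank A.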
Conclusion
The outcome of the DS evidential aggregation is itself a distribution over the top attribute (see the shaded cells in Tables 13.3 and 13.4). A general score can be calculated from this distribution by summing each assessment grade value weighted by its associated belief degree. This score will normally differ from that of a simple weighted-sum method. From Table 13.4 we can detect clearly and easily where and what caused the operational risk in a bank. We conclude that:
Table 13.3 Probability of each level and score of operational risk management in bank A

         Probability in each level                 Subentry   Index
         5       4       3       2       1         grade      weight q_i   Score
f1       0.03    0.463   0.304   0.173   0.030     3.29       0.25         0.823
f2       0.027   0.674   0.245   0.027   0.027     3.65       0.25         0.912
f3       0.067   0.633   0.238   0.031   0.031     3.67       0.30         1.102
f4       0.249   0.606   0.027   0.092   0.027     3.96       0.20         0.792
Score    1.865   9.504   2.442   0.646   0.115     14.572     1.00         3.629
● Comparing the subentry scores by main indicator: bank A's subentry scores are mostly in the middle; its indicator f4 (efficiency of directorate) is the best and its f1 (strategic plan) the worst among the three banks. Bank C has the best overall score among the three banks because of its clear advantage in the main indicators f1, f2 (service quality), and f3 (internal control), while f4 is the "short leg" of bank B's operational risk management. Before combining the evidence we also obtain assessments of the sub-indicators; for example, f3 (internal control) has four sub-indicators. Using these sub-indicator ratings, bank managers can obtain more detailed information on operational risk control, so it is reasonable for them to adjust policies and procedures to control and mitigate operational risk.
● We can offer suggestions on policies for mitigating operational risk in each bank: managers in bank A need to inspect the strategic plans related to their operational flows and analyze the environment carefully to ensure that the allocation of resources is consistent with strategic plans; for bank B it is important to improve the performance of the directorate, which requires examining the rationality of its policies on interests, such as compensation for senior management; and although bank C obtained the best score among the three banks, it still needs to work hard on the performance of its directorate.
● From the characteristics of the different expert groups, we find that the scores from the outside managers and technologists were steadier and their BPAs higher, especially those of the managers (see the m_ij matrices). The conflict among the evidence of the three expert groups was very small.
In this demonstration, DS evidential theory provided a tool for bringing uncertain information into the Scorecard Approach. With insufficient data, it would be hard to obtain a rational assessment using the methods discussed in Table 13.1 alone; using the DS combination rule, this uncertain information can be exploited to improve the Scorecard Approach.
Acknowledgement This work was supported by the China Natural Science Fund (J0624004), the Soft Science Research Program of Anhui (03035005), the Literae Humaniores Program of Anhui (2004SK003ZD), and the Natural Science Fund of Anhui (050460403). The anonymous referees' comments contributed importantly to the improvement of this paper. We also wish to thank Professor J B Yang of the University of Manchester, Prof. Garth Allen of the Monfort College of Business, University of Northern Colorado, and the Fulbright Scholar Larry Shotwell at Shanghai University of Finance and Economics for their help and guidance.
End Notes
1. Zhengrong, L., and Guojian, L. (2006). Reference and revelation of international advanced
experience of oprisk management, Gansu Finance, 50–53.
2. Basel Committee on Banking Supervision (2002). Working Paper on the Regulatory Treatment
of Operational Risk.
3. Xiaopu, Z., Xun, L., and Ling, L. (2006). The Classification principles of operational risk loss
event. The Banker, 122–125.
4. Basel Committee on Banking Supervision. (2001). Operational Risk, Consultative Document,
Basel, September, URL: http://www.bis.org.
5. Wei, Z., Yuan, W. (2004). Operational risk management framework of new Basel accord.
International Finance Study, 4, 44–52.
6. Wei, Z., Wenyi, S. (2004). The new basel accord and the principle of operational risk manage-
ment. Finance and Trade Economics.12, 13–20.
7. Shusong, B. (2003). Operational risk measurement and capital restriction under the new basel
accord, Economic Theory and Economic Management. 2, 25–31.
8. Acerbi, C., and Tasche, D. (2001) Expected Shortfall: A Natural Coherent Alternative to
Value at Risk, Working Paper.
9. Mori, T., and Harada, E. (2001). Internal Measurement Approach to Operational Risk Capital
Charge, Bank of Japan, Discussion Paper.
10. Federal Deposit Insurance Corporation. (2003). Supervisory Guidance on Operational Risk
Advanced Measurement Approaches for Regulatory Capital, July: http://www.fdic.gov/regula-
tions/laws/publiccomments/basel/oprisk.pdf; Kühn, R. (2003). Functional correlation
approach to operational risk in banking organizations. Physica A, 650–666.
11. Scandizzo, S. (1999). A Fuzzy Clustering Approach for the Measurement of Operational Risk
Knowledge-Based Intelligent Information Engineering Systems. Third International
Conference 31 Aug. – 1 Sept. 1999, 324–328; Giampiero, Beroggi, E.G., and Wallace, W.A.
(2000). Multi-expert operational risk management, IEEE Transactions on Systems, Man and
Cybernetics, Part C, 30:1, 32–44.
12. Buchanan, B.G., and Shortliffe, E.H. (1984). Rule-Based Expert Systems, Addison-Wesley,
Reading, MA.
13. Dempster, A.P. (1967). Upper and lower probabilities induced by a multi-valued mapping,
Annals of Mathematical Statistics, 38, 325–339.
14. Shafer, G. (1976). A Mathematical Theory of Evidence, Princeton University Press, Princeton,
NJ, 35–57.
15. Xinsheng, D. (1993). Evidence Theory and Decision, Artificial Intelligence. Beijing: Renmin
University of China Press 3:13–19.
16. Siow, C.H.R., Yang, J.B., and Dale, B.G. (2001). A new modelling framework for organisa-
tional self-assessment: Development and application. Quality Management Journal, 8:4,
34–47; Yang, J.B., and Xu, D.L. (2005). An intelligent decision system based on evidential
reasoning approach and its applications. Journal of Telecommunications and Information
Technology, 3: 73–80.
17. Wang, J., and Yang, J.B. (2001). A subjective safety based decision making approach for
evaluation of safety requirements specifications in software development. International
Journal of Reliability, Quality and Safety Engineering, 8:1, 35–57; Sii, H.S., Wang, J., Pillay,
A., Yang, J.B., Kim, S., and Saajedi, A. (2004). Use of advances in technology in marine risk
assessment, Risk Analysis, 24:4, 1011–1033.
18. Wang, Y.M., Yang, J.B., and Xu, D.L. (2006). Environmental impact assessment using the
evidential reasoning approach. European Journal of Operational Research, 174:3,
1885–1913.
19. Yang, J.B., and Xu, D.L. (2002). On the evidential reasoning algorithm for multiple attribute
decision analysis under uncertainty, IEEE Transactions on Systems, Man and Cybernetics.
Part A. 32, 289–304.
20. Shanlin, Y., Weidong, Z., and Minglun, R. (2004). Learning based combination of expert
opinions in securities market forecasting. Journal of Systems Engineering, 96–100.
21. Ibid.
22. Yong, D., Wenkang, S., and Zhengfu, Z. (2004). An efficient combination method to process
conflict evidences, Journal of Infrared and Millimeter Waves, 23:1, 27–33
23. Morgan, D.P., and Ashcraft, A.B. (2003). Using loan rates to measure and regulate bank risk.
Journal of Financial Services Research, 24:2/3, 181–200.
Chapter 14
Case Study of Risks in Cailing Chemical
Corporation
Risks in process industries have been widely studied in China.5 According to its business leaders, the risks in Cailing Chemical Corporation can be classified into 14 types, as follows:
Quality Risk
(b) Another aspect is that the qualification rates and competitiveness of the finished products are very low. The main problem with the finished products is their low nutrient content, which confines the company to the low-nutrient product market, where only low profit margins can be earned. The low qualification rates also hurt market share.
(c) In addition, no Total Quality Management system has been established. Apart from product quality, which is emphasized, the maintenance quality of equipment, decision quality, and management quality, among other things, are not given enough attention. Investigation showed that equipment is often repeatedly repaired because the same violations recur.
Safety Risk
Cailing has done a great deal of work at the aspect of safety production manage-
ment, but its performance in this area still needs to be strengthened. In 2002, the
accident count was 8 but in 2006 the figure elevated to 31. Our study discovered
that the safety risk of Cailing Chemical Corporation needs to focus on the accident
type, which is concentrated on mechanical injury, injured by vehicles and from
heights and others. The distribution of accident is concentrated on Nitrogenous
Fertilizer Plants, Sulfuric Acid Plant, Compound Fertilizer Plant and machinery
repairing plant. There are several factors responsible for accident in Cailing:
(a) Equipment and its management. Equipment management is split among three departments and multiple management layers, which fragments responsibility and makes equipment management very inefficient. Moreover, much of the equipment suffers from heavy corrosion and aging, which is one of the most serious potential safety hazards and affects normal production.
(b) Shortage of competent workers. The loss of highly skilled mechanics affects the operation and maintenance of equipment; many operators have poor safety awareness; rule violations are common; and the same safety accidents sometimes recur.
(c) Safety education. There is no systematic plan or scheme for safety education, nor systematic implementation of safety education, and routine rescue drills are not institutionalized.
(d) Working environment. Some potential safety hazards remain, such as narrow workplaces and potholed pavement, and several serious potential safety hazards have yet to be remedied.
Marketing Risk
Marketing risk is also one of the most important risks of the corporation. There are
many factors that lead to marketing risk and such factors are:
(a) Organization. The marketing organization has several problems, for example, unclear duties and scopes of work, poor information communication, and lack of flexibility in the system.
(b) Marketing concept and means. Its means of sales promotion are one-dimensional and it lacks a systematic marketing strategy. The after-sales service system is imperfect: no one is assigned to follow up after sales or to maintain the corresponding records, and customers are not surveyed promptly after sales, so customer needs cannot be understood in time.
(c) Competitors. Because profits in the phosphorus chemical industry are small, competition is fierce, and new entrants and substitutes continually appear; all of these increase marketing risk.
(d) Marketing channel. There are serious customer attrition problems and reliance on a single marketing channel, while new marketing channels and new customers are developed too slowly.
(e) Credibility. Sometimes customer needs are not met in time because of the poor quality of some salespeople, so customer complaints often arise.
Technology Risk
The phosphorus chemical industry is not a high-tech industry, new substitutes continually appear, and customer demand for green products and technology keeps increasing. Against this background, Cailing Chemical Corporation faces the following technology risks: (a) competitive risk from new substitutes, green products, and green technologies; (b) risk of technology loss through brain drain; (c) quality risk due to insufficient technological capacity; and (d) risk of losing its technological advantage because the pace of development cannot meet demand.
Policy Risk
(a) Agricultural policy: serving agriculture is the main function of Cailing Chemical Corporation's main products, so any change in agricultural policy creates risk for the corporation.
(b) Environmental protection policy: phosphorus chemical fertilizer is not a bio-fertilizer and may pollute the environment; moreover, the production process for phosphorus chemical fertilizer carries substantial potential pollution risk, so environmental protection policies can have an unfavorable impact.
(c) Local protectionist policy: to protect the interests of local phosphorus chemical enterprises, dealers, or farmers, local governments sometimes introduce new policies, which bring additional uncertainty to Cailing Chemical Corporation.
Organization Risk
(b) Serious in-fighting and lack of effective competition. The organizational setup violates the principle of matching authority, responsibility, and benefit, and coordination between units is lacking.
(c) Complex structure and low organizational efficiency. Systems are updated very slowly, none has an explicit period of validity, and the management system lacks coherence, systematization, and convenience.
Culture Risks
Institutional Risk
(a) Institutions are updated very slowly and therefore cannot keep up with changes in the environment.
(b) Institutions are excessive and overstaffed, they conflict with one another, and they lack systematization and unity, so the institutional system as a whole is inefficient.
(c) Some institutions are of very low quality: for example, their purpose is unclear, powers and duties are not well defined, and management processes are far from smooth.
(a) It often happens that production is stopped and the production plan is disrupted because supply is insufficient; (b) the quality of products is affected by the quality of materials, equipment, and machines; (c) safety and environmental protection management are also affected by the quality of some machines; and (d) the bargaining power of some suppliers can threaten financial safety when repayments to them fall due simultaneously.
Financial Risk
Investment Risk
Risk Assessment
Cailing Chemical Corporation currently faces 14 types of risk, whose intensities were evaluated by a subjective assessment technique on the basis of a full investigation. In the risk assessment we took into account both the level of harm and the frequency of each risk. Figure 14.1 plots the 14 types of risk on a coordinate graph. It indicates that quality risk, culture risk, and human resource risk are the most serious risks in Cailing Chemical Corporation; however, planning and schedule risk, safe production risk, environmental protection risk, supply chain and procurement risk, and financial risk should not be ignored.
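A minimal sketch of this kind of probability-consequence screening; the names follow the chapter's risk types, but the numeric ratings are hypothetical placeholders rather than the values used in the actual assessment:

    # Hypothetical probability (frequency) and consequence (harm) ratings on a
    # 1-5 scale; used only to illustrate the screening behind Fig. 14.1.
    risks = {
        "quality":        (4, 5),
        "culture":        (4, 4),
        "human resource": (4, 4),
        "safety":         (3, 4),
        "marketing":      (2, 3),
        "policy":         (1, 3),
    }
    # Rank risks by a simple probability x consequence index.
    for name, (p, c) in sorted(risks.items(),
                               key=lambda kv: kv[1][0] * kv[1][1],
                               reverse=True):
        print(f"{name:15s} probability={p} consequence={c} index={p * c}")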
Fig. 14.1 The 14 risk types plotted by probability and consequence: quality risk, planning and schedule risk, culture risks, human resource risks, supply chain and procurement risks, safe production risk, financial risk, environmental protection risk, technology risk, marketing risk, investment risk, and policy risk, among others
Risk Analysis
The risk transfer relationships among these risks are depicted in Fig. 14.2.
For Cailing Chemical Corporation, since policy risk, investment risk, organization risk, institutional risk, and technology risk are not the most serious risks (as shown in Fig. 14.1), they can be regarded as sources of other risks. For example, an unreasonable organization structure leads to inefficient institutional and objective management, so organization risk is a source of schedule risk and institutional risk. Moreover, institutional risk is a source of human resource risk and culture risk, and investment risk is a source of procurement risk, quality risk, and human resource risk.
The 14 identified types of risk in Cailing Chemical Corporation are not distributed uniformly across every unit, every department, or every business process. Moreover, their intensities differ across time and across organizational units. Consequently, it is necessary to study risk distribution from the three points of view stated below.
Different business processes carry different risks, and different stages of the same business process carry different risks as well. It is therefore necessary to study risk distribution by business process. Identifying and analyzing risks across the whole of a critical business process is the basis of whole-process risk management and is significant given the impact of these risks on the modern corporation. Here we examine only the risk distribution of the procurement process, as depicted in Fig. 14.3.
Figure 14.3 shows the main risks at each stage of the procurement process; using this chart, risk can be controlled across the whole procurement process.
Fig. 14.3 Risk distribution in the procurement process: purchasing intention, procurement planning, supplier selection and decision risk, selection of payment mode, selection of transport mode, and other planning
Risks also manifest differently at different locations within the factory district. It is therefore necessary to study all risks and their locations within the factory district, which supports an all-round risk management system. Here we study the risk distribution within the sulphuric acid plant.
Quality risk is a major problem in pyrite raw material, in the burning and poison
exposure in the roasting plant, and in the oxidation furnace. Environmental protection
risks are greatest in fluoride removal and the last absorber. Burning, poison exposure,
electrical shock, and corrosion exist at various stages in the production process as well.
Every unit has its own business and function, and different businesses encounter different risks. Our investigation found that the risks encountered by each production unit (Table 14.2) and by each management unit (Table 14.3) are distinct. Tables 14.2 and 14.3 show the risk distribution across the organizational structure and make each unit's risk management responsibility clear; studying risk distribution in the organizational structure therefore strengthens the risk management of all members.
Conclusions
Risk distribution table (Tables 14.2/14.3): the units – front office, equipment, instrument, power, human resource, post inspection, social charity, education, planning and statistics, environment protection, safety, mine technology, chemical technology, quality, financial, marketing, and supply – are marked against the risk types decision, financial, procurement, human resource, marketing, institutional, organization, culture, investment, technology, and schedule risk, where p indicates that a unit has strong exposure to the given risk and r indicates weak exposure.
End Notes
1. Wu, C., and Jia-ben, Y. (2000). Risk Management of Material Purchase, Systems Engineering-
Theory and Practice, 6, 54–59 (In Chinese).
2. Yi, K.-J., and Langford, D. (2006). Scheduling-Based Risk Estimation and Safety Planning
for Construction Projects, Journal of Construction Engineering and Management, 132, 626.
3. Hogan, J. (2004). Implementing a Construction Safety Program for Seaport Facilities, Ports,
136, 134.
4. Lavender, S.A., Oleske, D.M., Andersson, G.B.J., Kwasny, M.J., and Morrissey (2006). Low-
back disorder risk in automotive parts distribution, International Journal of Industrial
Ergonomics, 36:9, 755–760.
5. Jian-jun, Z. (1999). New Preparation Technology of Pure Phosphoric Acid with Variant
Phosphorus Ore, Guangxi Chemical Industry, 28:4, 13–16 (In Chinese); Jianxing, Y., Cheng,
L., and Guangdong, W. (2003). A New Method of Engineering System Risk Analysis Based
on Process Analysis, Ship Engineering, 05, 53–55 (In Chinese); Xie, K.-F. (2004). Enterprise
Risk Management, China, Wuhan: University of Technology Press, P.R. China, (In Chinese);
Zheng, L., Shan Ying, H., Ding Jiang, C., XiaoPing, M., and Jin Zhu, S. (2006). Dynamic
Modeling and Scenario Analysis on Phosphor Resources of China, Computers and Applied
Chemistry, 23:2, 97–102 (In Chinese); Cao, H.-P. (2006). Study on Anhui Liuguo Chemical
Industry Corporation Limited’s Development Strategy, Guang-xi University, (In Chinese);
Feng-Ping, W., and Xu-Xiang, T. (2007). Thermodynamical Analysis of the Normal-
Temperance Phosphating Process, Journal of Liaoning Normal University (Natural Science
Edition), 30:1, 80–83 (In Chinese).
Chapter 15
Information Technology Outsourcing
Risk: Trends in China
Introduction
Risks in information systems can be viewed from two perspectives. There is a need for information technology security, in the sense that the system functions properly when faced with threats from physical sources (flood, fire, etc.), intrusion (hackers and other malicious
Systems
electrical distribution systems, and made train tracks more dangerous. Australians
imported rabbits to control one problem, and induced another. Environmentally,
DDT was considered a miracle cure to insect-borne epidemics in the 1940s. But
DDT had negative impacts, leading to its ban in 1972 after many DDT-resistant pest
strains had evolved.13 In medicine, hospitals have become very dangerous places,
with up to 6% of patients being infected by microbes after admission.14 Laparoscopic surgery using fiber optic technology reduced operating costs by 25%, which attracted medical insurance companies and led to a doubling of the rate of use, raising overall costs to insurance companies by 11%. Pap tests save many lives, but false reassurance can lead to greater risks, and false positives can cause unnecessary pain and agony. Humans do not seem to do well with complex systems; at the least, such systems create the need for adaptation as new complications arise.
Complexities arise in technology as well.15 The Internet was created to assure communication links under possible nuclear attack, and it has done a very good job of distributing data. It has also created enormous opportunities to share business data and led to a vast broadening of the global market. That was an unintended benefit. Some unintended negative aspects include broader distribution of pornography and expedited communication in illegal or subversive organizations. Three Mile Island in the U.S. saw an interaction of multiple failures in a system that was too tightly coupled.16 Later, Chernobyl was even worse, as system controls acted counter to solving the problem they were designed to prevent. We try to create self-correcting systems, especially when we want high reliability (nuclear power, oil transportation, airline travel, both in the physical context and in the anti-terrorist context). But it is difficult to make systems foolproof, especially when they involve complex, nonlinear interactions, conditions that seem inevitable when people are involved.
Information systems involve high levels of risk in that it is very difficult to predict what problems will occur in system development. Not all risks in information system project management can be avoided, but early identification of risk can reduce the damage considerably. Kliem and Ludin (1998) gave a risk management cycle18 consisting of activities managers can undertake to understand what is happening and where:
● Risk Identification
● Risk Analysis
● Risk Control
● Risk Reporting
Risk identification focuses on identifying and ranking project elements, project
goals, and risks. Risk identification requires a great deal of pre-project planning and
research. Risk analysis is the activity of converting data gathered in the risk identifi-
cation step into understanding of project risks. Analysis can be supported by quanti-
tative techniques, such as simulation, or qualitative approaches based on judgment.
Risk control is the activity of measuring and implementing controls to lessen or
avoid the impact of risk elements. This can be reactive, after problems arise, or
proactive, expending resources to deal with problems before they occur. Risk report-
ing communicates identified risks to others for discussion and evaluation.
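As one example of the quantitative support mentioned above, a small Monte Carlo sketch of schedule risk analysis; the task names and three-point duration estimates are hypothetical, not taken from the chapter:

    import random

    # Hypothetical three-point (optimistic, most likely, pessimistic) duration
    # estimates, in days, for a small set of sequential project tasks.
    tasks = {"requirements": (10, 15, 30),
             "development":  (40, 60, 110),
             "testing":      (15, 25, 60)}

    def simulate_totals(n=10000):
        totals = []
        for _ in range(n):
            total = sum(random.triangular(lo, hi, mode)
                        for lo, mode, hi in tasks.values())
            totals.append(total)
        return sorted(totals)

    totals = simulate_totals()
    print("median duration:", totals[len(totals) // 2])
    print("90th percentile:", totals[int(0.9 * len(totals))])

The spread between the median and the upper percentiles gives a simple quantitative picture of schedule risk that can feed the risk analysis and control steps.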
Risk management in information technology is not a step-by-step procedure, done
once and then forgotten. The risk management cycle is a continuous process through-
out a project. As the project proceeds, risks are more accurately understood.
The primary means of identifying risk amounts to discussing potential problems
with those who are most likely to be involved. Successful risk analysis depends on
the personal experience of the analyst, as well as access to the project plan and his-
torical data. Interviews with members of the project team can provide the analyst
with the official view of the project, but risks are not always readily apparent from
this source. More detailed discussion with those familiar with the overall environ-
ment within which the project is implemented is more likely to uncover risks. Three
commonly used methods to tap human perceptions of risk are brainstorming, the
nominal group technique, and the Delphi method.
Brainstorming
Brainstorming involves redefining the problem, generating ideas, and seeking new
solutions. The general idea is to create a climate of free association through trading
ideas and perceptions of the problem at hand. Better ideas are expected from brain-
storming than from individual thought because the minds of more people are
tapped. The productive thought process works best in an environment where
criticism is avoided, or at least dampened.
Group support systems are especially good at supporting the brainstorming
process. The feature of anonymity encourages more reticent members of the group
to contribute. Most GSSs allow all participants to enter comments during brain-
storming sessions. As other participants read these comments, free association
leads to new ideas, built upon the comments from the entire group. Group support
systems also provide a valuable feature in their ability to record these comments in
a file, which can be edited with conventional word-processing software.
The Nominal Group Technique19 supports groups of people (ideally seven to ten)
who initially write their ideas about the issue in question on a pad of paper. Each
individual then presents their ideas, which are recorded on a flip-chart (or compa-
rable computer screen technology). The group can generate new ideas during this
phase, which continues until no new ideas are forthcoming. When all ideas are
recorded, discussion opens. Each idea is discussed. At the end of discussion, each
individual records their evaluation of the most serious risks associated with the
project by either rank-ordering or rating.
The silent generation of ideas and the structured discussion are contended to overcome many of the limitations of brainstorming. Nominal groups have been found to yield more unique ideas, more total ideas, and better quality ideas than brainstorming groups.
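The text leaves the aggregation of the final rank-orderings open; as one hedged illustration, a simple Borda-style tally (a technique named here for illustration, not prescribed by the chapter) of hypothetical participant rankings could look like this:

    # Hypothetical rank-orderings of candidate risks from four participants
    # (position 0 = most serious). A simple Borda-style count aggregates them.
    rankings = [
        ["scope creep", "staff turnover", "vendor delay"],
        ["staff turnover", "scope creep", "vendor delay"],
        ["scope creep", "vendor delay", "staff turnover"],
        ["vendor delay", "scope creep", "staff turnover"],
    ]

    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, risk in enumerate(ranking):
            scores[risk] = scores.get(risk, 0) + (n - position)  # higher = more serious

    for risk, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(risk, score)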
Delphi Method
The Delphi method was developed at the RAND Corporation for technological
forecasting, but has been applied to many other problem environments. The first
phase of the Delphi method is anonymous generation of opinions and ideas related
to the issue at hand by participants. These anonymous papers are then circulated to
all participants, who revise their thoughts in light of these other ideas. Anonymous
ideas are exchanged for either a given number of rounds, or until convergence of
ideas.
The Delphi method can be used with any number of participants. Anonymity
and isolation allow maximum freedom from any negative aspects of social interac-
tion. On the negative side, the Delphi method is much more time consuming than
brainstorming or the nominal group technique. There also is limited opportunity for
clarification of ideas. Conflict is usually handled by voting, which may not
completely resolve disagreements.
Outsourcing Risks
Bryson and Sullivan cited specific reasons that a particular ASP might be attrac-
tive as a source for ERP.23 These included the opportunity to use a well-known
company as a reference, opening new lines of business, and opportunities to gain
market-share in particular industries. Some organizations may also view ASPs
as a way to aid cash flow in periods when they are financially weak and desper-
ate for business. In many cases, costs rise precipitously after the outsourcing firm has become committed to the relationship. One explanation given was the lack of analytical models and tools to evaluate alternatives. These tradeoffs are recapitulated in Table 15.3.
Qualitative Factors
While cost is clearly an important matter, there are other factors important in selec-
tion of ERP that are difficult to fit into a total cost framework. Van Everdingen et
al. conducted a survey of European firms in mid-1998 with the intent of measuring
ERP penetration by market.24 The survey included questions about the criteria considered for supplier selection. The criteria reportedly used are given in the
first column of Table 15.4, in order of ranking. Product functionality and quality
were the criteria most often reported to be important. Column 2 gives related fac-
tors reported by Ekanayaka et al. in their framework for evaluating ASPs,25 while
column 3 gives more specifics in that framework.
While these two frameworks do not match entirely, there is a lot of overlap.
ASPs would not be expected to have specific impact on the three least important
criteria given by Van Everdingen et al. The Ekanayaka et al. framework added two
factors important in ASP evaluation: security and service level issues.
China is India's only neighbor in the Far East with a comparable population, but it has far better infrastructure, boosted by the fastest-expanding economy in the world. China is already the manufacturing center of the world, and the winner of most IT out-
sourcing contracts from developed Asian countries such as Japan and South Korea.
It is now poised to compete head-to-head with the traditional outsourcing destina-
tion countries, such as India, Ireland and Israel, for the much bigger and more prof-
itable North American and European market.
According to Gartner Group, the global IT services market is worth $580 billion,
of which only 6% is outsourced. India currently has 80% of this market, but other contenders are rising, with China now enjoying the biggest cost advantage. On average, an engineer with two to three years of post-graduate experience is paid a monthly salary of less than $500, compared with more than $700 in India and upwards of
$5,000 in the United States. India also led other countries in the region with the
highest turnover rate at 15.4%, a reflection of the rampant job-hopping in the Indian
corporate world, especially the IT sector. Other markets with high attrition rates
include Australia (15.1%) and Hong Kong (12.1%). Almost all Indian IT firms
projected greater salary increases for 2005, according to a recent survey. Table 15.5
summarizes the labor cost factors.
In light of increasing labor costs, India's response has also been to move to China. In fact, most Indian IT firms that operate globally have begun implementing back-door linkages to cheaper locations. IT giants such as Wipro, Infosys, Satyam, and Tata Consulting Services (TCS) have all set up operations in China, given the lower wage costs for software engineers resulting from the excess supply of trained manpower. TCS set up shop in China in 2002 and employs more than 180 people there; a year after making its foray into the country, Infosys (Shanghai) had a staff of 200 to cater to clients in Europe, the US, and Japan; and Wipro set up its Chinese unit in August 2004.
Business Risks
Two types of risk are perceived in international business operations in China that apply to an ERP IT software company. First, because China is not a full market economy based on a democratic political system, there is some political risk of government interference with free enterprise. Such risks are deemed negligible given the opening and reform policies of the central government over the past two decades and the economic boom derived from this more transparent political environment. Second, although China's weak protection of intellectual property is widely reported, there have been very few cases in which business software was pirated, because considerable domain knowledge is required to profit from selling business software.
A crucial factor for China’s emergence into the global outsourcing industry is
government support. The most important central government policy for the soft-
ware industry is the June 2000 announcement of State Council Document 18, for-
mally known as the “Policies to Promote the Software and Integrated Circuit
Industry Development.” The document created preferential policies to promote the
development of these two sectors. The documented policies for software companies
include:
(1) Value-added Tax (VAT) refund for R&D and expanded production
(2) Tax preferences for newly established companies
(3) Fast-track approval for software companies seeking to raise capital on overseas
stock markets
(4) Exemption from tariffs and VAT for software companies’ imports of technol-
ogy and equipment
(5) Direct export rights for all software firms with over USD $1 million in revenues
Conclusions
Information systems are crucial to the success of just about every twenty-first cen-
tury organization. The IS/IT industry has moved toward enterprise systems as a
means to obtain efficiencies in delivering needed computing support. This approach
gains through integration of databases, thus eliminating needless duplication and
subsequent confusion from conflicting records. It also involves consideration of
better business processes, providing substitution of computer technology for more
expensive human labor.
But there are many risks associated with enterprise systems (just as there are
with implementing any information technology). Whenever major changes in
organizational operation are made, this inherently incurs high levels of risk. COSO
frameworks apply to information systems just as they do to any aspect of risk
assessment. But specific tools for risk assessment have been developed for informa-
tion systems. This chapter has sought to consider the risks of evaluating IT proposals (focusing on ERP), as well as IS/IT project risk in general. Methods for identifying risks in IS/IT projects were reviewed, and we also presented the status and trends of outsourcing risks in China.
End Notes