
The author

Dr Carl A. Gibson originally trained as an infectious disease specialist, has led major incident responses to various disease outbreaks, and has served in the military and emergency services in the UK and Australia. Taking a major career shift during his time in Australia, he held senior executive positions in government, corporate, and tertiary education sectors before entering the consulting world a decade or so ago. He now consults to government and international corporates in strategy development, business transformation, and decision making under high uncertainty.

Carl has been active in the ‘world’ of risk management since the 1990’s, being a member of various
Standards Australia and ISO technical committees, contributing to the development of Standards in
risk, business continuity, emergency, crisis, and security management.

He is the author of over 130 publications and is engaged in active research programs in the areas of
organisational resilience and risk-informed decision making.

Dr Gibson can be contacted through info@executiveimpact.com.au.

Notices
All illustrations are the property of Executive Impact Consulting Pty Ltd, unless otherwise noted.

Copyright ©2023 Executive Impact Publishing

The moral rights of the authors have been asserted

For information about permission to reproduce selections in this book, contact info@executiveimpact.com.au

Composition and book design by Executive Impact Publishing, an imprint of Executive Impact Consulting Pty Ltd.

All rights reserved. Except as permitted under the Australian Copyright Act 1968 (for example, fair dealing for the purposes of study, research, criticism, or review), no part of this Whitepaper may be reproduced, stored in a retrieval system, communicated or transmitted in any form or by any means without prior written permission. All inquiries should be made to the publisher at the email address above.

The material contains research studies and anecdotal commentary. This by necessity means that some assertions in this
Whitepaper may prove to be incorrect as and when further information is made available in the public domain. The authors
apologise in advance for any situations where this may occur in the future.

Limit of liability/ Disclaimer of warranty


While the publisher and authors have used their best efforts in preparing this Whitepaper, they make no representations or warranties with respect to the accuracy or completeness of the contents of this Whitepaper and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales material. The advice contained herein may not be suitable for your situation. Neither the publisher nor the authors shall be liable for any loss of profit or any other commercial damages, including but not limited to direct, special, incidental, consequential, or other damages.


Contents

Notices
A history of the matrix
The state of the risk matrix today
An informed adoption decision?
The claimed benefits of the risk matrix
A confused mess of claims
The risk matrix: An ethical dilemma?
Is the risk matrix of any use?
A way forward
In conclusion
References


The Risk Matrix: Choosing the Right Pill
In the 1999 movie “The Matrix”, Neo (played by Keanu Reeves) is given a choice: take the blue pill and remain in the simulated reality of the Matrix, or take the red pill and enter the real world.

In the world of risk management, a majority of practitioners have swallowed the blue pill, and
continue to be satisfied with what the (risk) matrix has to offer. The others, deciding on the red pill,
have entered a world more complex and mathematically challenging. Which choice provides the most
meaningful and valuable experience? It is time to enter the matrix and find out.

A history of the matrix


One of the earliest references to the use of a ‘risk matrix’ dates back over a century (USDA, 1914), with different forms of ‘risk matrices’ being used through much of the twentieth century, usually in scientific, engineering, financial, or economic applications (Arrow et al, 1949; Kendricks and Gagge, 1949; Chernoff, 1954; Greenhut, 1962; Wojnilower, 1962; Stoner 1982; Pogue 1970). However, these matrices were not what most of us would recognise today as the traditional ‘risk matrix’; rather, they comprised a range of different mathematical constructs and multifactorial relationship tables.

Many commentators have claimed that the ‘Directional Policy Matrix Approach’[1] (Hussey, 1978; Shell, 1975) was the foundation of the risk analysis matrix. However, this is not entirely correct. This matrix construct had little to do with today’s familiar risk analysis tool. It was just one of several different types of alignment matrices (Wind and Mahajan, 1981), similar to those used since the early days of the twentieth century. The difference was that Hussey referred to his construct as a ‘risk matrix’, which aligned a “company’s competitive position” (weak/average/strong) with its “prospects for market sector profitability” (unattractive/average/attractive). This 3x3 alignment matrix then produced results such as “proceed with care”, “cash generator”, “double or quit”, and “try harder”. Hussey also developed a ‘risk matrix’ that aligned profitability against environmental effects. These were a far cry from what we would now identify as a ‘risk matrix’, as were the many similar models that predated them.

The mid-70’s saw a number of different types of these alignment matrices emerge. Many were modifications of an earlier BCG[2] matrix, whilst tools similar to Hussey’s had been in use since the 1950’s, such as the 3x3 ‘market-technology mix matrix’ (Johnson and Jones, 1957). Even in these early days, the use of such alignment matrices was not without criticism because of significant limitations in their ability to reflect the reality of what they were trying to show (Wensley, 1982). It seems that many writers who cite Hussey as the ‘father’ of the ‘modern’ qualitative risk matrix have fixated upon his use of the word ‘risk’ rather than trying to understand the nature of what he was actually estimating.

Similarly, a nuclear safety report from the 1970’s is often credited with introducing the modern
version of the risk matrix (WASH1400, 1975). Although the report is full of frequency and probability
tables and graphs, and provides an interesting practical example of quantitative analysis, there is still

[1] The Directional Policy Matrix was first developed by Shell International Chemical Company in 1975, and provided a 3x3 matrix aligning a company’s competitive strategic capabilities (as high/ medium/ low) against the prospects for sector profitability (as high/ medium/ low).
[2] Boston Consulting Group.

nothing close to the all too familiar qualitative risk matrix still in use today. The closest this WASH1400 report gets is shown in Table 1 below. An interesting artefact, but still nowhere near resembling the common qualitative risk matrix.

Table 1: Land area affected by potential nuclear power plant accidents for 100 reactors[3]

                      Consequences
Chance per year       Decontamination area (sq mile)   Relocation area (sq mile)
1 in 200              <0.1                             <0.1
1 in 10,000           2000                             130
1 in 100,000          3200                             250
1 in 1,000,000        3200                             290
1 in 10,000,000       3200                             290

One of the first recognisable qualitative risk matrices was in the system safety domain (DOD, 1984; 1993), aligning:
• the consequences (Negligible/ Marginal/ Critical/ Catastrophic)
• against the hazard frequency (Improbable [X < 10^-6]/ Remote [10^-6 < X < 10^-3]/ Occasional [10^-3 < X < 10^-2]/ Probable [10^-2 < X < 10^-1]/ Frequent [X > 10^-1]).

However, the intent of this ‘risk matrix’ approach was not to undertake a detailed risk assessment,
but rather to conduct an initial screening and prioritisation of hazards for subsequent detailed
quantitative analysis.
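These screening bands lend themselves to a simple lookup. The sketch below is illustrative only: the band edges follow the text above, but the function name and the handling of boundary values are my own assumptions.

```python
def hazard_frequency_category(freq_per_year: float) -> str:
    """Classify an annual event frequency into the screening bands
    described above. Boundary values fall into the lower band (an
    assumption; the source does not specify edge handling)."""
    if freq_per_year > 1e-1:
        return "Frequent"
    if freq_per_year > 1e-2:
        return "Probable"
    if freq_per_year > 1e-3:
        return "Occasional"
    if freq_per_year > 1e-6:
        return "Remote"
    return "Improbable"

# An event expected roughly once every 300 years:
print(hazard_frequency_category(1 / 300))  # → Occasional
```

Note that, as the text stresses, such a lookup is a screening step for prioritising hazards for detailed quantitative analysis, not a risk assessment in itself.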

Much of the current construction of today’s qualitative risk matrices owes its existence to the inadvertent popularisation of the concept by the world’s first risk management standard, the joint Australian and New Zealand AS/NZS 4360: 1995.

The state of the risk matrix today


The current risk matrix, although still incredibly popular with business risk ‘analysts’ (Duijm, 2015),
has continued to be developed in an abstract world devoid of mathematical logic and scientific
foundation. In fact, many risk matrices in popular use absolutely violate critical foundational scientific
and mathematical principles, potentially invalidating their analytical outputs from the very start.

More than 30 years after their introduction, there is still no scientific evidence that risk matrices improve decision making under uncertainty (Nas et al, 2022). Even some Standards bodies have previously raised issues of “subjectivity and inconsistency” regarding the use of risk matrices (Ball and Watt, 2013; ISO, 2009; BSI 2004). Indeed, the committee responsible for developing risk management Standards in Australia and New Zealand was so concerned about the misuse of risk matrices by practitioners, consultants, and auditors[4] that it eventually removed illustrative examples of matrices from the risk management Standard.

[3] Adapted from WASH1400, 1975 (p. 141).
[4] Certainly, in the early days of the risk management Standard and associated Handbooks, many auditors so misunderstood the risk management process that they would issue findings on noncompliance where an

An informed adoption decision?
Today, an overwhelming majority of risk practitioners rely on some form of risk matrix as a core
component of their risk assessment approach[5], and there are many advocates for its adoption as a
simple and easy risk analysis tool (Bowers and Khorakian, 2014). There appears to be a variety of
reasons individual practitioners use a risk matrix:

• Inherited from the previous incumbent in the position.
• Required by the ‘boss’ (e.g., CRO, CFO, Head of Legal, etc.).
• Required by the CEO or Board.
• Organisational ‘policy’ or ‘standard’ mandates its use.
• Required by a regulator.
• Used elsewhere in the broader organisation.
• Common tool within the particular industry sector.
• Introduced by a consultant.
• Transferred from the incumbent risk professional’s previous job.
• Already established as part of a purchased commercial software package.
• Alternate methods are perceived to be too complex or too time consuming.
• Practitioners believed they did not have the knowledge or skills to use alternate methods.
• Practitioners believed they did not have sufficient data or information to use alternative
methods.
• Users of risk products are familiar with the matrix and comfortable with its output.
• Simply because everyone else is using a risk matrix, therefore it must be ‘good practice’.

There also appears to be a range of different reasons why a risk practitioner ‘chooses’ to use a specific
‘format’ or construct for the risk matrix:

• Already in use within the organisation when the practitioner was appointed to the position.
• Mandated by the organisation.
• Provided as part of a consultancy engagement.
• Copied from a Standard, code, government guideline, industry body guideline, or other
‘authoritative’ source.
• Copied from a different organisation.
• Copied from a textbook, journal, or magazine article.
• Copied from a social media post.
• Based on an example seen at a conference.
• Benchmarked different matrix examples and chose the one that looked the most appropriate
or interesting.
• Built from the ‘ground up’, and specific for the context within which it will be used.

organisation did not use a risk matrix identical to the hypothetical example provided in the Standard. I should know: I was the recipient of such audit reports on multiple occasions in the 1990’s, issued by three of the big audit firms.
[5] Based upon informal surveys of several thousand practitioners conducted over multiple training courses, workshops, conferences, social media discussions, and consulting engagements, as well as reviews of the published literature.

The last dot point above, the ‘bespoke’ option, is in my experience the least commonly encountered of these ‘sources of use’. Thus it seems that, for many risk practitioners, the journey into using risk matrices starts because someone else has used one, not because there is a scientific basis for the decision or a weight of experiential evidence for its appropriateness and performance.

Few practitioners seem to ask (let alone try to answer) how they can know that their risk matrix is actually working. One would have thought that most risk professionals would at least understand the risk of using a risk matrix.

Even amongst those practitioners and decision-makers who accept that risk matrices are problematic, there is still wide continuing acceptance of their use, based on the belief that using a risk matrix is better than “purely random decision making” (Cox, 2009). However, mathematical modelling of error within risk matrix-based assessment leads to a different conclusion. Cox demonstrated that in conditions of less than perfect positive correlation, the performance of the risk matrix can be worse than just making a random decision, and can be “worse than useless” (Cox and Popken, 2007).
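Cox’s finding can be reproduced with a small Monte Carlo sketch. Everything below is my own illustrative model, not Cox’s: a 3x3 matrix with additive cell scores, and consequence values pulled towards (1 - probability) to induce negative correlation. The question asked is how often the matrix orders a random pair of risks the same way as the underlying quantitative risk p × c.

```python
import random

random.seed(42)

def matrix_rating(p, c):
    """Score a risk on an illustrative 3x3 matrix: each axis is cut
    into thirds, and the cell score is the sum of the two band
    indices (a common, and flawed, scoring convention)."""
    return min(int(p * 3), 2) + min(int(c * 3), 2)

def pairwise_agreement(n=20_000, rho=0.9):
    """Fraction of risk pairs that the matrix orders the same way as
    the true quantitative risk p*c, with probability and consequence
    negatively correlated (strength rho). Tied ratings are skipped."""
    risks = []
    for _ in range(n):
        p = random.random()
        # Consequence pulled towards (1 - p): negative correlation.
        c = rho * (1 - p) + (1 - rho) * random.random()
        risks.append((p, c))
    agree = total = 0
    for _ in range(n):
        (p1, c1), (p2, c2) = random.sample(risks, 2)
        true_diff = p1 * c1 - p2 * c2
        rated_diff = matrix_rating(p1, c1) - matrix_rating(p2, c2)
        if rated_diff == 0:
            continue  # the matrix cannot distinguish this pair
        total += 1
        if (true_diff > 0) == (rated_diff > 0):
            agree += 1
    return agree / total

print(pairwise_agreement())
```

Under strong negative correlation, most risks collapse into the same few cells (the ties skipped above), and the agreement rate for the pairs the matrix does distinguish tends towards chance, which is the qualitative point Cox makes.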

A good or bad decision?


Without doubt, there are many practitioners who believe that the risk matrix is THE risk management tool, providing a more than sufficient means of meeting most, if not all, of their risk assessment needs. These practitioners seem to take umbrage at any criticism of risk matrices, and often appear to take such criticism personally. Conversely, there are a number of quantitative risk practitioners and commentators who believe that qualitative risk assessments (whether they use a matrix or not) are of little use, and who are particularly dismissive of the use of risk matrices in any context. It is likely that the effective use of risk matrices lies somewhere between these two extremes of opinion.

It is important to acknowledge that there are substantial problems with the concepts, construction, and application of most risk matrices. However, by far the larger problem is the failure of users of risk matrices to understand, acknowledge, and address these problems. A qualitative risk matrix may be of use in certain circumstances if its limitations are recognised and allowed for, something that is not a common experience.

There are two prevalent assumptions: that the risk matrix provides a qualitative assessment that ranks a group of risks in the correct order of magnitude, and that it ranks risks in the correct order of importance. Both assumptions are often demonstrably false.

The claimed benefits of the risk matrix
A quick search of the internet will yield hundreds of articles uncritically extolling the benefits of the risk matrix[6]. Of these articles, a majority seem to be written by consultants, few have any evidential basis, and less than 0.5% are from peer reviewed journals[7]. Even assuming, for the moment, that a risk matrix can live up to the claims[8] made about it, the construction and use of the matrix in many of these articles represents uninformed and poor practice (if not actually harmful advice). Yet the use of risk matrices in risk management has become so common that users of the tool rarely question its validity, despite a complete absence of scientific validation (Wall, 2011).

There is no doubt that a substantial number of risk practitioners place significant reliance upon risk matrices, and have apparently strong beliefs in the benefits of these tools. However, it has been cogently argued that such benefits are mostly “illusory” (Hubbard and Evans, 2010). Further, the fact that the use of risk matrices is so widely accepted does not provide any evidence that they are an accurate or even useful tool. It needs to be remembered that very few of these applications of risk matrices are validated in any effective way.

Supporters[9] of the risk matrix often justify its continuing use by arguing that it:

• Is the starting point for risk management.
• Simplifies the risk management process.
• Provides a structured approach.
• Provides a reliable and repeatable analysis.
• Is a good calculation tool.
• Is a good display tool.
• Explains risks in a clear way.
• Makes complex information more accessible.
• Provides a rational tool for risk assessment.
• Tells us what threats and hazards can happen, and under what circumstances they can occur.
• Reveals unforeseen danger.
• Identifies the “gravest” risks.
• Tells us if the risk is being controlled effectively.
• Tells us what actions are required to manage the risk.
• Provides a comprehensive overview of risk.
• Tells us what are the most pressing issues.
• Provides a quick means of bringing risks that require immediate action to attention.
• Provides a real time look at the risk landscape.
• Breaks down risk into its most important facets.
• Provides a low cost way of measuring risk.
• Is easy to construct.

[6] I have deliberately not provided references for these articles; I will not inflict these dubious sources upon the reader, and publicising such articles may only encourage people to read and follow their advice!
[7] Many of these peer reviewed articles are also highly conjectural and assumption laden, with little robust analysis and validation of the claims being made.
[8] Several of these articles claim that a risk matrix is essential for organisational success. Snake oil salesmen have merely shifted professions in the last 100 years.
[9] Every one of these claims has been published by ‘promoters’ of the risk matrix, and many of these claims have been repeated by multiple writers on multiple occasions.

• An n×n matrix[10] is the best tool.
• Is simple to use.
• Is quick to use.
• Is intuitive.
• Requires no expertise or prior knowledge to understand a risk matrix’s outputs.
• Helps to visualise the risks.
• The colour coding system makes it easy to interpret.
• Is essential to the risk management process.
• Allows easy quantification.
• Facilitates a good understanding of risk.
• Produces an easy to read and understand risk rating.
• Can be used to compare different types of risk.
• Can be applied to different contexts.
• Provides an objective assessment.
• Provides a great way of communicating risks.
• Facilitates meaningful reporting to senior decision-makers.
• Can be used by anyone.
• Makes the risk management process more transparent.
• Helps get the team aligned.
• Increases stakeholder trust of the risk assessment.
• Focuses decision-makers on the highest priority risks.
• Informs more accurate risk management strategies.
• Provides the mechanism for evaluating risk treatments.
• Provides the basis for robust discussions.

If even half of these claims were true, then the matrix would be the greatest way of ‘measuring’ risk ever conceived. Obviously, a great many practitioners believe at least some of these claims; otherwise I am at a loss to explain the matrix’s enduring popularity.

However, how many of these claims are true, or even defensible? This paper will consider each of
these claims about the risk matrix in turn, present the evidence, and let the reader decide on how
they should consider their future use of the risk matrix.

[10] For example, “my 4x4 matrix is better than your 3x3 matrix”, or other claims such as “only my 10x10 matrix provides sufficient granularity”, etc.

“Is the starting point for risk management”

We will start off by considering one of the strangest and most far-reaching of these claims, by asking why one would even consider a matrix to be the starting point for risk management. Most risk management approaches (whether ISO 31000, COSO, or one of the multitude of alternate methodologies) ‘start off’ by developing an understanding of the ‘world’ (context) within which risk management is to be conducted, and then designing a risk management approach that will work within that ‘world’.

The design and construction of a risk matrix is likely to be a fairly ‘late’ activity[11] in this respect. One
needs to develop an understanding about the nature of risk in this ‘world’, and the criteria by which
meaningful analysis can be conducted and upon which judgements will be informed. Only once this
knowledge base has been firmly established, can one even start to think about the design of a risk
matrix.

For those that believe the risk matrix is the starting point for risk management, may I humbly suggest that you download some reading material from the internet, buy a book about risk management, or choose a different career.

It may be that we have the whole concept of risk management back to front. One of the early activities is to “establish the context”[12], commonly interpreted as looking at the ‘internal context’ and ‘external context’, and then determining the context for risk management. We can alternatively look at this from a very different perspective, in particular, using as the starting point an exploration of the ‘decision context’ and ‘problem space’. For example, by first building an understanding of the questions that risk management needs to resolve and the decisions that need to be made (NRC, 2009), we will have a far better understanding of what data and information need to be sought about the internal and external contexts and their interactions. This understanding of the decision context will help to identify what types of information products are required and in what level of detail. This in turn will provide the foundation for the selection and design of the risk management methodology (e.g. qualitative or quantitative), determine whether the risk matrix is an appropriate tool[13] for the specific decision context, and, if it is appropriate, how a risk matrix should be constructed and validated.

Conclusion: fact or fallacy


“The risk matrix is the starting point for risk management” is a fallacy.

[11] Noting that risk management should not be thought of as a linear process, but rather as a reflexive, reflective, and recursive cycle.
[12] According to ISO 31000: 2018, the most popular of the risk management standards.
[13] Even if one assumes that a qualitative risk matrix is a useful tool, it does not necessarily follow that it will be the right choice every time that a risk assessment is rolled out. The decision context should be a major factor influencing that choice.

“Simplifies the risk management process”

For many risk practitioners, risk has only two attributes or dimensions: ‘consequence’ and ‘likelihood’. This ‘intuitively’ leads to the adoption of the easiest tool with which to compare two-dimensional attributes: the risk matrix. The substantial adoption of the risk matrix then reinforces the belief that risk only needs to be expressed in those same two dimensions.

However, there are multiple other factors that contribute to risk, beyond just simple concepts of consequence and likelihood. For example, a common fault in the use of risk matrices is the reliance on estimating the analytical outcome of combining a simple notion of consequence with a simple notion of likelihood, thereby ignoring the complex interplay of a wide range of other factors. Even considering a fairly simple construct of risk, the current use of the risk matrix does not lend itself to considering the contribution and importance of the range, dispersion, and extent of variability of potential future consequences, and their contribution to the construction of risk (Naik and Prasad, 2022). For users of the ISO 31000 definition of risk, this variability in consequences is a key contributor to the uncertainty that could have a future effect on the pursuit of objectives.

The use of the risk matrix abstracts and oversimplifies what are often complex concepts and problems, an understanding of which is necessary both to properly understand the nature of risk and to make appropriate decisions about how to address risk effectively. This gross simplification does nothing to improve understanding of the nature of risk. It does not really simplify the risk management process in any meaningful way; it merely looks simple because of its huge abstraction from reality. Indeed, “the utilisation of matrices is no simple task despite their simple appearance” (Nicholls and Carroll, 2017).

This apparent ‘simplicity’ hides further woes for both analyst and decision-maker. The rationale and meaning of assigning particular rankings from a risk matrix are rarely explained, so both the underlying complexity and uncertainty remain hidden from user and decision-maker alike. Both are left to interpret the relative importance of different risks from what are often just single-word labels assigned to subjective risk ratings, with little consideration of the various biases and assumptions that underpin the analytical product.

The effective use of a risk matrix requires a complexity of thinking and a depth of validation that is
beyond a majority of current risk practices.

A simple construct?

There are broadly three types of risk matrix that are most commonly used:

• Purely qualitative, where the consequence and likelihood axes are descriptive scales (e.g.
‘Negligible’, ‘Low’, ‘High’ etc.)
• So-called ‘semi-quantitative’, which usually have one axis with some form of quantitative expression (and the other axis expressed qualitatively), or both axes with ordinal linear values (e.g. numbered 1 to 5).
• Purely quantitative, where the consequence and likelihood axes use mathematical values (e.g.
ratio scales). These are not used as commonly as the previous two types of matrix, probably
because if such values are being measured, the data will usually lend themselves to other
better validated quantitative tools.

The construction of a risk matrix commonly involves an innate quantitative attribute embedded within the descriptive likelihood criteria, often as frequency ranges (such as “once a year”, “once in 10 years”, etc.). However, these values are not then used in any quantitative sense; instead they are abstracted and used to provide some limited boundary calibration (to how the descriptive terms are to be used). Similarly, qualitative consequence criteria often have a starting basis in quantitative levels, although these quantitative ‘markers’ are then commonly ‘hidden’ within the qualitative criteria, and remain inapparent or are ignored by the typical user of the matrix (Elmontsri, 2014).

Many versions of the risk matrix, rather than simplifying risk management or risk assessment, actually
introduce more complexity and cognitive demands because of the introduction of ‘displacement’.
Traditional risk matrices often have an abstracted rating scale (commonly just single word descriptive
labels, as in many of the example matrices in this paper), with the actual value descriptors in separate
consequence and likelihood criteria tables. Thus, using such a risk matrix requires information to be
kept in short term (working) memory as one refers back and forth between matrix and criteria tables
(Fausset et al, 2008). This increased cognitive demand may have one or more outcomes:

• Since working memory has severe capacity limitations[14] (Oberauer et al, 2016; Cowan, 2010),
the more demands that are placed upon it the more chance that some important information
will not be processed, and will be forgotten or disregarded.
• The sheer act of moving back and forth between matrix and rating criteria provides a
distraction and critical information elements (such as patterns in the data) may not be
recognised, leading to an incomplete consideration of risk.
• The increased cognitive demand can drive the user of the matrix to rely more on intuition
when assigning matrix ratings, rather than continually referring back to the rating criteria
tables, thereby further increasing the potential for misinterpretation and other error.

It can be argued that applying a risk matrix can simplify the risk management process. After all, almost anyone, with no specific expertise or training, can construct a risk matrix and use it for analysis (Cox, 2008). Could anything be simpler? But herein lies the rub. Constructing even a qualitative risk matrix that is robust and meaningful requires a deep understanding of the nature of risk, extensive expertise in quantitative analysis, and the willingness to spend the time required to conduct a robust validation. These capabilities are too often ignored in the rush to simplification. Yet establishing a simple substitute quantitative risk analysis should be within the capabilities of anyone calling themselves a risk professional. A basic quantitative analytical approach is often far simpler than constructing and validating a risk matrix.

Conclusion: fact or fallacy


“The risk matrix simplifies the risk management process”: is a fallacy.

[14] Between 4 and 7 chunks of information.

“Provides a reliable and repeatable analysis”

Many practitioners place an incredibly high degree of reliance upon the risk matrices that they use, with little or no attempt to validate the reliability and repeatability of their use in risk analysis. Similarly, many decision-makers and other users of analytical products place significant faith in the risk management products that they receive. There is such a high level (false sense) of confidence in both the risk management process and product that their validity and integrity are rarely questioned. There are a few quick questions that users of a risk matrix should honestly ask of themselves:
• What is the reliability[15] of your chosen matrix?
• What is the repeatability[16] of your chosen matrix?
• What is the reproducibility[17] of your chosen matrix?

If these questions cannot be answered immediately with confidence, then it would appear that such
users really do not understand the core analytical tool that they have decided to employ.

It is very difficult to measure what is not known (Jean-Paul, 2004), and even the best analytical tools
will introduce uncertainty into the measurement of risk. The accuracy of any analysis, whether
quantitative or qualitative techniques are used, will be affected by the availability and quality of data.

Reliability in risk analysis is usually dependent upon:

• The quality of the analytical process.
• The quality of the tools used to support the process.
• The alignment of the analytical process, techniques and tools with the context within which
they will be applied.
• The knowledge, experience, and expertise of the analyst.
• The management of information quality.

How many analysts undertake rigorous modelling of their risk assessment processes, tools and
techniques (including the risk matrix) to determine the type and extent of errors that the chosen
methodology may be introducing?

If we just consider this latter dot point (above) for the moment, to what extent does the typical analyst
using a risk matrix consider the depth, breadth, accuracy, precision, and reliability of the information
inputs used for the analysis, and the reliability and trustworthiness of the sources of that information?

It has been recognised for some time that qualitative risk matrices provide a low-precision and low-
accuracy analysis. Even attempts to introduce some ‘quantification’ into a risk matrix, for example by
adding ordinal number sequences (e.g. ‘1’ to ‘5’) to the matrix axes and adding or multiplying out the
values, have been shown to produce even less valid results (see “A risk matrix is a good calculation
tool” below).

15 Do you know the magnitude of errors in your estimates of risk?
16 Have you analysed the same data sets, using the same criteria and risk matrix, on multiple different occasions?
What is the variation in risk ratings achieved across these repeated analyses? Repeat the same analysis weeks
apart (with sufficient time to have forgotten the original analytical outputs, and without referring to those
outputs), and it is entirely likely that the risks will be rated differently.
17 Has the analysis been repeated multiple times, with the same data set and matrix, but using different
independent analysts? Has the analysis been repeated comparing your matrix against other matrices, and
against quantitative tools?

Multiple studies have shown that different analysts can produce wildly inconsistent results using the
same risk matrix and the same input data (for example, Karanikas and Kaspers, 2016; Ball and Watt,
2013), and that even with increased familiarity with the data and use of the matrix, this inconsistency
is not reduced.

Other studies (for example, Budescu et al, 2009) have shown that even where guidelines are provided
on how to interpret criteria and assign values, users will still assign their own arbitrary values. That
individuals assign their own interpretations to risk criteria within a matrix is not some new discovery.
We have known about this propensity for ‘guessing’ for at least the last thirty years (Windschitil and
Weber, 1990), but very little has been done to address the matter.

Subjectivity and biases

Work by Nobel Laureate Daniel Kahneman and others demonstrated that individuals have a poor
appreciation of probability and will regularly underestimate situations that are highly probable, whilst
overestimating the probability of particularly rare situations (Bordalo et al, 2021; Smith et al, 2008;
Wilson, 1994; Tversky and Kahneman, 1992). Further, some studies have indicated that individuals
have a higher concern for, and pay more attention to, the precision of outcomes (consequences) than
probabilities (Du and Budescu, 2021; Budescu et al, 2002).

The risk matrix is known to be a highly subjective tool (Baybutt, 2016; Ball and Watt, 2013; Edwards
and Bowen, 2005) and is incredibly susceptible to:

• The relative knowledge and experience of the analyst.
• The beliefs of the analysts and those using the risk assessment products.
• A range of judgement errors.
• Individual cognitive biases.
• Group sensemaking biases.

A common issue is that of ‘centring bias’ (Hubbard, 2009; Smith et al, 2009), where users of the risk
matrix tend to avoid extreme ratings (for example, in the ‘corners’ of the matrix), and instead
preferentially choose cells clustered in the more central ‘yellow’ and ‘orange’ zones (as in Figure 1
below). Some studies suggest that up to 75% of risk ratings are assessed in these centre or mid ranges
of the risk matrix (Thomas et al, 2014), with a centring bias seen in perhaps a majority of
applications of the risk matrix18.

Many risk matrices demonstrate poor consistency, where a risk ranked qualitatively lower may be
quantitatively greater than other, higher-ranked risks (and vice versa). It is, therefore, baffling
how most risk matrices could be regarded as a reliable tool when there are not even simple rules
guiding the creation of risk matrices – an absence that almost guarantees inconsistency in both the
development of the tool and the interpretation of its results.

Conclusion: fact or fallacy


“The risk matrix provides a reliable and repeatable analysis”: is a fallacy.

18 Author’s personal observation of several hundred organisations’ risk assessment applications.

“A risk matrix is a good calculation tool”

The risk matrix, by its very structure, implies that there are only two important factors in assessing
risk, ‘consequence’ and ‘likelihood’, and that a simple alignment (or multiplication) of a single
consequence with a single likelihood provides an accurate measurement of risk. This is an
important first point: qualitative risk matrices do not allow for the calculation of anything. They
merely provide a subjective estimation of a simplistic risk construct. Note that this is not an
estimation of a level of risk per se; it is a relative ranking estimation of the consequence/likelihood
qualitative construct.

An ordinal problem

The very nature of the risk matrix, even with attempts at quantification, can introduce significant
errors into a risk analysis. For example, simply adding numbers to the axes of a risk matrix does not
turn the tool into a quantitative calculation tool. There are many versions of the risk matrix,
some of which, the so-called ‘semi-quantitative’ matrices, are capable of introducing serious error into
risk analysis. Such ‘semi-quantitative’ risk matrices are often better referred to as ‘pseudo-
quantitative’. These matrices often assign numerical labels to each axis (typically numbered 1 through
5, as an ordinal scale). This is in itself problematic, as there are certain mathematical operations that
one cannot perform with such ordinal scales19, such as multiplying or adding them together. The
outputs of such ‘calculations’ with ordinal numbers are mathematically invalid, which means that any
derived ‘risk levels’ or ‘risk values’ are also invalid.
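The invalidity of arithmetic on ordinal labels can be illustrated with a small sketch. The dollar values and probabilities below are hypothetical stand-ins for typical matrix criteria, not drawn from any particular standard:

```python
# Ordinal axis labels (1-5) encode rank order only, not magnitude, so
# multiplying them produces 'risk scores' with no quantitative meaning.
# Hypothetical dollar consequences and probabilities for each ordinal label:
consequence_dollars = {1: 10_000, 2: 100_000, 3: 1_000_000, 4: 10_000_000, 5: 100_000_000}
likelihood_prob = {1: 0.01, 2: 0.05, 3: 0.20, 4: 0.50, 5: 0.90}

# Two risks that the matrix scores identically:
score_a = 4 * 1  # likelihood 4 x consequence 1 -> ordinal 'score' of 4
score_b = 1 * 4  # likelihood 1 x consequence 4 -> ordinal 'score' of 4

# Their underlying expected losses are nothing alike:
loss_a = likelihood_prob[4] * consequence_dollars[1]  # 0.50 x $10k = $5k
loss_b = likelihood_prob[1] * consequence_dollars[4]  # 0.01 x $10m = $100k
print(score_a == score_b)   # True: identical ordinal scores
print(loss_b / loss_a)      # roughly 20: a 20-fold difference in expected loss
```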

A logical problem

Notwithstanding the ordinal nature of these rating scales, the outcomes of using such scales are often
logically unintelligible. For example, consider Table 2. Assume that we have a hypothetical type of
consequence that is ‘negligible’ (assigned value of ‘1’), a second type of consequence that is also
‘negligible’ (value of ‘1’), and a third type of consequence that is also negligible (value of ‘1’), and
that all three consequences are equally likely to occur simultaneously. What is the overall
consequence?

• Well, if we were dealing with truly quantitative values, and for the sake of this discussion we
assign a negligible consequence a value of $1, then the overall consequence is $3 (three
consequences all occurring simultaneously = $1 + $1 + $1).
• If we apply the same logic to the ‘semi-quantitative’ risk matrix, by adding together the three
consequence quantities (1+1+1)20, we derive an overall consequence value of ‘3’. Reading
along the consequence axis, a value of ‘3’ is equivalent to a rating of ‘medium’. Are we really
expecting that three negligible consequences would combine to produce a medium
consequence21?

19 ‘Ordinal’ scales do not provide any size or magnitude relationships; they merely indicate the order in which
the criteria are ranked. Hence numbering a matrix axis ‘1 to 5’ does not introduce any real quantification. It
merely shows that a ‘5’ is a higher order than a ‘4’, that a ‘4’ is a higher order than a ‘3’, and so on; it does not
in any way tell us by how much ‘5’ or ‘4’ is greater than ‘2’ in this usage (and no, the answer is not 2½ times or
twice as big).
20 For which we have to adopt the erroneous assumption that one can add and multiply ordinal numbers, an
assumption that many of these ‘semi-quantitative’ risk matrices do make.
21 Which could convert to equivalent impacts of thousands to tens of thousands of dollars if we aligned with
true quantitative values, of course depending upon the context.

• If we take this logical assumption even further, then five negligible consequences all occurring
simultaneously would produce a consequence rating of ‘5’, a ‘catastrophic’ position.
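The arithmetic above can be made concrete in a few lines. The labels mirror Table 2's consequence axis; the $1 'negligible' value is the hypothetical figure used in the discussion:

```python
# Adding ordinal consequence ratings versus adding real quantities.
labels = {1: "negligible", 2: "low", 3: "medium", 4: "high", 5: "catastrophic"}

# Three 'negligible' consequences (ordinal value 1, hypothetical value $1 each)
# occurring simultaneously:
ordinal_sum = 1 + 1 + 1   # = 3 on the ordinal axis
dollar_sum = 1 + 1 + 1    # = $3 in real terms

print(labels[ordinal_sum])   # 'medium' -- read off the ordinal axis
# ...yet $3 of loss is still plainly negligible.

# Taken further: five simultaneous negligible consequences
print(labels[min(1 + 1 + 1 + 1 + 1, 5)])   # 'catastrophic'
```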

So, combining consequence values in a risk matrix does not produce a logical result; how then can they
be combined? Some of these ‘semi-quantitative’ matrices try to resolve this problem by averaging out
the consequence values, whereby the assumption is made that an overall consequence can be
expressed as the average of the values of the individual consequences contributing to it. In our example
above, the average of the three ‘negligible’ consequences (each with a value of ‘1’) would be
‘1’ – still in the negligible column of the matrix. This seems to be a reasonable outcome, but is it really?

Consider a different scenario, where a risk is associated with three different ‘medium’ consequences
occurring simultaneously. Can we still average out the ordinal consequence ratings and expect a
meaningful result? The average of three ‘mediums’ (each with a value of ‘3’) is ‘3’, placing it still in the
‘medium’ column. How reasonable is this? Consider some additional information about the three
consequences22:

• ‘Medium’ (‘3’) for the first consequence equates to a four-month lost time injury.
• ‘Medium’ (‘3’) for the second consequence equates to regulatory intervention with several
days’ disruption to production, material penalties, and sustained negative publicity.
• ‘Medium’ (‘3’) for the third consequence equates to substantial loss of stakeholder
confidence.

Averaging the consequence ratings means that the combined values produce the same level for three
consequences all occurring together as for only a single consequence occurring.

Logically, would we expect that the combined effects of these three consequences should be the same
as the effect of only a single consequence? Or could we expect that the overall value would be at an
elevated level? Should the consequence level be a ‘3’ or a ‘4’?

Either outcome is plausible (depending upon the crossover threshold), but there is no easy way of
applying simple rules to these types of matrix to provide a consistent level of estimation of risk. Yet
the issue becomes even more complex if one applies the ‘averaging out rule’ to a set of consequences
that have different values. For example:

• Consequence A = low (2)
• Consequence B = medium (3)
• Consequence C = high (4)

Using an averaging approach, a consolidated consequence of ‘3’ (‘medium’ column23) would be used
to derive a level of risk. So, although we have a ‘high’ consequence that could occur, we only estimate
risk using a ‘medium’ consequence. How does that make sense?
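The averaging problem can be sketched the same way, using the ordinal values from the example above:

```python
# Averaging ordinal consequence ratings discards the worst credible outcome.
labels = {1: "negligible", 2: "low", 3: "medium", 4: "high", 5: "catastrophic"}

consequences = [2, 3, 4]  # low, medium, high -- as in the example above
average = round(sum(consequences) / len(consequences))  # (2+3+4)/3 = 3

print(labels[average])            # 'medium': the averaged rating used to derive risk
print(labels[max(consequences)])  # 'high': the worst consequence, now invisible
```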

An even greater sin is then to try to multiply out such values within the body of the matrix (as such
‘semi-quantitative’ risk matrices often attempt to do) to provide ‘risk scores’ (Table 2). Multiplying
two ordinal scales24 in this manner is mathematically nonsensical, being arithmetically inadmissible

22 The descriptors for these ‘medium’ consequence values were based on a real-life ‘semi-quantitative’ matrix.
23 Based upon an average of ‘2’, ‘3’, and ‘4’.
24 A related issue with using such problematic ordinal scales is known as monotonicity, where actual events (i.e.
actual occurrences of the scenario described by the risk) consistently produce much higher or lower outcomes
than determined by the risk analysis (Artzner, 1999).

(Fávero et al, 2023; Shavykin and Karnatov, 2020; MacKenzie, 2013; Hubbard and Evans, 2010; Alwin,
2005; Wilson, 1971). The absurdity of this approach is evident if one considers two safety-related risks,
both of which are ‘almost certain’. One is expected to have a ‘negligible’ consequence (e.g. a sore
finger, not even requiring first aid) whilst the other is expected to have a ‘catastrophic’ consequence
(e.g. multiple fatalities), yet in Table 2 the ‘catastrophic’ risk has a risk score that is only five times
greater than the ‘negligible’ risk. How does that make any sense?
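Plugging the Table 2 scores and some hypothetical dollar equivalents into the sore finger/multiple fatalities example makes the distortion explicit (the dollar figures are illustrative only):

```python
# Two 'almost certain' (likelihood 5) safety risks, scored as in Table 2.
score_sore_finger = 5 * 1   # 'negligible' consequence -> score 5
score_fatalities = 5 * 5    # 'catastrophic' consequence -> score 25
print(score_fatalities / score_sore_finger)   # 5.0: only five times apart

# Hypothetical dollar-equivalent consequences for the same two risks:
loss_sore_finger = 50          # trivial, no first aid required
loss_fatalities = 50_000_000   # multiple fatalities
print(loss_fatalities / loss_sore_finger)     # 1000000.0: six orders of magnitude apart
```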

The best that most risk matrices can achieve is a poor proxy for a statement of expected loss (Thomas
et al, 2014) – something that a simple quantitative risk analysis can provide more quickly and reliably.

Table 2: Hypothetical ‘semi-quantitative’ risk matrix25

                                    Consequence
                      Negligible   Low   Medium   High   Catastrophic
Likelihood                1         2       3       4         5
Almost certain (5)        5        10      15      20        25
Likely (4)                4         8      12      16        20
Possible (3)              3         6       9      12        15
Unlikely (2)              2         4       6       8        10
Rare (1)                  1         2       3       4         5

Linear vs logarithmic or exponential

Again, looking at Table 2, if we look at any individual row of this matrix (i.e. for any given likelihood),
then as we go up each consequence level the risk value goes up by an equivalent amount (i.e. by a
value of ‘1’ each time: from 1 to 2, from 2 to 3, etc.). When this is multiplied out into ‘risk
ratings’, for example for an ‘almost certain’ likelihood, the difference between ‘negligible’ and ‘low’ (a
value of ‘5’) is the same as the difference between ‘medium’ and ‘high’, which in true quantitative
terms is nonsense (where the difference between ‘medium’ and ‘high’ would be expected to be
exponentially higher). Even more bizarrely, the difference between a ‘medium’ consequence and a
‘catastrophic’ consequence is only twice the value of the difference between a ‘low’ and a ‘medium’
consequence. When one looks at the descriptors (in almost any ‘semi-quantitative’ risk matrix), these
equivalent differences in ascending values for consequence (or for likelihood, for that matter) make
no logical sense.

To see just how nonsensical such a ‘semi-quantitative’ matrix actually is, we should look at how
these estimates relate to actual calibrated and tested measures. If we substitute $ point values26 for
consequence, based on common descriptive criteria, say for losses on a $10m project, all at the same
likelihood level, we could get something looking like Table 3. As the consequence level increases, the
difference between any two adjacent quantitative values increases near-exponentially. Thus, as consequence

25 Please, please, please never ever use this matrix!
26 To make the example simpler than using the range values that are common in many matrices.

increases, using a standard matrix (as in Table 2), the value of a higher risk is significantly
underestimated compared with a lower-rated risk.

The problem can be further compounded when similar ‘semi-quantitative’ linear values are assigned
to likelihood ratings, since most criteria descriptors are non-linear in nature and can be logarithmic or
exponential when substituted for probabilities.
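Comparing the Table 2 score increments with the Table 3 dollar values (same 'almost certain' row) shows the linear/exponential mismatch directly:

```python
# Step sizes along the consequence axis: ordinal scores vs Table 3 dollars.
scores = [5, 10, 15, 20, 25]                                   # Table 2, 'almost certain' row
dollars = [100_000, 500_000, 2_000_000, 4_500_000, 9_000_000]  # Table 3 $ loss values

score_steps = [b - a for a, b in zip(scores, scores[1:])]
dollar_steps = [b - a for a, b in zip(dollars, dollars[1:])]

print(score_steps)   # [5, 5, 5, 5] -- every step looks the same size
print(dollar_steps)  # [400000, 1500000, 2500000, 4500000] -- steps grow rapidly
```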

Table 3: Hypothetical quantitative increases across consequence values

                             Consequence
               Negligible   Low         Medium       High         Catastrophic
% of budget    <1%          5%          20%          45%          95%
$ loss         <$100,000    $500,000    $2,000,000   $4,500,000   $9,000,000
$ difference                $400k       $1.5m        $2.5m        $4.5m

So far, we have been discussing the problems with these ‘semi-quantitative’ matrices in terms of point
values for consequence and likelihood. However, the vast majority of risk matrices use criteria
ranges, not spot values. So, when a matrix assigns a value of ‘5’ to a consequence level, this can equate
(depending on the context) to a substantial quantitative range27.

In one example of a matrix developed by a well-known international audit firm, the firm produced
consequence criteria similar to Table 4. Despite including quantitative data in their
risk criteria (which would have allowed a simple calculation to derive a $ risk value for each risk), they
still assigned a numbering scale from 1 to 5 and multiplied these ordinal numbers out instead.
Although this example is from about a decade ago, I believe that they still use this type of matrix in
some of their engagements, and they are not alone. This seems to teeter on the brink of absolute
madness: to have exponential quantitative criteria, then to substitute them with an arbitrary linear
ordinal scale (1 to 5), then to multiply them out against another linear ordinal scale, and then to
believe that all of this messing about produces a simpler and more accurate process and result.

Table 4: Extracted rating schema from an anonymous advisory firm

                           Consequence
                Negligible   Low             Medium         High          Catastrophic
                1            2               3              4             5
$ value         $0 to $10k   $10k to $100k   $100k to $2m   $2m to $20m   $20m to >$100m
Range cover28   $10k         $90k            $1.9m          $18m          >$80m

27 However, in many risk matrices this problem is even more pronounced, as the ‘5’ value is often applied to
unbounded criteria, for example: consequence of 5 = >$100 million – as in Table 4. This means that the actual
consequence value has a range anywhere from $100 million to infinity.
28 I have inserted the ‘range cover’ values for comparison purposes. Such a calculation was not part of this
advisory firm’s methodology.

A good calculation informs a good decision?

This also introduces another problem with the use of any risk matrix that includes range values for
either or both consequence and likelihood criteria: the issue of relativity and boundaries.
Looking again at Table 4, and considering relativity, assume that we have three risks, all
of which are rated ‘High’, with equally rated control efficiencies. Since the matrix is based on only very
limited attributes, it will produce risk ratings that give equal prioritisation to the three
risks (Table 5).

Table 5: Hypothetical risk comparison for a procurement decision

Risk                                       Risk rating   Preference
Financial risk for purchasing solution A   High          =1
Financial risk for purchasing solution B   High          =1
Financial risk for purchasing solution C   High          =1

Based upon this information, how does one make a decision on such an output from a risk analysis?29

However, although all three apparently have the same risk rating (read off the risk matrix), these ‘high’
values could represent consequences ranging anywhere from $2m to $20m (using Table 4). This means
we could have a very different risk report (Table 6) if the actual quantitative risk values were derived,
even assuming that each of the likelihoods is equivalent. Such an alternate approach, which is about
as simple an analysis as one can get, provides more meaningful data input into the procurement
decision-making.
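A sketch of the substitution, using the hypothetical expected-loss values from Table 6: the matrix rating cannot separate the three solutions, while the dollar values rank them immediately:

```python
# Three 'High'-rated solutions (Table 5) re-analysed with expected-loss values.
expected_loss = {"A": 2_500_000, "B": 8_000_000, "C": 19_000_000}  # Table 6 ($)
matrix_rating = {name: "High" for name in expected_loss}           # indistinguishable

preference = sorted(expected_loss, key=expected_loss.get)  # lowest risk value first
print(preference)  # ['A', 'B', 'C']
```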

29 Yes, I have seen a similar analysis provided to decision-makers in the real world.

Table 6: Hypothetical substituted comparison for a procurement decision

Risk                                       Risk rating   Risk value ($)30   Preference31
Financial risk for purchasing solution A   High          $2.5m              1
Financial risk for purchasing solution B   High          $8m                2
Financial risk for purchasing solution C   High          $19m               3

Conclusion: fact or fallacy

“The risk matrix provides a good calculation tool”: is a fallacy.

“A risk matrix is a good display tool”


I have raised previously in this paper the issue of trying to substitute qualitative descriptors with
simple linear ordinal scales on the two axes of the risk matrix, in an attempt to transform it into a
pseudo-quantitative tool. Beyond the mathematical inappropriateness of trying to multiply two
ordinal scales, or of substituting a linear progression for a logarithmic one, the use of logarithmic scales
on a matrix presents a significant problem of its own: a problem of interpretation. It is simply
that many individuals, including ‘experts’, have difficulty in interpreting logarithmic scales (Romano
et al, 2020; Menge et al, 2018; Heckler et al, 2013). In my experience, I have encountered quite a few
risk practitioners who simply do not understand that these scales are logarithmic, or what
logarithmic even means. Sutherland et al (2022) make the compelling point that a lack of
understanding of the logarithmic nature of many risk matrices significantly compromises the user’s
ability to make decisions based on the matrix outputs.

The issue is compounded further when the risk matrix is used as a display tool, such as in a ‘heat map’
(Figure 1). This particular hypothetical heat map indicates that ‘risk 7’ has the highest risk rating and
is the most important to deal with (after all, it is in the red zone), followed by a cluster of risks at a
second priority (6, and 8 to 12), all in the ‘orange’ zone, with a cluster of lower priority risks (1 to 5) in
the yellow zone.

What could be a simpler way of displaying the relative ratings and importance of a group of risks?
Apart from the issue that such a display can be completely misleading!

30 Equivalent to the expected loss for each of the three risk scenarios.
31 Preference for proceeding with the purchase.

Figure 1: Hypothetical heat map

Instead of using a ‘standard’ risk matrix/heat map, what if we plotted these risks according to their
quantitative values (Figure 2)? We can get a very different order and relative importance of the various
risks, compared with the heat map. This quite different picture occurs because the risk matrix/heat
map cells each represent a substantial range of ‘consequence’ and ‘likelihood’ values (which could be
tens or thousands of times higher or lower within a single cell), meaning that the overall risk values will
also represent a substantial range. It is also not uncommon for the ranges within a single cell to overlap
with the ranges of cells of a different ‘colour’, which could result in a qualitatively ‘extreme’ risk (red
cell) being quantitatively lower than a ‘high’ risk (orange cell). Risk matrices can be designed so that
such overlaps do not occur, but this requires matrices to be calibrated and validated with quantitative
data – something that is not commonly undertaken.
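The overlap problem can be shown with two hypothetical cell bands. The dollar ranges are invented for illustration; the point is only that wide bands from differently coloured cells can cross:

```python
# Each matrix cell maps to a wide band of expected loss (hypothetical ranges).
orange_cell = (500_000, 5_000_000)    # 'high' band, $ range
red_cell = (2_000_000, 50_000_000)    # 'extreme' band, $ range

risk_rated_extreme = 2_500_000   # sits near the floor of the red band
risk_rated_high = 4_000_000      # sits near the ceiling of the orange band

print(red_cell[0] < orange_cell[1])          # True: the bands overlap
print(risk_rated_extreme < risk_rated_high)  # True: 'extreme' < 'high' in dollars
```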

Figure 2: Hypothetical quantitative risk values

There are additional problems with the way in which the risk matrix presents information that can
lead to irrational thinking and misinterpretation (see “provides a rational tool for risk assessment”
below).

Conclusion: fact or fallacy

“The risk matrix provides a good display tool”: is a fallacy.

“Provides a rational tool for risk assessment”

To what extent is the risk matrix grounded in the real world? There are many claims in various
publications for the ‘objectivity’ and ‘rationality’ of the risk matrix, although the evidence suggests
otherwise (see below).

Admittedly, all risk assessment methods, whether they are quantitative or qualitative, have a degree
of subjectivity, and are prone to biases and predictable errors (Cardenas et al, 2014; Tversky and
Kahneman, 1974). The presence of subjectivity in alternative tools is often used by advocates of the
risk matrix to argue for the continued use of the matrix and for rejecting quantitative methods.
However, this is a spurious argument relying on a misleading false equivalence. Not all subjectivity is
the same. Using a risk matrix introduces significantly greater levels of uncertainty, bias, and error into
risk analysis than almost any other generally accepted technique.

In most uses of the risk matrix, practitioners have little, if any, appreciation of the underlying
uncertainty within the matrix, or the accuracy of their qualitative estimates of either consequence or
likelihood. If we examine any typical use of the risk matrix, for a set of risks, there will be differing
levels of knowledge about the nature of each of these risks. However, few risk analyses take account
of the differing strengths of knowledge for the different risks that we input into the risk matrix. This
can become particularly acute when high likelihood/ low consequence risks are being compared with
those that are low likelihood/ high consequence. Whilst risk matrix outputs may yield similar risk
ratings, the nature of these two groups of risk could be very different (Aven, 2017). For example, with
high likelihood/ low consequence risks, the types of scenarios that they represent would be expected
to be encountered far more frequently, be much more familiar, be better understood, and be
associated with much less uncertainty. Conversely, low likelihood/ high consequence scenarios are
generally encountered much less frequently, would be more unfamiliar, and be accompanied by much
more uncertainty.

The very presence of such uncertainty can also affect how the brain perceives and makes sense of the
underlying data and scenarios (Sun and Wåhlström, 2022; Fjäder, 2021; De Luca Picione and Lozzi,
2021; Soeiro de Carvalho, 2021; Maitlis et al, 2013; Heilman et al, 2010; Platt and Huetall, 2008; Naqvi
et al, 2008). The combined effect of altered cognitive and emotional processing will substantially
reduce the rationality of judgements made about risk, an effect far more pronounced in users of risk
matrices than in users of quantitative approaches. The result is widely different qualitative assessments
and interpretations by different practitioners and users, or by the same individual undertaking the
identical analysis at different times.

If further proof were needed of the irrationality of using a risk matrix, one need only look to the
phenomenon of ranking reversal (Faramondi et al, 2023; Åkerberg, 2021; Hong et al, 2020; Kaya et al,
2019; Duim, 2015; Rozell, 2015; Wilson, 2014). The incredible sensitivity of risk matrices to ranking
reversal has been acknowledged for many years: the visual ordering of the criteria on the matrix
axes can strongly influence the way in which the matrix is interpreted. For example, merely reversing
the order (swapping ascending for descending ordering of the criteria, or vice versa) can produce
very different values and prioritisation for the same set of risks (Thomas et al, 2014).

Further, rational and logical construction of a risk matrix would naturally lead to the expectation that
different risks recorded with the same qualitative risk rating (for example, a ‘medium’ rating
in different cells of the matrix) would have similar quantitative values. However, this is commonly not
the case (Pickering and Cowley, 2010). It is not uncommon to see, say, ‘medium’ risks that have a
quantitative value (when a substitution analysis is used) closer to neighbouring ‘low’ or ‘high’
risks than to the value of other ‘medium’ risks. This raises another interesting dilemma that using

the risk matrix creates. Given two risks with apparently the same risk rating (say ‘medium’),
which risk should one be more concerned about? The risk where the scenario is more likely to occur,
the scenario with the higher potential consequence (but lower likelihood), or the risk associated with
the higher uncertainty (the scenario where consequence and likelihood could be orders of magnitude
higher or lower than estimated)?

Conclusion: fact or fallacy

“The risk matrix provides a rational tool for risk assessment”: is a fallacy.

“Explains risks in a clear way”

The risk matrix is based solely on the premise that risk comprises only two dimensions,
consequence and likelihood, ignoring the multidimensional nature of risk.

Different individuals often perceive the same risk very differently. For some, a specific risk will be
regarded as high; for other users, when using a risk matrix, that same risk may attract a much lower
or higher rating. Why must we only think about risk in terms of simple (and usually single)
consequence and likelihood relationships? Even if we consider just these two criteria, they
can be far more than a single estimate of potential impact from a future event and its likelihood.

Consequence is itself multifactorial, comprising for example:

• The way in which a postulated event, changing conditions, or other sources of risk32 could
potentially interact with an organisation, and the potential direct impacts that this could have
at the point (or points) of interaction.
• The potential indirect impacts on the organisation that may arise from the effect of these
future conditions on other factors in the ‘external environment’33.
• The way in which conditions could change further (as a result of the various initial impacts of
the changing conditions), which could either amplify or constrain direct and indirect impacts
upon the organisation.
• The additional direct and collateral impacts that occur as the effects of the ‘interaction’ are
transmitted through the organisation and its interdependencies.
• The way in which these impacts could be further modified as the organisation’s decision-
making and responses take effect.
• The manner in which the organisation’s sensitivity towards different types and magnitudes of
impact may change over time (from the present to different future time periods) and as a
result of the consequences that arise from these impacts.

32 That I will refer to simply as ‘conditions’ in the following dot points.
33 And that interact with the organisation, or ripple through the environment affecting other secondary,
tertiary, etc. interdependencies.

Similarly, likelihood, even at its simplest, is a multifactorial construct which could include:

• The likelihood that a suite of initiating conditions could interact in a manner that will give rise
to a source of risk.
• The likelihood that a source of risk will actually manifest.
• The likelihood that a source of risk will become exposed to the organisation.
• The likelihood that the source of risk will interact with the organisation or its
interdependencies.
• The likelihood that preventative controls will function as expected.
• The likelihood that this functioning will have a desired effect on the source of risk/interaction.
• The likelihood that the source of risk or its effects will penetrate the organisation.
• The likelihood that other controls will contain, constrain, or otherwise limit or modify these
effects.
• The likelihood that the assumed impacts will occur, etc.
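Treating likelihood as a chain of conditional stages, as the list above suggests, rather than as a single matrix estimate, can be sketched as a simple product (all stage probabilities below are hypothetical):

```python
# Overall likelihood as the product of conditional stage likelihoods.
stages = {
    "initiating conditions give rise to a source of risk": 0.50,
    "the source of risk becomes exposed to the organisation": 0.40,
    "preventative controls fail to function as expected": 0.10,
    "effects penetrate the organisation": 0.60,
}

overall = 1.0
for probability in stages.values():
    overall *= probability

print(f"overall likelihood: {overall:.3f}")  # 0.012 -- far below any single stage
```

A single-cell matrix estimate of 'likelihood' collapses all of these conditional stages into one judgement, hiding where the probability actually comes from.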

There is also a range of other potential factors that can be considered to form part of our
construction of risk, in addition to simplistic consequence and likelihood:
• The degree of connectivity between and amongst individual risks (and the conditions that give
rise to them). For example, many practitioners take individual risks one at a time and ‘push’
them through the matrix, yet the conditions described by our constructed risks rarely exist in
isolation from one another. If the situations described by ‘risk A’ and ‘risk C’ occur within the
same time frame, what effect could this have on the likelihood of the scenario for ‘risk B’ occurring?
• The extent of vulnerabilities within the organisation, for example:
o vulnerabilities that have been introduced during the design and development of the
organisation, its systems34, processes, and its people.
o vulnerabilities that have been introduced over time (for example, through
normalisation of deviance).
o vulnerabilities that emerge as a result of changing conditions (either as described by
the risk scenario, or other changing conditions).
• The speed with which conditions change, effects emerge, interact with the organisation, and
effects are experienced.
• The speed within which changing conditions are detected, recognised, decisions are made,
and enacted (and other response actions).
• The variability in conditions, and in data associated with sources of risk and the construction
of risk.
• The speed with which primary and collateral (contagion) effects spread or disperse.
• The rate and extent of amplification as the impacts and effects cascade through networks and
other systems, for example the bullwhip effect (Ouyang and Li, 2010).
• The speed within which these response actions take to have the desired level of effect.
• The manner in which the changing conditions could affect individual and group awareness,
sensemaking, cognition, emotions, and behaviours.
• The extent and speed of an organisation to recover from the specific impacts associated with
the risk.
• The extent to which the problem space and the risk can be adequately framed, including the
extent of remaining uncertainty.
• The extent to which organisational and group culture amplifies or diminishes the perception
of risk.

34
Any system within an organisation, that it interacts with or is ultimately dependent upon. Not just IT systems.

• Individual and group aversion or preference for certain types of risk (Kahneman and Tversky, 1979).
• Assumptions about the extent to which the risk is shared with other parties, the extent to
which other parties are willing to ‘take on’ this shared risk, and the extent to which other
parties will experience similar risks.
• The capacity to absorb or tolerate the individual types and magnitudes of risk.
• The different weighting that might be applied to different risk-related attributes.

The concept of risk is complex. There is no getting away from this complexity if risk is to be managed
effectively. The overly simplistic, two-dimensional qualitative attributes promoted by risk matrix-based
approaches do not necessarily explain risks in a clear way. By such oversimplification, these matrix
methods force complexity beneath the sight of practitioner and decision-maker alike. They do not
reduce complexity; they merely serve as crude obfuscation that hides the complexity, and in doing so
allow inappropriate, inadequate, and incomplete assumptions to be made. In such situations, risk
assessment becomes reduced to an exercise in crude guesswork.

Conclusion: fact or fallacy

“The risk matrix explains risks in a clear way”: is a fallacy.

“Makes complex information more accessible” and “Provides the basis for robust discussions”.

It could also be argued that the limitations of the risk matrix for exploring complex information may
also be more about how risk matrices are most commonly applied, rather than the actual risk matrix
itself. Certainly, in my experience, many practitioners and decision makers treat the risk matrix as
some magical black box. Data is fed into the matrix, and an answer pops out, without any need to
understand the complexity of the context or the data, or to even understand the ‘risk model’ on which
the matrix was designed. Unthinking use of the risk matrix certainly does produce outputs of dubious
value. I have personally seen situations where a risk practitioner derives risk ratings from a matrix,
intuitively admits that they seem to make little sense in relation to the context, yet they will then
accept the result because it is what the risk matrix says it is.

As we have discussed previously in this Whitepaper, it can be reasonably argued that the risk matrix
does not improve access to complex information at all. In fact, it is questionable if the common risk
matrix makes any accurate information available (Cox et al., 2005). Rather, the matrix makes a very
crude, simplistic abstraction of the available information, in the process ignoring, hiding, or
disregarding a great deal of very useful data about what is, in reality, a complex system or world. In
other words, the risk matrix and associated qualitative process often creates such an abstraction of
the real world that any comparison with reality is lost. A great many applications of qualitative (and
‘semi-quantitative’) risk matrices are describing the real world in the same way that a children’s
kindergarten book about doctors is describing the reality of the operation of a complex intensive care
unit.

Risk matrix-based methodologies rarely demonstrate effective risk model thinking, and when risk
modelling is conducted, the valuable information is often pushed to the side when only two
parameters are considered by the risk matrix, especially when those parameters are also abstracted
to the simplest possible degree. To compound this almost complete obfuscation of innate
complexity, many applications of the risk matrix then go on to produce a single risk value that reduces
a multidimensional issue to a single word (or pseudo-number) construct. It is then a common
occurrence for decision-makers to use this single ‘information’ product as the key input into their
decision-making processes. Yes, I have seen senior executive teams and Boards base their strategic
decisions on a single-page risk heat map.

This practice of providing ranked ‘lists of lists’ often ignores the nature of the sources of risk
(including networks of causal, contributing, and influencing factors), interactions within and across
systems, and the influence of vulnerabilities (present and potentially emerging). The result is often
an information product of unknown quality, rather than a knowledge product upon which
decision-making can rely.

Paying less attention to the manufactured risk rating, and more attention to constructing potential
scenarios that consider underlying conditions and their importance, may lead to more robust risk
discussions and to better informed risk management.

Conclusion: fact or fallacy

“The risk matrix makes complex information more accessible”: is a fallacy.

“Tells us what threats and hazards can happen”


An effective risk management process should provide insight into how sources of risk (threats,
hazards, causal, contributing, and influencing conditions, etc.) could occur, and provide an
understanding of past situations, the current situation, and potential future scenarios and options. I
struggle to think of how a risk matrix can add value to such a process. I can really only see problems.
If anything, by its very nature, including a risk matrix in the process is only likely to result in key
information being excluded, rejected, ignored, or hidden.

Furthermore, some peer-reviewed studies have shown that the use of risk matrices is capable of
comparing only a small percentage of the possible consequence/likelihood pairs for a given hazard or
risk (Cox, 2009). This means that the very act of using the matrix gives us only a tiny insight into even
just the consequence and likelihood relationships, and it will certainly fall well short of helping us to
explore the nature of multiple potential future scenarios.

Exceptionally simple techniques such as event trees, bow-ties, etc. can provide us with far greater
insight into “what threats and hazards can happen”, than a risk matrix ever will.

Conclusion: fact or fallacy

“Tells us what threats and hazards can happen”: is a fallacy.

“Reveals unforeseen danger”
I am unsure of the origin of this oft-repeated claim. I have seen this claim blindly repeated by so many
writers over the years, without any evidence or even a plausible argument. I am left wondering what
they could have been smoking at the time. I have been unable to find even the flimsiest factual basis
to support the contention. How on earth can a simplistic tool that aligns an often-guessed
consequence with an often-guessed likelihood reveal anything resembling ‘unforeseen danger’?

Even from a logical perspective, such a claim is nonsense, because by pushing data through a risk
matrix we must have already foreseen some ‘danger’, and have made some sort of estimate about
its consequences and associated likelihood. There is nothing more unforeseen that will be revealed
by a risk matrix.

To achieve any sort of such revelation, the practitioner has to resort to some actual risk
assessment techniques, such as undertaking horizon scanning, strategic and operational analysis,
threat and hazard analysis, vulnerability analysis, building risk models, exploring causal, influencing,
and contributing pathways within multiple scenarios, etc. In fact, doing almost anything instead of just
relying on a risk matrix.

Conclusion: fact or fallacy

“The risk matrix reveals unforeseen danger”: is a fallacy.

“Identifies the gravest risks”


Using a risk matrix, it is not uncommon to obtain significantly different qualitative values for two risks
which have very similar quantitative values, because of their different positioning within the matrix (for
example, a lower-consequence/higher-frequency risk compared with a higher-consequence/less
frequent risk). This so-called risk inversion (Krisper, 2021; Cox et al., 2008) calls into question whether
a risk matrix can be relied upon to establish any robust ranking of a group of risks, let alone identify
the “gravest risks”, even if we were to accept that simplistic estimates of just consequence and
likelihood were adequate.
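To make the inversion concrete, the following is a minimal sketch. The band edges, the additive scoring rule, and the two example risks are all hypothetical assumptions for illustration, not taken from any real matrix:

```python
# Hypothetical 5x5 matrix: the band edges and scoring rule are illustrative only.
CONSEQUENCE_BANDS = [1e5, 1e6, 1e7, 1e8]   # upper bounds in $; top band unbounded
LIKELIHOOD_BANDS = [0.01, 0.1, 0.3, 0.7]   # annual probability band edges
RATINGS = ["low", "medium", "significant", "high", "critical"]

def band(value, edges):
    """Return the 0-based band index for a value against ascending edges."""
    for i, edge in enumerate(edges):
        if value <= edge:
            return i
    return len(edges)

def matrix_rating(consequence, likelihood):
    """Qualitative rating from a symmetric additive scoring rule."""
    score = band(consequence, CONSEQUENCE_BANDS) + band(likelihood, LIKELIHOOD_BANDS)
    return RATINGS[min(score // 2, 4)]  # map combined score 0..8 onto five ratings

# Risk X: frequent, modest consequence. Risk Y: rare, large consequence.
risk_x = (5e5, 0.6)    # expected loss = $300,000
risk_y = (3e7, 0.01)   # expected loss = $300,000

for name, (c, p) in {"X": risk_x, "Y": risk_y}.items():
    print(name, matrix_rating(c, p), f"expected loss ${c * p:,.0f}")
```

Both risks carry an identical expected loss, yet under this (quite ordinary-looking) scoring rule they land in different qualitative bands, which is exactly the inversion described above.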

Even assuming that some level of accuracy could be achieved from the ‘average’ risk matrix, the most
horrendous of errors can still be introduced regularly because of the very way in which the matrix is
constructed. For example, most risk matrices use ‘unbounded criteria’, such as the extreme right-hand
side of the consequence criteria (for example, as in Table 4), where the consequence range has no
upper limit identified. A ‘severe’ consequence may be expressed35 as ‘months to years36 of
disruption’, or as having a value from $20m to more than $100m. This means that a consequence (or risk
once likelihood is applied) rated as severe (using Figure 7 by way of example) could result in a loss of
$20m (catastrophic for a part of a business), a loss of $100m (catastrophic for the whole business), or
$400bn (catastrophic for the whole economy), yet all three risks would be rated with the same colour-
coded level of risk.

35
Hypothetically.
36
Which could mean a consequence lasts for a couple of months, a couple of years, or a couple of centuries or
more.
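The effect of an unbounded top band can be sketched as follows. The dollar thresholds and band labels here are hypothetical, standing in for the criteria discussed above:

```python
# Illustrative consequence bands with an unbounded top category,
# mirroring the hypothetical criteria discussed above.

def consequence_rating(loss_dollars):
    """Map a dollar loss onto qualitative bands; the top band has no ceiling."""
    bands = [
        (1_000_000, "minor"),
        (5_000_000, "moderate"),
        (20_000_000, "major"),
    ]
    for ceiling, label in bands:
        if loss_dollars <= ceiling:
            return label
    return "severe"  # everything above $20m, with no upper limit

# Three very different losses, one indistinguishable rating:
for loss in (20_000_001, 100_000_000, 400_000_000_000):
    print(f"${loss:,} -> {consequence_rating(loss)}")
```

A loss four orders of magnitude larger than the band's lower edge produces exactly the same output, which is the problem the unbounded criterion creates.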

I want to go back to the reasons for undertaking risk assessment (and more generally risk
management). At its best, we hope to have better knowledge on which to make important decisions,
increase the chances that we will be successful, and reduce the chances that we will be unsuccessful
or suffer harm to something of value. Surely, to attain any of these ‘hopes’, we want to have the best
available37 information inputs for judgement and action. For that purpose, one of the things that risk
assessment tries to achieve is to help us understand what the most important (or “gravest”) risks are.
Yet how can we ever hope to have the best available information if we are just basing our risk
assessment on:

• An incomplete data set: just how incomplete, we often have no idea.
• A vague data set, where the value of risk could be anywhere in a massive range of ‘values’,
sometimes with no defined limits.
• A data set of undefined accuracy, with little idea whether the risk has the value defined by the matrix,
or is 30% or 100% (or more) higher or lower.
• A data set that is based on just consequence and likelihood, without knowing if these two criteria
are the most important, or if one or more of a whole suite of other factors may be more important
considerations (Krisper, 2021).

We then decide to further increase uncertainty, and obscure any objective measure of importance,
by pushing vague data through a risk matrix.

Conclusion: fact or fallacy


“The risk matrix identifies the gravest risks”: is a fallacy.

“Tells us if the risk is being controlled effectively” and “Provides the mechanism for evaluating risk treatments”
Practical experience suggests that using risk matrices provides little if any true insight into the
operation and effectiveness of controls. An all-too-familiar experience of risk reporting in the
‘risk management committees’ of Boards or of senior executives is the often unchanging rating of
multiple risks over multiple reporting periods, despite substantial control improvements and other
risk treatments being delivered. In several organisations that I have had longstanding experience
with, I can well remember whole collections of risks retaining a ‘high’ risk rating over a decade or more,
irrespective of the investments made in controls and risk treatment.

Risk matrices are relatively insensitive to changes in the level of risk. In some parts of the risk matrix,
one could achieve a halving of the level of quantitative risk (or an even greater reduction), yet the
qualitative rating would remain ‘high’. If a simple and cheap single control improvement halved
the level of risk, I would say that it was having a positive effect and was a potentially great investment.
This is not just a theoretical consideration: in risk management processes all over the world,
effective control improvements are being delivered, but in many cases are not being reflected by
commensurately reduced estimates of risk.

37
After all, best available information is one of the key principles extolled by the ISO 31000 Standard. It could
also be argued that simply by using a qualitative or ‘semi-quantitative’ risk matrix, when better alternatives are
available, represents a serious breach of this principle and is a major failure to align with the Standard’s
guidance.

Sensitivity to changing controls

This insensitivity is an example of risk matrices being very poor tools for demonstrating translation
invariance (Artzner et al., 1999). Where efforts are made to reduce a risk (e.g., determined quantitatively),
ideally this should be reflected by a similar level of reduction in the qualitative value of the risk. For
example, using the quantitative values described in Figure 7 and the matrix from Figure 3, consider
a hypothetical risk ‘A’. Suppose we implement preventive treatments to reduce the level of
quantitative risk (by reducing the probability) from $102m (C=$103m x p=0.99) to $72m (C=$103m x
p=0.7). So long as the cost of treatment is less than the $30m of potential loss avoided, this should
be a sensible decision (all other factors being equal). However, if we consider the same scenario on
the qualitative matrix, starting from a ‘critical’ risk (C = ‘catastrophic’ and L = ‘almost certain’),
implementing the proposed treatments will achieve no reduction in the overall level of risk, which will
remain unchanged as ‘critical’ (C = ‘catastrophic’ and L = ‘likely’). In this scenario it certainly does not
look as though any meaningful control of the risk would be achieved by investing in the proposed
treatments.
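The arithmetic above can be checked with a short sketch. The likelihood band edges and the rating rule here are illustrative assumptions that mimic the hypothetical Figure 3 / Figure 7 example, not a real matrix:

```python
# Reworking the hypothetical risk 'A': figures and band edges are
# illustrative only, loosely following the Figure 3 / Figure 7 discussion.

def likelihood_band(p):
    """Assumed qualitative likelihood bands for an annual probability."""
    if p >= 0.9: return "almost certain"
    if p >= 0.6: return "likely"
    if p >= 0.3: return "possible"
    if p >= 0.05: return "unlikely"
    return "rare"

def qualitative_rating(consequence_band, likelihood):
    # In the hypothetical matrix, a 'catastrophic' consequence rates as
    # 'critical' at both 'almost certain' and 'likely'.
    if consequence_band == "catastrophic" and \
            likelihood_band(likelihood) in {"almost certain", "likely"}:
        return "critical"
    return "high"

C = 103.0  # $m, a 'catastrophic' consequence
before, after = 0.99, 0.7

print(f"quantitative risk: ${C * before:.0f}m -> ${C * after:.0f}m")
print(qualitative_rating("catastrophic", before),
      "->", qualitative_rating("catastrophic", after))
```

The quantitative figure falls by roughly $30m, yet the qualitative output is unchanged, so the matrix gives no visible credit for the treatment investment.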

For many risk matrices, the low granularity, range compression and other inaccuracies mean that the
effects of changing some controls just cannot be tracked on an instrument as blunt as a risk matrix.

Even where a risk matrix is capable of determining a change in the level of risk, following, say, the
improvement of a suite of controls, this still does not give us any indication of the effectiveness of the
proposed treatment or actual control. It just tells us that some effect has occurred, the significance of
which is likely to remain obscure.

Conclusion: fact or fallacy

“The risk matrix tells us if the risk is being controlled effectively”: is a fallacy.
“The risk matrix provides the mechanism for evaluating risk treatments” is a fallacy.

“Tells us what actions are required to manage the risk” and “informs more
accurate risk management strategies”
If the risk matrix is relatively insensitive in determining the extent to which control interventions
affect or potentially affect risk, then it is even less probable that it will be able to provide sufficient
guidance on what actions are required to manage risk.

The potential for the risk matrix to misinform is considerable, and there are a growing number of
examples, from published research and from anecdotal experience, where the use of a risk matrix can
seriously mislead the decision-maker. Whilst a great deal of the potential to misinform arises from the
intrinsic high subjectivity and vagueness of the risk criteria used, the way in which the matrix is
constructed is also a significant influence, and may result in the same decision being inferred
irrespective of the level of risk (Busby and Kazarians, 2018). For example, assume that an organisation
has established that any risk with a rating of ‘high’ or above is not tolerable and requires immediate
treatment. Using the matrix in Figure 3, any situation that is ‘almost certain’ to occur, irrespective of
the triviality of its impact, will be immediately prioritised for urgent action. Now, it could be argued that
this hypothetical matrix has been deliberately skewed to support the argument. Although
‘hypothetical’, the matrix structure is almost identical to a real-world example developed for a
multinational company by a global consulting firm (just modified enough to protect the guilty!).

Figure 3: Hypothetical skewed matrix

As this Whitepaper has explored previously, the risk matrix (with its simplistic reliance on crude
consequence and likelihood estimates) fails to make available a whole variety of information that is
required to make effective decisions about risk, and in particular about their treatment. Using a
matrix may provide some insight, and allow risks with widely distant qualitative ratings to be ranked.
However, a situation familiar to many users of the matrix will be the clustering of multiple risks with
the identical risk ratings38. In such situations, the matrix provides little help in prioritising individual
risks for treatment, let alone for guiding decisions on which treatment options would be the best
choice.

Put simply, the risk matrix, in many cases, is just too blunt and insensitive an instrument to provide
the insight required to determine the best approaches for addressing unacceptable risk.

There are a wide variety of tools and techniques that can be used to develop a more comprehensive
exploration of the nature of risk that would better inform how to manage risk. Referring to “IEC 31010:
2019 – Risk management: Risk assessment techniques” would provide a starting point to identify the
range of approaches that could be used to improve your assessment of risk.

Conclusion: fact or fallacy

“The risk matrix tells us what actions are required to manage the risk”: is a fallacy.
“The risk matrix informs more accurate risk management strategies”: is a fallacy.

38
A risk register that I recently reviewed had approximately 60 risks rated at ‘significant’, and about 30 risks
rated at ‘high’ with an approved risk appetite that required all risks at significant and above to have “risk
treatment plans developed”.

“Provides a comprehensive overview of risk”

This is another strange claim to make about the risk matrix. The traditional risk matrix is a ranking tool;
it certainly does not contribute to identifying risk, nor to understanding anything about the nature of
individual risks (beyond a highly artificial construction of consequence and likelihood), nor does it
provide any insight into the interdependencies and interactions that could occur across the diverse
range of factors that could result in unexpected outcomes.

Does the risk matrix really produce an output that is appropriately informative for decision-makers?

Well, the very nature of the risk matrix means that it is extremely difficult to establish, with any degree
of confidence, the presence and importance of any intrinsic uncertainty in either the analytical inputs
or outputs (Krisper, 2021). Even if a sufficiently broad and deep data set about risk was made
available, using the risk matrix would reduce it down to a bare (and largely unacceptable) minimum,
resulting in an overview that is anything but comprehensive.

By way of a different line of thinking, some practitioners use the term ‘risk matrix’ to refer to a ‘heat
map’ (as in Figure 1). In which case, can the ‘heat map’ provide a comprehensive overview of risk?

Certainly, a graphical display can provide an overview of all of the rated risks within a single diagram,
and show the distribution and clustering of all of these risks. However, whilst providing some sort of
summary overview, the ‘heat map’ falls prey to the same issues that affect the risk matrix. Even if we
ignore the huge amount of subjectivity, bias, inaccuracy, and guesswork present, it still does not
provide anywhere near a comprehensive overview of risk. Being based only upon crude consequence
and likelihood criteria, the matrix omits a huge amount of information which effective decision-making
about risk requires.

Conclusion: fact or fallacy

“The risk matrix provides a comprehensive overview of risk”: is a fallacy.

“Tells us what are the most pressing issues” and “focuses decisionmakers on
the highest priority risks”
Notwithstanding all of the other issues about accuracy and consistency, can a risk matrix really tell us
which are the “most pressing issues”? Even if there was some way to improve the overall accuracy of
‘placing’ individual risks into the relevant cells of a matrix, we would still end up with groups of risks
clustering into the same cells, with the same qualitative ‘value’.

How can we then rank and prioritise risk based on the outputs of a risk matrix, when whole collections
of different risks could be rated identically, especially where these risks could be rated very differently
by almost every other reputable risk assessment technique available?

With this problem of substantial clustering of different risks all with the same rating39, how does this
help decision makers focus on the highest priority risks? How can one use only the matrix to assign
even a simple prioritisation of #1, #2, #3, etc. to such a cluster of risks? It is simply not possible without

39
I have seen risk reports listing 30+ risks all rated ‘high’, with an even greater number rated as ‘significant’.

referring to other attributes of the risk (duration, recoverability, capacity to absorb, speed of onset,
etc.). Irrespective of any other faults that one could assign to quantitative risk values, they at least
allow for an absolute ranking.
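A small sketch of this point: five hypothetical risks that a matrix rates identically still have an unambiguous expected-loss ordering. All names and figures here are invented for illustration:

```python
# Five hypothetical risks that all land in the same matrix cell ('high'),
# yet have an unambiguous quantitative ordering by expected loss.

risks = {
    "supplier failure": (8_000_000, 0.40),
    "data breach":      (12_000_000, 0.35),
    "plant outage":     (9_000_000, 0.50),
    "fraud event":      (15_000_000, 0.33),
    "IT failure":       (10_000_000, 0.48),
}

matrix_ratings = {name: "high" for name in risks}  # the matrix cannot split them

# Expected loss (consequence x probability) gives an absolute ranking:
ranked = sorted(risks, key=lambda n: risks[n][0] * risks[n][1], reverse=True)
for i, name in enumerate(ranked, 1):
    c, p = risks[name]
    print(f"#{i} {name}: ${c * p:,.0f}")
```

The matrix output is a five-way tie; the quantitative measure immediately separates the cluster into a usable priority order.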

As this Whitepaper has discussed previously, the risk matrix is just too blunt an instrument.

Conclusion: fact or fallacy


“The risk matrix tells us what are the most pressing issues”: is a fallacy.
“The risk matrix focuses decisionmakers on the highest priority risks” is a fallacy.

“Provides a real time look at the risk landscape”


I am really unsure how a risk matrix provides a ‘real time’ look at any risk. Surely it is the overall risk
assessment process, and how it is designed and applied, that determines how close to “real time” any
consideration of risk is capable of getting.

Perhaps the author of this claim about the risk matrix believed that the supposed ‘simplicity’ and ‘ease
of use’ lends itself to a quick analysis. Well, there are many other techniques that are almost as simple
and easy to use as any matrix, and so would equally lend themselves to real time use. In fact, the
sources of changing quantitative data, say in financial or economic data, could be fed directly into a
risk algorithm and provide continuous real time reporting, without having to go anywhere near a risk
matrix.
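By way of illustration, such a direct quantitative feed might look like the following sketch. The `expected_loss` function, the figures, and the idea of a credit-style exposure feed are all assumptions for illustration, not a real risk algorithm:

```python
# Sketch: quantitative inputs feeding a simple risk calculation directly,
# with no matrix step. Figures and the update rule are illustrative only.

def expected_loss(exposure, event_probability):
    """A continuously recomputable quantitative risk measure."""
    return exposure * event_probability

# Simulated stream of (exposure $, probability) observations over time:
feed = [
    (50_000_000, 0.020),
    (52_000_000, 0.022),
    (51_000_000, 0.031),  # a shift a coarse matrix cell would likely not register
]

for exposure, p in feed:
    print(f"expected loss: ${expected_loss(exposure, p):,.0f}")
```

Each new observation produces an updated risk figure immediately, so gradual drift is visible continuously rather than only when a value finally crosses a qualitative band boundary.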

Again, it is necessary to consider the relative insensitivity of the risk matrix, which means that the tool
is almost incapable of detecting the often subtle level of change in risk that could occur in real time.
A matrix may have sufficient sensitivity to detect large-scale changes of risk that occur over a
long time frame. However, real-time monitoring, where risk may change gradually over days or weeks,
will generally be beyond the capability of most traditional risk matrices (again depending upon the
context in which they are applied).

The great majority of risk matrices that are used today appear to be in direct conflict with the core
concept and definition of risk as espoused by ISO 31000:2018: that risk-related consequences can be
either negative or positive, or both. Yet most risk matrices are created with just negative
consequences, to provide outputs of negative risk ratings (i.e., based on the likelihood of an expected
loss). How then can the traditional risk matrix provide any real-time insight into changes in the nature
of risk’s positive outcomes?

Conclusion: fact or fallacy

“The risk matrix provides a real time look at the risk landscape”: is a fallacy.

“Breaks down risk into its most important facets”

The risk matrix by its very design assigns de facto equal importance weightings to consequences and
likelihoods in corresponding parts of the matrix. For example, there is an innate assumption that a risk
with a ‘medium’ consequence (score 3) and ‘low’ likelihood (score 2) is equivalent in level (and
importance) to a risk with ‘low’ consequence (score 2) and ‘possible’ likelihood (score 3)40. However,
it is not uncommon to find that this is a false equivalence when a risk matrix is validated against robust
quantitative data. Following on from this, how do we even know that a matrix is breaking down risk
to its most “important facets”, when we cannot even determine with reasonable confidence the
relative importance of even two of these values?

Furthermore, consequence and likelihood are conceptualised in the risk matrix as
highly simplistic abstractions of a range of very different properties (Kosovac et al., 2019), where
important information about risk is lost during the abstraction. In the Whitepaper’s earlier section
“explains risk in a clear way”, a number of other facets of risk were considered that could be as
important as the simplistic abstraction that the risk matrix depends upon (and in some cases more
so).

Conclusion: fact or fallacy

Whilst acknowledging that measures of consequence and probability are core factors considered in
risk quantification, the claim that “The risk matrix breaks down risk into its most important facets ”:
is a fallacy.

“Provides a low cost way of measuring risk”


At first glance, this claim seems reasonable, since it costs little to:

• ‘Borrow’ the design of someone else’s risk matrix.
• Spend 30 minutes drawing your own risk matrix using PowerPoint.
• Train someone to be an analyst where they only need to know how to align a subjective
consequence with a subjective likelihood.
• Employ someone with no deep experience or education in probability theory, or core STEM
subjects, since no great deal of existing expertise is required to use a risk matrix.
• Spend minutes analysing each risk on just two parameters, instead of the hours required to deeply
explore the nature of each risk.

However, ‘on the flip side of the coin’, what is the cost of advising, making decisions, and designing
and implementing control changes, when you are relying on information that is likely to be superficial,
incomplete, inaccurate, and substantially divorced from reality?

What is also not factored into this cost efficiency claim are the very real costs of properly designing,
testing and validating a risk matrix, and properly calibrating users before actual analysis should be
commenced. When done properly, the development of an effective risk matrix is neither a simple nor

40
For example, as in the matrix in Table 2.

cheap exercise. It is just that so few risk professionals have taken the time and effort to design and
develop a robust, validated risk matrix that the real costs of doing so remain largely unexplored.

Even more to the point, do those individuals who make this claim about risk matrices really know how
much extra it costs to undertake more meaningful alternative ways of assessing risk? It may be
surprising to many just how inexpensive some of these alternative analytical techniques are in
practice. Although it is likely that there will need to be a significant investment in uplifting skills for
many practitioners to use more robust tools.

Conclusion: fact or fallacy

“The risk matrix provides a low cost way of measuring risk”: is a fallacy.

“Easy to construct”

I have to agree, it is very easy to construct any old risk matrix. If you can use MSWord, PowerPoint,
Excel, their Mac equivalents, or other applications, constructing a risk matrix takes a matter of
minutes. There are even some software packages that will automatically build a risk matrix based on
answering a few questions, whilst some other software packages come with an already constructed
risk matrix which automatically ‘calculates’ the level of risk. But will such risk matrices be any good?

A ‘good’ risk matrix cannot just be cobbled together. Its construction requires a well thought-out
design process, and needs to reflect (and be validated for) the type of specific contexts in which the
matrix41 is to be used. This means taking a strategic approach to matrix design, not just jumping
straight into its drafting. The construction of risk matrices is significantly affected by a variety of
factors, including:

• The risk attitude of those analysts and decision-makers involved in its design and
construction (Ruan et al., 2015).
• The risk appetite at the time of construction (Figure 4)42.
• Recency and nature of unexpected events43, unexampled events, unwanted events,
unacceptable events, desired events, expected events, and notable successful events.
• Cognitive and emotional biases.
• Peer influences.
• Availability of exampled risk matrix approaches.
• Level of experience and competency of risk analysts and decision-makers.
• The availability and quality of historical data upon which testing and validation can be
undertaken.

41
It is a central tenet of ISO 31000 that risk management (and by extension risk criteria and matrix) need to be
adapted to the context within which it will be applied. This means that risk criteria and risk matrix developed
(and hopefully validated) at an enterprise level, may not be suitable at a project level or domain level (for
example, for OHS, for specific types of operation, etc.). Criteria and matrices need to be validated for the different
contexts in which they will be applied, and if necessary customised to those contexts.
42
Individual and organisational appetite for risk would be expected to be reflected in the construction of the
risk matrix. For example, fewer risks would qualify as high (red) where there was a ‘high risk appetite’ compared
to a situation where there was a ‘low risk appetite’. However, this raises other complications in the use of risk
matrices. For example, where an organisation has different appetites for different categories of risk, can a single
risk matrix be used across all of the different risk categories?
43
Where ‘events’ includes actual and potential occurrences, situations, conditions, and scenarios.

Figure 4: Hypothetical alignment of risk appetite and the construction of risk matrices

Why a square matrix?

A question that I do not think has been answered adequately by anyone is: why are risk matrices
almost invariably ‘square’? I think that most of us have some understanding that there is a range of
different formats of risk matrix, e.g., ‘3 x 3’, ‘4 x 4’, ‘5 x 5’, ‘8 x 8’ and so on, with the choice either
associated with a need for greater or lesser granularity, or made because other specific people or
organisations have chosen that size of matrix. However, why do we not see non-square risk matrices
more often: why not 6 x 4, or 8 x 5, or some other combination? On the traditional square matrix,
each axis has a different attribute and a different arithmetical range, so why the same number of rows
and columns?

Following on from this ‘squareness’, many risk matrices are constructed with an assumption of internal symmetry along the diagonal (i.e. from “rare”/”insignificant” to “almost certain”/”severe”, as in Figure 3). This results in obviously different risks being assigned the same overall risk rating. For example (referring back to Figure 3 again), consider four different risks: risk A (“almost certain”/”minor” = “high”); risk B (“likely”/”significant” = “high”); risk C (“possible”/”major” = “high”); and risk D (“rare”/”severe” = “high”). All four would be assigned the same importance based on the output of the risk matrix. Translating this into some safety scenarios, the very real chance of a small cut to a finger (risk A) would be comparable to:

• Someone requiring hospitalisation over the next year.
• Someone dying over the next five or so years.
• Multiple mass fatalities over the next 10 or more years.
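To see the scale of this equivalence, the sketch below attaches numbers to four ‘high’-rated diagonal cells and computes the implied expected loss for each. All probability and dollar values are my own assumptions, chosen purely for illustration:

```python
# Hypothetical expected-loss comparison for four risks that a symmetric
# 5x5 matrix would rate identically as "high". All probability and
# consequence values below are illustrative assumptions, not standard values.

risks = {
    # label: (annual probability, consequence in $)
    "A: almost certain / minor": (0.9, 1_000),
    "B: likely / significant":   (0.5, 100_000),
    "C: possible / major":       (0.1, 5_000_000),
    "D: rare / severe":          (0.001, 500_000_000),
}

for label, (p, c) in risks.items():
    print(f"{label:27s} expected loss = ${p * c:>11,.0f}")
```

Despite the identical qualitative rating, the implied expected losses here span roughly three orders of magnitude (about $900 for risk A versus about $500,000 for risks C and D), and the outcome distributions (a near-certain small injury versus rare mass fatalities) differ even more starkly.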

Aversion and the risk matrix

This situation also raises the issue of risk aversion. Irrespective of any calculated financial losses associated with any of the above scenarios, most people would be naturally more averse to a fatality occurring (at almost any feasible probability) than to a more likely hospitalisation occurring. That is because, as individuals, we tend to focus more on the consequence than on the probability, especially when the consequences are large (Taylor and Weerspana, 2009); this is one of the reasons that gambling on lotteries is so popular. Similarly, we tend to pay more attention to consequences that are unfamiliar to us, or that we have a greater fear of.

It is not uncommon for an organisation to mandate a single risk management approach, to be deployed and employed across the organisation. Such approaches often have at their very centre a single risk matrix44, with the expectation that this will be used across all of the organisation, for all types of risk, and across all different types of context.

Yes, anyone can design a risk matrix in a matter of minutes, without any real understanding of how it works or what the outputs actually mean. Designing and constructing a risk matrix that does not suffer from the wide range of flaws and issues described in this Whitepaper (and by numerous other authors) is difficult. Effective and robust design is complex, and takes time, knowledge, and skill; it is most certainly not ‘easy’.

Conclusion: fact or fallacy

“The risk matrix is easy to construct”: is a fallacy.

“Simple to use”
The intuitive appeal of the risk matrix, and possibly the overriding driver of its popularity is that the
matrix is deceptively easy to use45. Even where the user has difficulty using single word descriptors for
the risk level (e.g., ‘high’ low’, etc.), the matrix’s axes can be colour coded to drive the analyst’s
judgment.

If one is only considering a single risk, with a single type of consequence at any one time, then yes, there is an argument that the risk matrix is simple to use. However, many individual risks have multiple types of consequence. For example, consider the following fairly straightforward scenario:

A large household brand-name retail company has recently replaced all of its display shelving, in all of its outlets across the country, with a cheap, overseas-manufactured product. Reports have started to come in from retail outlets across the country that if any force is applied to the slip rods on the shelves, such as a child pulling on them, they easily fracture, leaving sharp metal edges protruding from the shelf. There will be significant costs associated with reengineering or replacing the display units. A decision needs to be made, informed by a risk assessment, about whether to take action now or just repair any rods once they have broken. Figure 5 represents the analysis of just one risk scenario, with ratings derived from a qualitative analysis using a matrix. What is the level of risk?

44 From exposure to risk management approaches in several hundred organisations, I would say that at least half of these organisations try to use the same identical risk matrix as the magic bullet for all risk assessments.
45 By which I mean that its apparent ease of use is deceptive!

Figure 5: Hypothetical risk scenario

In real life, I have seen risk practitioners answer this question in multiple different ways:

1) Do not specifically address each of the consequences, but use ‘professional judgment’ to estimate an overall consequence and likelihood. This is problematic for several reasons:
• exercising ‘professional judgement’ would seem to ignore the actual ratings and could just involve an outright guess at the level of risk.
• we know from other discussions in this Whitepaper that different analysts will perceive and judge the risk differently from others, even with all of the same data available and using the exact same tools and techniques. Even one individual analyst could derive a different overall level of risk if they were to repeat the same analysis at a different time.

2) Poll stakeholders to provide a consensus overall risk rating (this would involve reaching agreement on an overall position based upon each stakeholder’s individual judgement about the risk).

3) Determine the consequence level for each type of consequence, and then:
i. average out the overall consequence level (e.g. in Figure 5 this could be estimated as close to ’significant’, with three of the ratings at ‘major’, two ratings at ‘significant’, and three ratings below ‘significant’). This is a particularly problematic approach, especially if the scenario would result in each of the consequences being realised: the consequences above the average (all the ‘major’ consequences) would effectively be ignored, substantially reducing the risk level. Or,
ii. taking the highest consequence level to set the overall consequence (in this scenario, the overall consequence would therefore be ‘major’). However, if multiple other consequences were also to occur simultaneously, then the overall consequence could easily exceed a ‘major’ level. Or,

iii. taking the highest-level consequence, and then adjusting it upwards depending on the other consequence levels that have been included. Again, this is problematic: by how much should the overall consequence level be elevated?

With any of these approaches, there would need to be clear interpretation rules to guide assessment
judgements. However, these rules would need to consider a wide range of different scenarios, as the
way in which risk outputs are consolidated and aggregated could well be different under different
contexts. Using the risk matrix has now become a lot more complex and a lot more difficult to use,
with this need for detailed guidance. Consolidating and aggregating risk data is a lot easier and more
repeatable using simple quantitative methods combined with narrative approaches.
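To illustrate how different such interpretation rules can be, the sketch below encodes three plausible consolidation rules (average, maximum, and sum of the individual expected losses) for a single multi-consequence scenario. The consequence categories, dollar values, and likelihood are hypothetical, not taken from Figure 5:

```python
# Three ways of consolidating multiple consequence types for one risk
# scenario. All dollar values and the likelihood are illustrative assumptions.

consequences = {  # consequence type: estimated loss in $ if realised
    "injury claims":    2_000_000,
    "product recall":   3_000_000,
    "reputation":       1_500_000,
    "lost sales":         400_000,
    "regulatory fines":   100_000,
}
likelihood = 0.3  # assumed probability the scenario is realised

avg_rule = likelihood * sum(consequences.values()) / len(consequences)
max_rule = likelihood * max(consequences.values())
sum_rule = likelihood * sum(consequences.values())  # all consequences co-occur

print(f"average rule: ${avg_rule:,.0f}")  # under-weights the worst outcomes
print(f"maximum rule: ${max_rule:,.0f}")  # ignores all but the worst outcome
print(f"sum rule:     ${sum_rule:,.0f}")
```

For the same scenario the three rules differ by a factor of five (roughly $420k, $900k, and $2.1m respectively), which is precisely why explicit guidance is needed before matrix outputs can be meaningfully consolidated.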

Judgment: introducing more variability into the matrix

Ultimately, in many types of risk analysis, judgements are made about likelihood/probability and consequence to derive a level of risk. However, estimating probability is an intrinsic human weakness, subject to various cognitive and emotional reasoning flaws (Branch and Hegdé, 2023; Benjamin, 2019; Booth and Sharma, 2019; Clemen and Riley, 2014; Hammond et al, 1998; Tversky and Kahneman, 1974), an issue that is ignored in the ‘easy construction’ of a risk matrix. Because there is a common perception that building a risk matrix is easy and that anyone can do it, many attempts at construction seem to disregard the whole range of factors that could actually improve the application of risk matrices (see below).

I hear all of the time, often from so-called ‘experts’, that we need to keep risk management ‘simple’ so that it can be easily applied by anyone. Recently, this included a senior person in an overseas risk management-related peak body categorically stating that we have to make risk management so simple that anybody could pick up ISO 31000 and deliver the process. Why is this drive to absolute simplification acceptable for risk management, but recognised as dangerous in other professions? I would say that, yes, we want to make the management of risk as simple as it can be so that effective practices can be more widely adopted; but there are many concepts about risk that are not simple, and if these concepts are misunderstood or ignored, then the outputs of risk assessment will be dubious at best.

Should anyone have to rely on the results and advice of someone who has only read 14 pages about risk management (in a Standard), with no experience, no training, and unproven competences? Would any of us want to drive a car or fly in an aircraft whose engineering and systems have been built by unqualified technicians and quality controlled by hazy evaluation?

Risk matrices are simple and intuitively easy to use because the need for knowledge, understanding, and skills in risk assessment has been ignored. Accordingly, the use of risk matrices has been implicated in the provision of poor guidance to decision makers, and risk matrices have been found to “present critical and potentially damaging intrinsic problems” (Oboni and Oboni, 2012).

The risk matrix may seem easy to use, but using it effectively is often complex and time consuming.

Conclusion: fact or fallacy

“The risk matrix is simple to use”: is a fallacy.

“Quick to use”

Whilst it may be quick to apply a particular risk matrix to the analysis of risk within a single context, it needs to be remembered that the matrix needs to be constructed for the specific context (or range of contexts) in which it is to be used. If a matrix is used in a context for which it has not been designed and validated, then it needs to be evaluated for that context, and adjusted and revalidated if necessary. All of which can substantially increase the time needed to use a risk matrix ‘properly’.

However, in many organisations the same risk matrix is applied across many different contexts,
irrespective of its suitability. Furthermore, even within the same context, different types of risks may
require the application of differently constructed and validated risk matrices46 for them to be of any
real use. Using a tool such as the risk matrix that requires extensive testing and change with every
different contextual application is hardly a time-saving mechanism.

Conclusion: fact or fallacy

“The risk matrix is quick to use”: is a fallacy.

“Intuitive”
The traditional risk matrix is highly intuitive, which is one of the reasons for its enduring appeal and wide adoption. However, the construction of the risk matrix seems to lead to an intuitive avoidance, during analysis, of the more ‘extreme’ risk ratings, especially those found in the corners of the matrix, for example:

• High impact/low likelihood (e.g., bottom right corner in Figure 1): in part because we have problems in conceiving the nature of exceptionally rare events, including so-called ‘black swans’ (Taleb, 2007). There is also a general tendency to conservatively assess consequence (and risk) away from the highest ratings (Busby and Kazarians, 2018).
• High impact/high likelihood: such situations should have been identified by good situational awareness and already be receiving management/Board attention long before they feature in a risk assessment.
• Low impact/high likelihood: generally believed to be dealt with as part of routine day-to-day operations and decision making, and so would not gain the analyst’s attention.
• Low impact/low likelihood: often perceived as trivial by the analyst, and so would tend to be disregarded.

Collectively, these factors would exert both conscious and unconscious influences to ‘push’ risk ratings
more into the centre of the matrix.

Conclusion: fact or fallacy


“The risk matrix is intuitive”: is true, but in that truth lies a problem, because this is not expert intuition; it is due to a deceptive simplicity innate to the risk matrix.

46 For example, where materially different sensitivities to different types of risk may be present, or where there are substantial differences in the types and reliability of the input data used. If the same risk matrix is used to analyse risks associated with climate change effects on community stability and for hedging risk, then the matrix would have to be validated for both areas of risk.

“Requires no expertise or prior knowledge to understand a risk matrix’s
outputs”

Consider a typical matrix (Figure 6), which indicates that several different ‘cells’ within the matrix (with different individual consequence and likelihood ratings) have the same colour and label coding (e.g., red = “very high”), and so all have the same qualitative level of risk and importance. Simple logic would indicate that something is suspect when an increase in the consequence (e.g., moving horizontally from one cell to the next cell on the right) still produces the same risk rating. For example, in the ‘Possible’ row, moving from a ‘Minor’ consequence to a ‘Significant’ consequence, the risk is still rated as ‘Medium’. If, for the same likelihood, the potential consequence increases, then logically the risk must increase; but this is not always the case in the risk matrix.

Figure 6: Typical 5x5 matrix

However, if we substitute the qualitative labels with ‘equivalent’ quantitative range values, we generate Figure 747.

Figure 7: Quantitative range substitution

As is apparent, identically rated qualitative risks can have widely different quantitative values. For example, individually rated ‘Moderate’ risks (colour coded ‘yellow’) could have values ranging from ‘$0 to $7,000’ in one cell up to ‘$200k’ in another, with some substantial overlap with some of the ranges for ‘High’ and ’Very High’ risks.

47 I have substituted $ ranges for consequence criteria, but for the sake of simplicity have left the likelihood as single-point likelihoods. It does not matter what values are selected for this substitution; similar effects will be seen.

Notwithstanding the significant problem of a lack of comparability across different risks, the very act of reducing two dimensions of comparability into a single descriptive word (e.g., ‘High’) creates the impression that all risks with the same ‘rating’ are of equal importance, despite the wide range of other properties that may also characterise (and differentiate) these individual risks.

This type of range compression (e.g., where the corresponding quantitative values within a single cell of a matrix can commonly span a 10 to 1,000-fold or greater range) has been reported in the scientific literature as a significant issue in using risk matrices (Vatanpour et al, 2015; Thomas, 2014). In such range compression, risks with substantially different quantitative values (differing by orders of magnitude) can all have the same qualitative risk rating.
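Range compression is easy to demonstrate numerically. The cell boundaries below are hypothetical, but the effect (identically rated risks whose quantitative values differ a hundred-fold) appears with almost any plausible choice of ranges:

```python
# Two risks that fall in the same cell of a hypothetical semi-quantitative
# matrix, and so receive the same rating, despite a 100-fold difference in
# expected loss. The cell boundaries are illustrative assumptions.

cell_consequence = (100_000, 10_000_000)  # $ range spanned by a single cell
cell_likelihood = 0.1                     # single-point likelihood for the row

loss_bottom = cell_likelihood * cell_consequence[0]
loss_top    = cell_likelihood * cell_consequence[1]

print(f"same cell and rating: ${loss_bottom:,.0f} vs ${loss_top:,.0f}")
print(f"ratio within one rating: {loss_top / loss_bottom:.0f}x")
```

Both risks would be reported identically, yet they differ by two orders of magnitude in expected loss.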

Conclusion: fact or fallacy

“The risk matrix requires no expertise or prior knowledge to understand a risk matrix’s outputs” is a
fallacy. Yes, anybody can pick up a risk matrix and use it, but to use a matrix effectively requires a
deep understanding of risk, and actual analytical expertise.

“Helps to visualise the risks”

There are two very different concepts embedded within this supposed benefit.

Firstly, the actual shape of the risk matrix can have profound subconscious effects on how risk information is analysed and interpreted. Look at any of the example risk matrices displayed in this Whitepaper. Despite each one being a 5 x 5 matrix (and technically a mathematically square matrix), none of them is actually ‘square’ in visual perspective. In fact, almost all of the risk matrices I have seen have had one axis (usually the top axis, which is commonly ‘consequence’) more extended than the side axis. This visual elongation of an ostensibly square matrix usually occurs in order to fit the words entered into the cells (which are ‘wider’ than they are ‘deep’). This visual size difference apparently creates perceptual differences between likelihood and consequence, influencing users to unconsciously assign more relative importance to the longer axis (consequence) than to the shorter axis when using the matrix (Sutherland et al, 2022; Woodruff, 2005).

Secondly, the relative dimensions of each cell within a matrix influence the perception of the relative value and importance of the risk. In the most commonly used risk matrices, each and every cell has the same horizontal and the same vertical dimensions, further strengthening the perception of linearity and of equivalent step changes from any one cell to any neighbouring cell.

However, this is not usually the case. Considering just a move from left to right across any row, in increasing consequence ratings, there is a non-linear change in magnitude, often a 5 to 10-fold or greater increase in value between each cell of the matrix. Thus, true differences between cells are visually hidden by the physical layout of the matrix. When there is a disconnect between a visual representation and its factual basis, the observer finds the information more difficult to understand and is more prone to making errors (Shah and Hoeffner, 2002). In terms of the risk matrix, this incongruity between the physical size of the cell and its (explicit or implicit) rating values may be enough to produce bias priming and cognitive interference (Oppenheimer et al, 2008; Tzelgov et al, 1992) in users of the matrix.

In fact, it can be argued that the use of the risk matrix achieves the exact opposite of helping to
visualise risks. The very construction of a risk matrix with cells of equal size and distance from each
other, irrespective of relative values and differences, can only serve to mislead the user and observer.
Such a phenomenon has been termed the ‘lie factor’ (Tufte, 2001; 2006), where graphical displays,
such as the risk matrix, fail to represent distances on the graphic that are proportional to the
quantitative values they are ‘replacing’.

There could be a counter argument that the use of the ‘matrix’ as a ‘heat map’ provides a very easy
to read overview of the importance of risks relative to each other. However, these same problems of
false linearity can mean that risks that appear close to each other on the ‘heat map’ can have widely
different quantitative values.

Furthermore, most matrices use ranges for both consequence and likelihood. However, when displayed on a majority of ‘heat maps’, risks are shown as a set of equally sized ‘bubbles’, ignoring the range of values that each mapped risk represents. Rarer still are heat maps that give an indication of the uncertainty present within each single risk-level point value. Plotting such uncertainty onto a heat map could reveal substantial overlaps in ‘risk values’ between otherwise widely visually separated risks.

Conclusion: fact or fallacy


“The risk matrix helps to visualise risks”: yes, the risk matrix can help to visualise the estimated importance of risks relative to each other, but this is often an inaccurate and misleading visualisation that hides (or ignores) important information, information that could dramatically change the relative levels and importance of a reported collection of risks. In conclusion, the claim is largely a fallacy.

“Colour coding system makes it easy to interpret”

Colour coding of information certainly helps data visualisation in many situations, and can readily alert the user to more important, somewhat important, and much less important data (such as through red/yellow/green ‘traffic light’ reporting). Such colour coding is a common feature of the majority of risk matrix approaches, although some use three colours, some four, and some five (sometimes more), aligned to different levels of risk rating. However, as with criteria labels (such as ‘high’, ‘medium’, etc.), some differently ‘coloured’ risks may be quantitatively very similar, and conversely some identically colour-coded risks could have widely different quantitative values (Thomas et al, 2014). Not all identically coloured risks are equal, whilst some differently coloured risks may be very similar in relative level and importance.

Whilst undoubtedly providing a very rapid visual guide to different levels of risk, the use of colours
can also introduce additional problems which can lead to errors in analysis and misinterpretation of
risk information. There is a well-known phenomenon in psychology, the Stroop effect (Stroop, 1935), where individuals have to read and say out loud a set of single colour words (such as ‘red’, ‘black’, ‘yellow’, ‘green’, ‘orange’, etc.). If the words are coloured differently (e.g., the word ‘red’ printed in blue ink), then it takes longer to read and say the mismatched word than when the word and the colour match. There is also a tendency, in such mismatches between words and colours, for the actual displayed colour to take precedence over the written word, i.e. an individual will read the colour rather than the word itself. We are therefore more cognitively primed to rely on visual colours than on the words associated with them when the two are provided together. Thus, we are likely to pay much more attention to risks coloured red or yellow, and to assign them a much higher level of importance, than to risks coloured blue or green, despite any closeness of the risk criteria values.

There is some evidence that using colour coding in conjunction with additional information about consequence and likelihood can result in better decision outcomes (Mu et al, 2023), compared with just relying on colour coding and a label. The effect of colour coding can be particularly acute when a risk crosses a colour separation boundary, for example when estimating the effect of a proposed treatment on the level of risk. Recently published work on fuzzy-trace theory48 shows that there is also a significant preference49 for choosing risk reduction options that result in cross-boundary movements (i.e., that result in a change in the colour rating for the risk) over, say, comparing the reduction in likelihood or consequence (Proto et al, 2023). In real terms, this could mean that a risk treatment that reduced the quantitative risk from $100 million down to $50 million (where the reduced risk remains coloured red, although the rating level changes) could be perceived as less attractive than a risk reduction of $100,000 that crossed a colour boundary (say from ‘yellow’ to ‘green’).

There is a whole scientific discipline dedicated to colour psychology: how colours can affect our decision making and behaviours (Xia et al, 2022; Leong et al, 2019; Voss et al, 2019; Silic and Cyr, 2016; Dzulkifli and Mustafar, 2013; Benbasat et al, 2019). These effects are generally unconscious and exert substantial influence. There is a reason why well-known brands such as Coca Cola and McDonalds use red and yellow colour palettes instead of blue or green: it is all about influencing consumer perception and decisions (Yu et al, 2021; Cheng et al, 2009; Chebat and Morrin, 2007).

Matrix colour coding could have emotional and cognitive effects on the analyst and the reader far beyond just highlighting the relative importance of a risk, influences that may have unintended effects on decisions.

Conclusion: fact or fallacy


“The risk matrix colour coding system makes it easy to interpret”: yes, a matrix’s use of colours provides an intuitive interpretation of the risk. However, how realistic and accurate is this interpretation, and how valuable is it for effective decision-making? The intuitive response is probably a fact, but the usefulness of the product will always be uncertain, and the conclusion is probably less fact and more fallacy.

“Allows easy quantification”


The most commonly used risk matrix is qualitative, which is the antithesis of any quantification; in which case the claim is most certainly a fallacy. However, what about ‘semi-quantitative’ or ‘quantitative’ risk matrices allowing for easy quantification? If the analyst has access to meaningful quantitative data, then it is far simpler to perform a simple consequence x likelihood calculation (or a range calculation) than to try to place this data into the wide-spanning dual range values of a matrix.
Furthermore, if we look more closely at the mathematics of ‘semi-quantitative’ matrices, as discussed in earlier sections of this Whitepaper, such applications often produce nonsensical mathematical outputs, which means that the claim is most certainly a fallacy.

48 Fuzzy-trace theory proposes that when individuals process risk information, they use various different cognitive processes which differ markedly in terms of accuracy. This can be problematic because, in selecting different options, there tends to be a decision-making preference for the simplest information that provides a clear differentiation (Reyna and Brust-Renck, 2020). This simplest information is also usually the most inaccurate.

49 Surprisingly, a more pronounced effect in highly numerate individuals.
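A ‘range calculation’ of the kind mentioned above needs nothing more than interval arithmetic: multiply the low and high ends of the consequence range by the low and high ends of the likelihood range. The input ranges here are assumptions for illustration:

```python
# Interval arithmetic for an expected-loss range, with no matrix involved.
# Both input ranges are illustrative assumptions.

consequence = (500_000, 2_000_000)  # $ low/high estimate
likelihood = (0.05, 0.20)           # probability low/high estimate

# For positive intervals, the bounds of the product are the products of the bounds.
expected_loss = (consequence[0] * likelihood[0],
                 consequence[1] * likelihood[1])

print(f"expected loss range: ${expected_loss[0]:,.0f} to ${expected_loss[1]:,.0f}")
```

This preserves the spread of the estimate ($25,000 to $400,000 here) rather than collapsing it into a single cell and label.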

One of the key issues with using tools such as the risk matrix, based on just the two parameters of consequence and likelihood, is whether we are really just using the term ‘risk’ when all that we are actually estimating is the expected loss in most circumstances50 (Moat and Doremus, 2020). After all, expected loss is calculated as consequence x probability, and that is all that the risk matrix is trying to do, in either qualitative or ‘semi-quantitative’ terms. This raises several questions, including:

• Is risk something more than just a statement of expected loss?
• Is risk a combination of expected loss and an expression of the uncertainty about that expected loss?
• Does risk include consideration of confidence limits?
• Does risk include the modelling of different future conditions that could give rise to different ranges of expected loss?
• Does risk include a consideration of the presence and potential future emergence of vulnerabilities that could influence the magnitude, range, and duration of losses?
• Is risk the described scenario within which expected losses occur?
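The first of these questions can be made concrete with a hypothetical pair of risks that reduce to the same consequence x probability product:

```python
# Two hypothetical risks with identical expected loss but very different
# profiles: a fairly frequent moderate loss versus a rare catastrophic one.
# All values are illustrative assumptions.

risk_frequent = {"probability": 0.10,   "consequence": 1_000_000}
risk_extreme  = {"probability": 0.0001, "consequence": 1_000_000_000}

for name, r in (("frequent/moderate", risk_frequent),
                ("rare/catastrophic", risk_extreme)):
    expected_loss = r["probability"] * r["consequence"]
    print(f"{name:17s} expected loss = ${expected_loss:,.0f}")

# Both work out at $100,000, yet an organisation that could absorb a $1m
# loss but not a $1bn loss would treat these two risks very differently.
```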

This then raises the question, does any ‘easy quantification’ using a matrix really tell us anything
deeply informative about risk?

Conclusion: fact or fallacy


“The risk matrix allows easy quantification” is a fallacy.

“Facilitates a good understanding of risk” and “produces an easy to read and understand risk rating”

A great deal of the previous discussion in this Whitepaper has demonstrated the severe limitations of
the products from using risk matrices, and how misleading these products can be for understanding
and decision making.

In almost any risk assessment there will also be uncertainty associated with the inputs into that assessment. Does the risk matrix provide any understanding of the extent of this uncertainty, and how does it tell us how this variability in inputs translates into variability in outputs?51

As previously discussed, the risk matrix only considers two factors in assessing risk (consequence and
likelihood – in a linear relationship), yet this is often only part of the information available about the
nature of the risks under consideration. This also raises the question of what a risk rating actually
means, and how relevant it is to making decisions about risk. A majority of risk practices still rely on a
risk matrix, and hence the users of the risk products are making decisions based upon an artificial and
simplistic consequence and likelihood alignment, although many users are just ticking the box and ignoring the risk product in their decisions. I have conducted numerous ad hoc surveys of analysts and decision makers about how risk information is used. There is a wide range of assumptions made about risk ratings, although with the common theme that many practitioners and their ‘customers’ have a poor understanding of what that risk rating actually means for their decision making.

50 Assuming for the moment that we are only interested in losses and other negative consequences, which may not always be the case.

51 For example, by using uncertainty analysis (Avila, 2015; Mahadevan and Sarkar, 2009; Paté-Cornell, 1996; NRC Committee, 1994).

Conclusion: fact or fallacy


“The risk matrix facilitates a good understanding of risk” is a fallacy.
“The risk matrix produces an easy to read and understand risk rating”: yes, a rating is easy to read, but that it is easy to understand (in a meaningful way) is largely a fallacy.

“Provides an accurate measurement of risk”


The risk matrix has become the most ubiquitous tool used in qualitative risk analysis worldwide. In
addition to those issues that are directly attributable to solely using a matrix, qualitative analysis itself
has been shown to introduce significantly greater error than quantitative analysis (Cox et al, 2005).
This includes:

• “Reversed rankings”: situations where ‘lower quantitative risks’ are assigned ‘high qualitative’ risk levels.
• “Uninformative ratings”: situations where the highest qualitative ranking is assigned to risks with low quantitative values, while that same rating is also assigned to other risks which differ by multiple orders of magnitude.

In other words, the research showed that qualitative analysis was unable to discriminate accurately
across risks with multiple different values.
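Cox’s ‘reversed rankings’ can be reproduced with a minimal hypothetical matrix. The rating thresholds below are assumptions chosen to expose the effect; they are not drawn from the cited study:

```python
# A minimal hypothetical semi-quantitative matrix exhibiting a "reversed
# ranking": the risk rated qualitatively higher carries the lower
# quantitative expected loss. Thresholds are illustrative assumptions.

def rate(probability: float, consequence: float) -> str:
    """Assign a qualitative rating from assumed cell thresholds."""
    p_high = probability >= 0.5
    c_high = consequence >= 100_000
    if p_high and c_high:
        return "High"
    return "Medium" if (p_high or c_high) else "Low"

x = (0.50, 100_000)   # just over both thresholds
y = (0.45, 900_000)   # just under the probability threshold

print(rate(*x), x[0] * x[1])  # rated High, expected loss 50000.0
print(rate(*y), y[0] * y[1])  # rated Medium, expected loss ~405000
```

The ‘Medium’ risk here carries roughly eight times the expected loss of the ‘High’ risk, exactly the kind of uninformative, reversed ordering described above.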

Multiple studies have shown that even using the same input data and the same methodology at the
same time, different individuals can arrive at very different estimates of risk (Tebehaevu, 2015).

It may also be timely to debunk this claim purely on semantics. Qualitative and ‘semi-quantitative’ matrices can only ever provide a crude estimate of a level of risk (i.e. a relative order); they do not provide anything resembling an actual measurement. Only actual quantitative methods can provide a measurement.

Conclusion: fact or fallacy


“The risk matrix provides an accurate measurement of risk”: is a fallacy.

“Can be used to compare different types of risk” and “can be applied to
different contexts”

The same identical single risk matrix is often applied across an organisation (strategically) and then applied (without adjustment) within different parts of the organisation for operational, tactical, project, and specialised areas of risk (where the context is likely to be very different to that of the whole organisation), and for very different types of risk. A core principle of risk management (also expressed in ISO 31000) is that risk management is customised to the context in which it is applied. This includes adapting, where necessary, the risk criteria that will be used: the very same criteria that provide the foundation (or should do so) for the construction of every risk matrix.

A risk matrix cannot be just blindly applied across multiple different aspects of an organisation. A risk matrix needs to be designed, constructed, and validated for the context within which it will be applied, and it may very well be that a mandated and beloved risk matrix is wholly unsuitable when used in a different context (Busby and Kazarians, 2018).

Conclusion: fact or fallacy


A risk matrix could be used as a comparative tool for different types of risk, but would that comparison have any validity if the context of application is different to the context it was designed for? “The risk matrix can be used to compare different types of risk” may be true in some applications, but in many other uses it is simply a fallacy.

“Provides an objective assessment of risk”


The highly subjective nature of risk analysis using a matrix is repeatedly demonstrated when different
individuals analyse the same data set, using ostensibly the same ISO 31000-based methodology52, yet
come up with widely varying results. This is reinforced for me time and time again in my consulting
work. I am often called in by clients to review earlier work (often produced by a Big 4 firm), and using
the same qualitative methodology with the same matrix and with the same data, generate a
significantly different analysis. I will usually calibrate my analysis with a quantitative review (even
when the client only expects a qualitative view53). I know that many other skilled analysts have similar
experiences regarding matrix and qualitative subjectivity54.

Independent research further supports this experience. In one particular study, three different
consulting companies qualitatively analysed risk at a single hydroelectric power plant, and created
very different analytical outputs (Backlund and Hannu, 2002). Similar variability, arising from the
approach’s subjectivity, was more recently demonstrated in the water sector within Australia. In this
study, 77 ‘water practitioners’ used similar (ISO 31000-based) methodologies and applied ‘semi-
quantitative’ risk matrices for ‘scoring’ across the same sets of project risks (Kosovac et al, 2019). The
study showed that an individual project risk could be rated from low to extreme by different
practitioners, using the same inputs.

Conclusion: fact or fallacy


“The risk matrix provides an objective assessment of risk” is a fallacy.

52
And using the identical risk matrix and risk criteria.
53
To better inform my understanding of the risks and strengthen my own calibration.
54
Yes, there can still be subjectivity in assigning quantitative values in risk analysis, but on a level playing
field, quantitative analysis would still be expected to be less subjective and more easily calibrated.

“Provides a great way of communicating risks” and “facilitates meaningful
reporting to senior decision-makers”

The level of inconsistency in output that is innate to the risk matrix raises substantial doubts about
applying the matrix to robust decision support (Emblemsvåg, 2010). It has been shown that the
results of qualitative analysis using matrix-type approaches are often unreliable (Cox et al, 2005), and
that the matrix has inappropriate granularity for decision making (Busby and Kazarians, 2018). Some
studies (Cox, 2008) even suggest that using a matrix can produce results that are worse than a random
guess.

The risk matrix certainly provides outputs that are simple (simplistic, really) and hence easy to
communicate, but in many cases the potential for misleading information means that it is not a
particularly “‘great’ way of communicating risks”, while the actual meaningfulness and decision-
making value of the reported information is often doubtful55.

Conclusion: fact or fallacy


“The risk matrix provides a great way of communicating risk” is a fallacy.
“The risk matrix facilitates meaningful reporting to senior decision-makers” is a fallacy.

“Can be used by anyone”


Risk-based regulation in many jurisdictions requires organisations to conduct risk assessments, often
using a matrix; some of the associated guidance (provided by government agencies) even provides a
ready-constructed risk matrix to be used. We will ignore, for the moment, the real problem of applying
an identical risk matrix to different types and sizes of organisations operating in different contexts. A
key issue with such an approach is that it requires the use of risk matrices by organisations and
individuals that may have little familiarity and experience in their use (Sutherland et al, 2022). Even
some ‘skilled’ risk practitioners appear to find risk matrices challenging, judging by how frequently
these matrices are abused. What hope is there then for the average individual tasked with conducting
an occasional risk assessment to meet compliance obligations?

On the other hand, practical experience shows that any fool can pick up a risk matrix and generate
‘risk ratings’. Whether those ratings are meaningful seems to be of little concern to the average user,
or apparently some regulators.

Conclusion: fact or fallacy


“The risk matrix can be used by anyone” is actually true.
However, “the risk matrix can be used by anyone, and achieve meaningful results” is a very different
statement, and is most certainly a fallacy.

55
Having spent some time discussing risk with C-suites and Boards, I regularly find that there is a wide variety of
assumptions in place about what reported risk ratings actually mean, and how to use this information in decision
making. A common outcome of this is that senior decision makers often receive a risk report, ‘tick the box’, then
rely pretty much solely on their own intuition to make decisions. These observations are supported by the work
of other authors (Farrel, 2023; Cai Shi and Lucietto, 2022; Kutsch, 2019; Resnik, 2017; Butler et al, 2013; 2011;
Klein, 2011).

“Makes the risk management process more transparent”

Since many users have not really built their risk matrix ‘from the ground up’ using robust data, and
have not rigorously evaluated its construction, the risk matrix in many instances is little more than
a ‘black box’. The average ‘analyst’ and the average user of the matrix outputs have little
understanding about what the matrix is actually doing for them.

In a typical risk assessment (if there is such a thing as ‘typical’!), pages and pages of notes would
be collected from interviewing various staff about the context and issues of concern. That information
is then condensed, ultimately, through the risk matrix to produce a short list of risks, each with a single
(consequence | likelihood) risk rating. With so much data being rendered invisible or unavailable to
the decision-maker, where is the transparency? And that assumes that the individuals using the risk
matrix understand anything about the data that they are selectively including in, and excluding from,
their inputs to the matrix.

Neither the practitioner nor the decision-maker usually give much thought to what is actually
happening within the black box of the risk matrix. They just accept the answer, even when that answer
intuitively makes little sense.

Conclusion: fact or fallacy


“The risk matrix makes the risk management process more transparent” is a fallacy.

“Helps get the team aligned”


This is a very interesting claim and one that reminds me of a number of risk workshops that I attended
some years ago, conducted by one of the ‘Big 4’ accounting firms. A common feature of many risk
workshops is the difference of opinion expressed by attendees. Differences that can take a skilled
facilitator some time to work through and resolve. To make the facilitation process easier and quicker
(so that a junior consultant, still in their professional year, could conduct a workshop), voting software
was used. This software came complete with nifty little consoles (looking a bit like TV remote controls,
but with far fewer buttons), allowing workshop participants to assign consequence and likelihood
values to a risk. The software would then exclude any obvious outliers and determine the average
‘score’ for consequence and likelihood. This would then be plotted onto a matrix, allowing the team of
participants to produce a fully aligned set of risk ratings.

Such an approach had huge issues, none greater than the exclusion of outliers. Differences of opinion
about risk are important factors to bring out in any risk assessment. If most participants are clustering
together in their assignment of ratings, but one participant is an obvious outlier (e.g., they are rating
risk far higher or lower than everyone else), then that outlier is the really interesting part of the
assessment. They are perceiving risk very differently to everyone else, and I want to know why. Do
they have different experiences that influence their perception? Are they misinformed (and why)? Or
have they had access to additional data and are better informed? I don’t want the team to be
artificially aligned through some imperfect tool. The same issue applies to just using the risk matrix
(without the software), I want to know why different individuals perceive the risk in different ways.
That tells me more about the risk than just plugging inputs into a matrix.
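The voting-tool mechanism described above can be sketched in a few lines (the votes and the z-score cutoff are hypothetical illustrations, not the actual software): the lone dissenting score, arguably the most informative signal in the room, is simply discarded before the ‘aligned’ rating is produced.

```python
from statistics import mean, stdev

def aligned_score(votes, z_cutoff=2.0):
    """Drop votes more than z_cutoff standard deviations from the mean,
    then average the remainder -- the 'forced alignment' approach."""
    m, s = mean(votes), stdev(votes)
    kept = [v for v in votes if s == 0 or abs(v - m) <= z_cutoff * s]
    excluded = [v for v in votes if not (s == 0 or abs(v - m) <= z_cutoff * s)]
    return mean(kept), excluded

# Nine participants cluster around 'minor' (2); one rates it 'severe' (5).
votes = [2, 2, 3, 2, 2, 3, 2, 2, 2, 5]
score, excluded = aligned_score(votes)
print(round(score, 2), excluded)   # the dissenting 5 never reaches the matrix
```

The tidy averaged score conceals exactly the disagreement that the facilitator should be investigating.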

What is more important than aligning a team is that individuals are aligned with themselves: that
they have been given the opportunity to go through some form of calibration of their perceptions
of consequence and likelihood. For example, a majority of individuals are biased, being either
underconfident or overconfident in their estimates of probability (Hubbard, 2014). A number of
studies have shown that individuals are able to make better estimates when actions are taken to help
remove or reduce these personal biases (Lichtenstein and Fischoff, 1980a; 1980b; 1977; Choo,
1977). The issue is that individuals need to calibrate themselves, through some form of calibration
training, before they can effectively contribute to estimates of risk (Hubbard, 2014).

The risk matrix does not provide a meaningful alignment of a team’s estimates of risk. It merely
provides a means of clustering what are often little more than guesses about a limited set of risk
attributes.

Conclusion: fact or fallacy

“The risk matrix helps to get the team aligned” is a fallacy.

“Increases stakeholder trust of the risk assessment”

Today, many risk analysts, risk managers, and decision-makers place an enormous amount of trust in
the risk matrix: trust that is placed in something that few of them really understand. Considering
that:
• Many risk matrices are either poorly validated or not validated at all.
• The risk matrix provides exceptionally low precision and accuracy.
• The risk matrix has very low granularity, being unable to distinguish risks with substantially
different quantitative values.
• In application, the risk matrix reduces what was a wealth of useful data down to a simplistic
consequence and likelihood construct.
• Using a risk matrix obscures the extended ranges of analytic values in producing a simplistic
single point value.
• The matrix promotes and amplifies biases that introduce additional uncertainty.

It appears that such trust is both misinformed and misplaced.

Conclusion: fact or fallacy


“The risk matrix increases stakeholder trust of the risk assessment” is true in many circumstances,
even when caution and mistrust would better serve the decision-maker.

A confused mess of claims
In addition to the unjustifiable claims that have been dissected above, there is a plethora of articles
(many by software companies) that make some other pretty wild claims, including that the risk matrix:

• Allows risk management strategies to be developed in advance.
• Removes “bottlenecks” in projects and allows better delivery of projects.
• Prevents scope creep in programs and projects.
• Provides better allocation and management of resources.
• Standardises evaluation and management of risk across the organisation.

It is sufficient to say that there is not one shred of robust published evidence that supports any of
these claims. It appears that many claims are simply made up to promote something that is inherently
dodgy in conceptualisation, construction, and application.

The risk matrix: An ethical dilemma?


If only half of the criticisms levelled against the use of matrices in risk analysis hold true, then the risk
matrix is a seriously flawed tool with a very real prospect of highly inaccurate and highly misleading
outputs. There can be few (if any) risk professionals that are unaware of at least some of the problems
with risk matrices. How many though have made any real concerted effort to review how they use risk
matrices and how they ‘report’ on the subsequent analytical products?

If we are providing a faulty product to someone (whether it is electronic goods, a car, or a risk
assessment) do we not have an obligation to inform the ‘buyer’/’customer’ about these problems?
With some types of product there is a legal duty of care to make full disclosures. In most situations
there will be a moral duty of care (what a ‘reasonable person in the street’ would expect to be done),
but for professionals, there is also an ethical duty of care to conduct oneself to an appropriate
standard of behaviour and fulfill certain obligations that are expected of a professional.

What then is the ethical duty of care owed by the risk professional to their customers and other
stakeholders? To be diligent and honest goes without saying. But one would also expect an obligation
to possess the required expertise to undertake the specific risk analysis (or other risk management
activity). To apply (and know how to apply) the specialised tools that they will use, and to inform the
stakeholder about any issues, limitations and assumptions that have affected or could affect the
analysis.

Accordingly, it seems that many risk professionals are negligent in meeting their ethical duty of care to
the users of their analytical and risk management products, by using risk matrices without advising
on, or warning about, the significant limitations of their use (Montague, 2004).

Is the risk matrix of any use?
The risk matrix has been heavily criticised by academics, practitioners, and users, in particular that the
matrix “… may be creating no more than an artificial and even untrustworthy picture of the relative
importance of hazards, which may be of little or no benefit to those trying to manage risk effectively
and rationally” (Ball and Watt, 2013).

In some circumstances, the risk matrix may be able to reasonably differentiate between ‘low’ risks
and ‘high’ risks56. However, as the qualitative values of different risks come closer together, the
matrix’s ability to differentiate those risks diminishes accordingly.
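A toy mapping (all band boundaries are hypothetical) makes this granularity problem concrete: two risks whose expected losses differ by more than an order of magnitude can land in the same cell and so receive identical ratings.

```python
from bisect import bisect_left

LIKELIHOOD_EDGES = [0.01, 0.1, 0.5]    # probability -> bands 1..4
CONSEQUENCE_EDGES = [1e4, 1e5, 1e6]    # dollar loss -> bands 1..4

def cell(prob, loss):
    """Map a (probability, dollar loss) pair to its matrix cell."""
    return (bisect_left(LIKELIHOOD_EDGES, prob) + 1,
            bisect_left(CONSEQUENCE_EDGES, loss) + 1)

risk_a = (0.12, 150_000)    # expected loss  $18,000
risk_b = (0.45, 950_000)    # expected loss $427,500
print(cell(*risk_a), cell(*risk_b))   # both map to cell (3, 3)
```

Despite a roughly 24-fold difference in expected loss, the matrix reports the two risks as indistinguishable.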

Many of the justifications for using risk matrices are based upon false equivalence: for example,
arguing for the use of a highly subjective and inconsistent matrix because there is some subjectivity
in all other risk tools, even quantitative methods57. So, the argument goes, because of this ever-present
subjectivity, why not just use a matrix? This is akin to arguing that we should all fly interstate in a
homemade helicopter (constructed of cardboard and elastic bands) because modern commercial
airliners also have some safety issues.

The real question that should be asked is: “does the use of risk matrices improve decision-making to
the same extent as other available techniques?”

Any type of risk assessment relies on assumptions and abstractions, which often introduce
considerable uncertainty. Yes, even the most sophisticated of quantitative techniques will have some
degree of uncertainty and error associated with it. One of the key aims of risk assessment (and risk
management) is to reduce uncertainty in order to make better decisions. This brings us to the nub
of the problem: given the innate uncertainty associated with risk assessment, why would someone
want to use a tool that introduces more uncertainty than it resolves?

On many occasions it has been convincingly argued that even simple quantitative risk analyses will
yield more meaningful results than relying on a risk matrix (for example: Rozell, 2015).

Remaining ignorant of the flaws and problems associated with the risk matrix, means that the matrix
will continue to provide low resolution and low value outputs. However, even if these problems
cannot be fixed, just being aware of the issues, and considering these issues when using the matrix
will enhance its usefulness. Including other tools and techniques alongside the risk matrix, will start
to provide some validation, and will also start to reduce dependence on the matrix as a preferred tool.

Why are alternatives not more popular?


It appears that there is an overriding reason why a highly flawed and often dubious tool such as the
risk matrix has retained such massive popularity, and why more robust alternative methodologies are
avoided by so many ‘risk managers’ and their ilk: the apparent simplicity of the risk matrix. It allows
anyone to pick up a matrix and spit out a risk rating without even a basic
understanding of the nature of uncertainty or of risk. Using many of the more robust alternatives
requires the user to do some work, apply some high school level science and mathematics

56
Depending upon how ‘low’ and ‘high’ are defined.
57
Yes, there has been substantial criticism about the performance of quantitative risk assessments, because of
flaws that are intrinsic to such methodologies (Rae et al, 2014), including a lack of validation of many such
approaches.

understanding, and spend some time doing calibration and validation. They may even have to spend
some time conducting other types of analysis to better understand their context.

It is a sad indictment that many ‘risk professionals’ are willing to accept superficiality and to use the
smoke and mirrors of pseudo-quantitative matrices to try to persuade decision-makers that their
analytical outputs are of more value than they can ever be. User and decision-maker alike will just
accept the ‘answer’ from a risk matrix, even when that answer intuitively makes little sense. Thus,
because there is little acceptance of the limitations of risk matrices, there is little demand for change.

A way forward
Can the intrinsic and application flaws of the risk matrix be addressed? One of the major problems
with using most types of risk matrix is that they are treated as though they are a magical black box.
The practitioner pumps highly simplified data into one end of this ‘box’ which then spits out a
simplistic answer, a single-word rating or abstract number. That simplistic answer is then used to drive
complex decision-making (rather than just being one source among multiple different types of influencing
information). A good starting point for improving risk assessment and risk management is for risk
professionals to acquire a better understanding of the nature of risk, and apply that understanding to
how they use the risk matrix. This will at the very least provide a basic personal evaluation of the
capabilities and limitations of the risk matrix, and how much reliance should be placed on its products.

Trying to validate a risk matrix


All robust analytical tools are subject to testing of the model upon which they are based, along with
testing, review, and validation of the use of the tool in real-life situations. This includes monitoring
and assessing the performance of the tool over the life of its operation.

However, in my experience of advising individuals and organisations, I have seen only a handful of risk
matrices that were based upon tested models, and fewer still that were regularly tested and
validated for their performance. I have looked at hundreds of risk assessment approaches (that rely
on matrices) in the last decade or so, and less than 5% of them even had a documented design process
for constructing and validating their risk matrix tools.

There is no getting away from the simple fact that the risk matrix remains popular today. Risk
professionals will not abandon such a dearly loved ‘tool’ overnight, and it will likely be a feature of risk
management for many years to come.

So, without being overly prescriptive, if one is committed to using a risk matrix, how can its design and
application be improved? There are some actions that can be taken:

• Establish a simple model which allows the relationships between risk sources, ‘controls’,
‘elements of value’, and impacts to be explored for the context in which the matrix will be
used.
• Review and adjust the model, with actual historical or reasonable hypothetical data. This
should include surfacing, challenging, and adjusting assumptions, until the model’s outputs
are validated.

• Continue to build the model iteratively with increasing complexity and comprehensiveness,
until an appropriate and acceptable balance between effort and reality is achieved.
• Prototype the matrix, test with the model, and evaluate the ‘common sense’ of the outputs.
• Calibrate the risk matrix criteria against quantitative risk values (Baybutt, 2015).
• Review and adjust the model and construction of the matrix, with available data, and ensure
the matrix’s ability to deal with the expected types and ranges of risk.
• Develop a narrative analysis of the risk, explore the wide range of other attributes of risk and
use these to reach a judgment about the importance of individual risks and their
interrelationships. Compare this ‘narrative analysis’ with the outputs of the risk matrix for
further validation.
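One way to approach the calibration and review steps above is a simple ranking-reversal check, in the spirit of Cox (2008) and Åkerberg (2021): quantify each cell with representative probability and loss midpoints, then flag any pair of cells whose ordinal ratings disagree with their quantitative expected losses. The midpoints and the deliberately mis-specified prototype matrix below are hypothetical.

```python
import itertools

# RATING[i][j]: ordinal rating for likelihood band i, consequence band j.
# This prototype is deliberately mis-specified to contain one reversal.
RATING = [[1, 2, 3],
          [1, 2, 3],
          [1, 3, 3]]
PROB_MID = [0.05, 0.3, 0.7]     # representative probability per band
LOSS_MID = [1e4, 1e5, 1e6]      # representative dollar loss per band

def reversals(rating):
    """Return cell pairs whose ordinal ranks disagree with expected loss."""
    cells = [(rating[i][j], PROB_MID[i] * LOSS_MID[j], (i, j))
             for i in range(3) for j in range(3)]
    return [(a[2], b[2]) for a, b in itertools.combinations(cells, 2)
            if (a[0] - b[0]) * (a[1] - b[1]) < 0]

print(reversals(RATING))   # flags the pair ((0, 1), (2, 0))
```

Here cell (0, 1) is rated 2 on an expected loss of $5,000, while cell (2, 0) is rated only 1 on an expected loss of $7,000: the prototype would need its ratings or criteria adjusted before use.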

Starting to adopt alternative tools and techniques

Today, while writing the concluding parts of this paper, I picked up a couple of ‘old’ risk tomes from my
bookshelves: one on probability assessment and one on quantitative risk analysis. Flicking
through these books, there were more than a few pages replete with equations, many of which I have
not previously applied in practice. Time to brush up on calculus was my first thought. Then came a
second thought, this could be scary to risk professionals that have only ever used risk matrices and
qualitative analysis, and is perhaps a key reason why they have only ever relied on the matrix. Then a
final thought, what would some less scary alternatives look like?

Many professionals will want to stick with the risk matrix simply because they believe that they do
not have the data to use other, more informative, tools and techniques. Let us assume for a moment
(and only for a moment!) that they are correct, and that they do not have quantitative data for a
quantitative analysis.

If they have sufficient data to use a risk matrix, then they will also have sufficient data for the
following:

• Developing a graphical risk model, showing the relationships across risk sources, influencing
and contributing factors, controls, vulnerabilities, and areas and types of uncertainty.
• Developing a narrative risk scenario that describes the current conditions, the different ways
in which (and why) those conditions could change, and the nature of the consequences that
could arise.
• Applying tools such as event tree analysis, decision trees, FRAM58, bow-tie diagrams, etc., that
support the development of risk models and risk scenarios.
• Conducting sensitivity analysis to obtain a better understanding of the uncertainty within the
matrix and to model how different changes in the parameters (such as consequence and
likelihood criteria) result in changes in the assignment of risk levels.
• Using multiple criteria decision analysis, where there is uncertainty in the face of multiple and
conflicting objectives (Comes et al, 2011; Casperry, 2008).
• Applying the risk-adjusted loss (RAL) method (Morat and Doremus, 2020).

58
FRAM: functional resonance analysis method (Hollnagel and Slater 2022; McGill et al, 2022; Damen and de
Vos, 2021; Hollnagel et al, 2014, Hollnagel 2012), provides an analysis technique that maps out
interdependencies in complex processes and systems of work under variability.
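The sensitivity-analysis idea in the list above can be sketched very simply (the risk probabilities and band boundaries are hypothetical): shift one likelihood-band boundary and count how many risks change band, which indicates how fragile the assigned ratings are to the criteria chosen.

```python
from bisect import bisect_left

RISKS = [0.04, 0.08, 0.11, 0.22, 0.48, 0.55]   # assessed annual probabilities

def band(p, edges):
    """Map a probability to its ordinal likelihood band (1-based)."""
    return bisect_left(edges, p) + 1

baseline = [0.05, 0.25, 0.5]
shifted  = [0.10, 0.25, 0.5]    # move the lowest boundary from 5% to 10%

flips = sum(band(p, baseline) != band(p, shifted) for p in RISKS)
print(flips, "of", len(RISKS), "risks change band")
```

If a modest change to a single criterion re-rates a material share of the register, the ratings are telling you at least as much about the boundaries as about the risks.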

Returning to our earlier assumption that there is an argument to continue using the risk matrix
because there is insufficient data to use alternative quantitative tools: some authors argue that this is
rarely, if ever, a valid excuse, and that anything can be measured quantitatively with the right
approach (Hubbard and Seiersen, 2016; Hubbard, 2014). Certainly, from a practical perspective, I
have found little difficulty in taking a client’s qualitative risk criteria and developing a quantitative risk
approach. Even if it is just a simple substitution of dollar values for the descriptive ordinal criteria that
they routinely use. This at least allows a quantitative range of risk measurements to be calculated,
which instantly provides a more granular discernment of the relative importance of the risks.
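A minimal sketch of that substitution (the dollar and probability bands below are assumptions, not a recommended scale): each ordinal label becomes a quantitative range, so each risk yields an expected-loss range rather than a single cell.

```python
# Hypothetical quantitative ranges standing in for ordinal labels.
CONSEQUENCE = {"minor": (1e3, 1e4), "moderate": (1e4, 1e5),
               "major": (1e5, 1e6)}           # dollars
LIKELIHOOD  = {"rare": (0.01, 0.05), "possible": (0.05, 0.3),
               "likely": (0.3, 0.7)}          # annual probability

def expected_loss_range(likelihood, consequence):
    """Combine band endpoints into a (low, high) expected-loss range."""
    p_lo, p_hi = LIKELIHOOD[likelihood]
    c_lo, c_hi = CONSEQUENCE[consequence]
    return p_lo * c_lo, p_hi * c_hi

# Two risks that a typical ordinal matrix might well rate identically:
print(expected_loss_range("possible", "major"))   # roughly $5k .. $300k
print(expected_loss_range("likely", "moderate"))  # roughly $3k .. $70k
```

Even this crude translation immediately shows that the first risk’s plausible losses extend far beyond the second’s, a distinction that a single cell rating cannot express.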

In conclusion
No matter how many practitioners eventually read this paper and accept its arguments, the risk
matrix will remain the preferred methodology for the foreseeable future for many risk professionals
and decision makers. The matrix is too firmly entrenched in the ‘risk psyche’, is too readily available,
and is too easily picked up and used by anyone without requiring any real understanding of risk.

However, if practitioners can start to think about how risk matrices are constructed and validated,
what their limitations are, and what the outputs really mean, then we will start to move toward
generating more meaningful and more valuable risk intelligence.

I will continue to develop and expand my own tool bag of analytical techniques, and further deepen
my love for Bayesian belief network modelling! However, I will also at times still use a risk matrix (for
all of its sins), when it makes sense to do so. Though hopefully with a reasonable understanding of
what the results actually mean, and the conclusions that one should not make about matrix products.

References
Åkerberg F. (2021). Risk Ranking Reversals and Classification Ranking Reversals in Risk Matrices. Thesis, Faculty
of Engineering and Sustainable Development, University of Gävle, Sweden.
Arrow K.J., Blackwell D., and Girshick M.A. (1949). Bayes and Minimax Solutions of Sequential Decision Problems.
Econometrica 17(3/4) pp. 213-244.
Artzner P., Delbaen F., Eber J.-M., and Heath D. (1999). Coherent Measures of Risk. Mathematical Finance, 9 (3)
pp. 203–228.
Avila R. (2015). Uncertainty Analysis and Risk Assessment. In: Walther, C., Gupta, D. (eds) Radionuclides in the
Environment. Springer, Cham.
Aven T. (2017). Improving Risk Characterisations in Practical Situations by Highlighting Knowledge Aspects,
with Applications to Risk Matrices. Reliability Engineering and System Safety 167 pp. 42-48.
Backlund F. and Hannu J. (2002), Can we Make Maintenance Decisions on Risk Analysis Results? Journal of
Quality in Maintenance Engineering 8(1) pp. 77-91.
Ball D.J. and Watt J. (2013). Further Thoughts on the Utility of Risk Matrices. Risk Analysis 33(11) pp. 2068-2078.
Baybutt P. (2015). Calibration of Risk Matrices for Process Safety. Journal of Loss Prevention in the Process
Industries 38 pp.163-168.
Baybutt P. (2016). Cognitive Biases in Process Hazard Analysis. Journal of Loss Prevention in the Process
Industries 43 pp. 372-377.
Benbasat I., Dexter A.S., and Todd P. (1986). The Influence of Color and Graphical Information Presentation in a
Managerial Decision Simulation. Human–Computer Interaction 2(1) pp. 65–92.
Benjamin D. J. (2019). Errors in Probabilistic Reasoning and Judgment Biases. Handbook of Behavioral Economics
- Foundations and Applications 2, pp. 69-186, (Editors: B. D. Bernheim, S. DellaVigna, and D. Laibson). North-
Holland/Elsevier.
Booth R.W. and Sharma D. (2020). Attentional Control and Estimation of the Probability of Positive and Negative
Events. Cognition and Emotion 34(3) pp. 553-567.
Bordallo P., Conlon J.J., Gennaioli N., Kwon S.Y., and Shleifer A. (2021). Memory and Probability. Working paper
29273., September 2021. National Bureau of Economic Research.
Bower J. and Khorakian A. (2014). Integrating Risk Management in the Innovation Project. European Journal of
Innovation Management 17(1) pp. 25-40.
Branch F. and Hegdé J. (2023). Toward a More Nuanced Understanding of Probability Estimation Biases.
Frontiers in Psychology 14: 1132168. doi: 10.3389/fpsyg.2023.1132168
BSI. (2008). Occupational Health and Safety Management Systems — Guide, 2008, BS 8800, British Standards
Institution, London.
Budescu D.V., Broomell S., and Por H.-H. (2009). Improving Communication of Uncertainty in the Reports of the
Intergovernmental Panel on Climate Change. Psychological Science 20(3) pp. 299-308.
Budescu D., Kuhn K.M., Kramer K.M., and Johnson T.R. (2002). Modeling Uncertainty Equivalents for Imprecise
Gambles. Organizational Behaviour and Human Decision Processes 88(2) pp. 748-768.
Busby K. and Kazarians M. (2018). Pitfalls of Using the Wrong Risk Matrix in PHA and LOPA. American Institute
of Chemical Engineers, 2018 Spring Meeting and 14th Global Congress on Process Safety Orlando, Florida April
22–25 2018.
Butler J.V., Guiso L., and Japelli T. (2011). The Role of Intuition and Reasoning in Driving Aversion to Risk and
Ambiguity. Working Paper No 282, Centre for Studies in Economics and Finance, University of Naples, Italy.
Butler J.V., Guiso L., and Japelli T. (2013). Manipulating Reliance on Intuition Reduces Risk and Ambiguity
Aversion. Accessed at: https://jeffreyvbutler.org/papers/IntuitiveThinkingCausation_Final.pdf.
Cai Shi M. and Lucietto M. (2022). The Preferences of the Use of Intuition Over Other Methods of Problem Solving
by Undergraduate Students. The European Educational Researcher 5(3) pp. 253-275.
Cárdenas I., Al-Jibouri S., Halman J., Linde W., and Kaalberg F. (2014). Using Prior Risk-Related Knowledge to
Support Risk Management Decisions: Lessons Learnt from a Tunnelling Project. Risk Analysis, 34(10).
Casperry G. (2008). Assessing Decision Tools for Secondary Risks of Capital Projects: Weighing EIA versus More
Complex Approaches. Management Decision 46(9) pp. 1393-1398.
Chernoff H. (1954). Rational Selection of Decision Functions. Econometrica 22(4) pp. 422-443.
Chebat J. and Morrin M. (2007). Colors and Cultures: Exploring the Effects of Mall Décor on Consumer
perceptions. Journal of Business Research 60(3) pp. 189–196.

Cheng F., Wu C., and Yen D.C. (2009). The Effect of Online Store Atmosphere on Consumer’s Emotional
Responses—An Experimental Study of Music and Colour. Behaviour & Information Technology 28(4) pp. 323–
334.
Choo G.T.G. (1977). Training and Generalisation in Assessing Probabilities for Discrete Events. Technical report
76(5) pp. 12-13.
Comes T., Hiete M., Wijngaards N., and Schultman F. (2011). Decision Maps: A Framework for Multi-criteria
Decision Support Under Severe Uncertainty. Decision Support Systems 52 pp. 108-118.
Cowan N. (2010). The Magical Mystery Four: How is Working Memory Capacity Limited, and Why? Current
Directions in Psychological Science 19(1) pp. 51-57.
Cox L.A. (2009). Risk Analysis of Complex and Uncertain Systems. International Series in Operations Research &
Management Science 129. Springer Science+Business Media.
Cox L.A. (2008). What’s Wrong with Risk Matrices? Risk Analysis 28(2) pp. 497–512.
Cox L.A., Babaeyev D., and Huber W. (2005). Some Limitations of Qualitative Risk Rating Systems. Risk Analysis
25(3) pp. 651-662.
Cox L.A. and Popken D.A. (2007). Some Limitations of Aggregate Exposure Metrics. Risk Analysis. 27(2) pp. 439–
45.
Damen N.L. and de Vos M.S. (2021). Experiences with FRAM in Dutch Hospitals: Muddling Through with Models.
In: Resilient Health Care, pp. 71–80. CRC Press, Boca Raton, FL.
De Luca Picione R. and Lozzi U. (2021). Uncertainty as a Constitutive Condition of Human Experience. An
Extensive Review of the Paradoxes and Complexities of Sensemaking Processes in the Face of Uncertainty Crisis.
SAS Journal I(2) pp. 14-53. ISSN 2035-4630.
DOD (1984). Military Standard System Safety Program Requirements, MIL-STD-882B. US Department of Defence,
30 March, 1984.
DOD (1993). Military Standard System Safety Program Requirements, MIL-STD-882C. US Department of Defence,
19 January 1993.
Du N. and Budescu D.V. (2021). The Value of Being Precise. Journal of Economic Psychology 83 article # 102358.
Duijm N. (2015). Recommendations on the Use and Design of Risk Matrices. Safety Science 76 pp. 21-31.
Dzulkifli M.A. and Mustafar M.F. (2013). The Influence of Colour on Memory Performance: A Review. Malaysian
Journal of Medical Science 20(2) pp.3-9.
Edwards P. and Bowen, P. (2005). Risk Management in Project Organization. University of New South Wales
Press Ltd. Australia.
Elmonstri M. (2014). Review of the Strengths and Weaknesses of Risk Matrices. Journal of Risk Analysis and Crisis
Response 4(1) pp. 49-57.
Emblemsvåg J. (2010). The Augmented Subjective Risk Management Process. Management Decision 48(2) pp.
248-259.
Faramondi L., Oliva G., Setola R., and Bozóki S. (2023). Robustness to Rank Reversals in Pairwise Comparison
Matrices Based on Uncertainty Bounds. European Journal of Operational Research 304(2) pp. 676-688.
Fausset C.B., Rogers W.A., and Fisk A.D. (2008). Visual Graph Display Guidelines. Technical Report HFA-TR-0803.
Atlanta, GA: Georgia Institute of Technology.
Fávero L.P., Belfiore P., and de Freitas Souza R. (2023). Chapter 3 - Types of Variables, Measurement Scales, and
Accuracy Scales. Data Science, Analytics and Machine Learning with R, pp. 29-36. Academic Press.
Fläder J. (2021). Sensemaking Under Conditions of Extreme Uncertainty: From Observation to Action. In:
Sensemaking for Security pp. 25-45, (Editor: A.J. Masys). Advanced Sciences and Technologies for Security
Applications. Springer, Cham.
Greenhut M.L. (1962). A General Theory of Maximum Profits. Southern Economic Journal 28(3) pp. 278-285.
Hammond J.S.; Keeney R.L., and Raiffa H. (1998). The Hidden Traps in Decision Making. Harvard Business Review
September-October 1998.
Heckler A.F., Mikula B., and Rosenblatt R. (2013). Student Accuracy in Reading Logarithmic Plots: The Problem
and How to Fix it. 2013 IEEE Frontiers in Education Conference (FIE) pp.
1066–1071.
Heilman R.M., Crisan L.G., Houser D., Miclea M., and Miu A.C. (2010). Emotion Regulation and Decision Making
under Risk and Uncertainty. Emotion 10(2) pp. 257-265.
Ho V. (2010). The Risk of Using Risk Matrix in Assessing Safety Risk. HAKRMS, Safety Engineering Lecture Series
2.2.
Hollnagel E. (2012). FRAM, the Functional Resonance Analysis Method: Modelling Complex Socio-technical
Systems. Ashgate Publishing, Ltd. Farnham, UK.
Hollnagel E., Hounsgaard J., and Colligan L. (2014). FRAM-the Functional Resonance Analysis Method: A
Handbook for the Practical Use of the Method. Centre for Quality, Region of Southern Denmark.
Hollnagel E., Slater D. (2022). FRAMSYNT A FRAM Handbook. Accessed at:
https://www.researchgate.net/profile/David-
Slater/publication/364959115_A_FRAM_HANDBOOK/links/63611d2f6e0d367d91e7b7e2/A-FRAM-
HANDBOOK.pdf
Hong Y., Pasman H.J., Quddus N., Mannan M.S. (2020). Supporting Risk Management Decision Making by
Converting Linguistic Graded Qualitative Risk Matrices Through Interval Type-2 Fuzzy Sets. Process Safety and
Environmental Protection 134 pp. 308-322.
Hubbard D.W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. John Wiley & Sons, Inc.,
Hoboken, New Jersey.
Hubbard D.W. (2014). How to Measure Anything: Finding the Value of Intangibles in Business. 3rd Edition. John
Wiley & Sons, Inc., Hoboken, New Jersey.
Hubbard D. and Evans D. (2010) Problems with Scoring Methods and Ordinal Scales in Risk Assessment. IBM
Journal of Research and Development 54 (3) pp. 2:1-2:10.
Hubbard D.W. and Seiersen R. (2016). How to Measure Anything in Cybersecurity Risk. John Wiley & Sons, Inc.,
Hoboken, New Jersey.
Hussey D.E. (1978). The Directional Policy Matrix—A New Aid to Corporate Planning. Long Range Planning 11(4)
pp. 2-8.
ISO (2009). Risk Management: Risk Assessment Techniques. ISO/IEC 31010. International Organization for
Standardization, Geneva.
Johnson S.C. and Jones C. (1957). How to Organise for New Products. Harvard Business Review, May-June 1957,
p. 52.
Kahneman D. and Tversky A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica 47 (2):
263–291.
Karanikas N. and Kaspers S. (2016). Do Experts Agree When Assessing Risks? An Empirical Study. In Proceedings
of the 50th European Safety, Reliability and Data Association (ESReDA) Seminar, pp. 1-10. European Safety,
Reliability and Data Association (ESReDA), Lithuania.
Kaya G.K., Ward J., and Clarkson J. (2019). A Review of Risk Matrices Used in Acute Hospitals in England. Risk
Analysis 39(5) pp.1060-1070.
Kendricks E.J. and Gagge A.P. (1949). Aeromedical Aspects of Jet Propelled Aircraft: Problems of High Speed
Flight. Bulletin of the US Army Medical Department, 1 July 1949, 9(7) pp. 552-562.
Klein G. (2011). Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making. MIT Press,
Cambridge MA.
Krisper M. (2021). Problems with Risk Matrices Using Ordinal Scales. arXiv:2103.05440v1.
Kosovac A., Davidson B., and Malano H. (2019). Are We Objective? A Study into the Effectiveness of Risk
Measurement in the Water Industry. Sustainability 11 Article: 1279.
Kutsch L. (2019). Can We Rely on Our Intuition? Scientific American, 15 August 2019.
Leong K., Sung A., Williams T., Andoniou C., and Sun F. (2019). The Importance of Colour on the
Communication of Financial Data in Management. Journal of Work-Applied Management
11(1) pp. 92-100.
Lichtenstein S. and Fischhoff B. (1977). Do Those Who Know More Also Know More About How Much They Know?
Organizational Behavior and Human Performance 20(2) pp. 159-183.
Lichtenstein S. and Fischhoff B. (1980a). How Well do Probability Experts Assess Probability? Decision Research
Report. Eugene, OR.
Lichtenstein S. and Fischhoff B. (1980b). Training for Calibration. Organizational Behavior and Human
Performance 26(2) pp. 149-171.
MacKenzie I.S. (2013). Chapter 4: Scientific Foundations. Human Computer Interaction pp. 121-156. Elsevier Inc.
Mahadevan S. and Sarkar S. (2009). Uncertainty Analysis Methods. International Atomic Energy Agency.
Maitlis S., Vogus T.J., Lawrence T.B. (2013). Sensemaking and Emotion in Organizations. Organizational
Psychology Review 3(3) pp. 222-247.
McGill A., Smith D., McCloskey R., Morris P., Goudreau A., and Veitch B. (2022). The Functional Resonance
Analysis Method as a Health Care Research Methodology: A Scoping Review. JBI Evidence Synthesis, 20(4) pp.
1074–1097.
Menge D.N.L., MacPherson A.C., Bytnerowicz T.A., Quebbeman A.W., Schwartz N.B., Taylor B.N., and Wolf A.A.
(2018). Logarithmic Scales in Ecological Data Presentation May Cause Misinterpretation. Nature Ecology and
Evolution, 2(9), 1393–1402.
Monat J.P. and Doremus S. (2020). An Improved Alternative to Heat Map Risk Matrices for Project Risk
Prioritization. JMPM Issue#22, 7(4) pp. 214-228.
Montague P. (2004). Reducing the Harms Associated with Risk Assessments. Environmental Impact Assessment
Review 24(2-8) pp. 733-748.
Mu D., Kaplan T.R., and Dankers R. (2018). Decision Making with Risk-based Weather Warnings. International
Journal of Disaster Risk Reduction 30 pp. 59–73.
Naik S. and Prasad Ch.V.V.S.N.V. (2022). Risk and Risk Management: A Historical Review and Research Agenda.
International Journal of Business Continuity and Risk Management 12(3) pp. 244-262.
Naqvi N., Shiv B., and Bechara A. (2006). The Role of Emotion in Decision Making: A Cognitive Neuroscience
Perspective. Current Directions in Psychological Science 15(5) pp. 260-264.
Nas I., Helsloot I., and Cator E. (2022). Of Critical Importance: Toward a Quantitative Probabilistic Risk
Assessment Framework for Critical Infrastructure. Journal of Contingencies and Crisis Management 31(2) pp.
171-184.
Nicholls C. and Carroll J. (2017). Is There Value in a “One Size Fits All” Approach to Risk Matrices? Hazards 27,
Symposium Series No. 162. IChemE.
NRC (2009). Science and Decisions: Advancing Risk Assessment. National Research Council of the National
Academies. The National Academies Press, Washington DC.
NRC Committee (1994). Science and Judgement in Risk Assessment. National Research Council (US) Committee
on Risk Assessment of Hazardous Air Pollutants. Washington (DC).
Oberauer K., Farrell S., Jarrold C., and Lewandowsky S. (2016). What Limits Working Memory Capacity?
Psychological Bulletin 142(7) pp. 758-799.
Oboni C., and Oboni F. (2012). Is it True that PIGs Fly when Evaluating Risks of Tailings Management Systems?
Mining 2012, Keystone CO.
Oppenheimer D.M., LeBoeuf R.A., and Brewer N. T. (2008). Anchors Aweigh: A Demonstration of Cross-modality
Anchoring and Magnitude Priming. Cognition 106(1) pp. 13–26.
Ouyang Y. and Li. X. (2010). The Bullwhip Effect in Supply Chain Networks. European Journal of Operational
Research 201(3) pp. 799–810.
Paté-Cornell M.E. (1996). Uncertainties in Risk Analysis: Six Levels of Treatment. Reliability Engineering and
System Safety 54 pp. 95-111.
Pickering A. and Cowley S.P. (2010). Risk Matrices: Implied Accuracy and False Assumptions. Journal of Health
and Safety Research and Practice 2(1) pp. 11-18.
Platt M.L. and Huettel S.A. (2008). Risky Business: The Neuroeconomics of Decision Making Under Uncertainty.
Nature Neuroscience 11(4) pp. 398-403.
Pogue G.A. (1970). An Inter-temporal Model of Investment Management. Alfred P. Sloan School of Management,
MIT, Cambridge, MA.
Proto R., Recchia G., Dryhurst S., and Freeman A.L.J. (2023). Do Colored Cells in Risk Matrices Affect Decision-making
and Risk Perception? Insights from Randomized Controlled Studies. Risk Analysis 43 pp. 2114-2128.
Resnik D.B. (2017). The Role of Intuition in Risk/Benefit Decision-Making in Human Subjects Research.
Accountability in Research 24(1) pp. 1-29.
Reyna V.F., and Brust-Renck P.G. (2020). How Representations of Number and Numeracy Predict Decision
Paradoxes: A Fuzzy-trace theory Approach. Journal of Behavioral Decision Making 33(5), pp. 606–628.
Romano A., Sotis C., Dominioni G., and Guidi S. (2020). The Scale of COVID-19 Graphs Affects Understanding,
Attitudes, and Policy Preferences. Health Economics 29(11), 1482–1494.
Rozell D.J. (2015). A Cautionary Note on Qualitative Risk Ranking of Homeland Security Threats. Homeland
Security Affairs 11 Article #3.
Ruan X., Yin Z., and Frangopol D.M. (2015). Risk Matrix Integrating Risk Attitudes Based on Utility Theory. Risk
Analysis 35(8) pp. 1437-1447.
Shah P. and Hoeffner J. (2002). Review of Graph Comprehension Research: Implications for Instruction.
Educational Psychology Review 14(1) pp. 47-69.
Shavykin A. and Karnatov A. (2020). The Issue of Using Ordinal Quantities to Estimate the Vulnerability of
Seabirds to Oil Spills. Journal of Marine Science and Engineering 8(1) article 1026.
Shell (1975). The Directional Policy Matrix - A New Aid to Corporate Planning. Engineering and Process
Economics, 2, pp.181-189.
Silic M., and Cyr D. (2016). Colour Arousal Effect on Users’ Decision-Making Processes in the Warning Message
Context. In: HCI in Business, Government, and Organizations: Information Systems: Third International
Conference, HCIBGO 2016 (Editors: F.-H. F. Nah and C.-H. Tan) Held as Part of HCI International 2016, Toronto,
Canada, July 17-22, 2016, Proceedings, Part II (pp. 99-109). Cham: Springer International Publishing.
Smith E.D., Siefert W.T., and Drain D. (2009). Risk Matrix Input Data Biases. Systems Engineering 12 (4): 344–
360.
Soeiro de Carvalho P. (2021). Sensemaking in Ambiguous and Uncertain Environments. IF Insight & Foresight,
accessed at:
https://www.academia.edu/50090926/SENSE_MAKING_IN_AMBIGUOUS_AND_UNCERTAIN_ENVIRONMENTS
Stoner K.M. (1982). A Decision Model for Evaluating Land Disposal of Hazardous Wastes. LSSR 65 – 82. School
of Systems and Logistics, Air Force Institute of Technology. Wright Patterson AFB.
Stroop J. R. (1935). Studies of Interference in Serial Verbal Reactions. Journal of Experimental Psychology 18 (6)
pp. 643–662.
Sutherland H., Recchia G., Dryhurst S., and Freeman A.L.J. (2022). How People Understand Risk Matrices, and
How Matrix Design Can Improve their Use: Findings from Randomized Controlled Studies. Risk Analysis 42(5) pp.
1023-1041.
Sun T. and Wåhlström D. (2022). Sensemaking & Decision Making in Uncertainty: A Case Study on How Tech
Leaders Navigate the Metaverse. Thesis, Uppsala University, Uppsala, Sweden.
Taleb N.N. (2007). The Black Swan: The Impact of the Highly Improbable. Allen Lane, London, UK.
Taylor J. and Weerapana A. (2009). Principles of Macroeconomics. Global Financial Crisis eEdition. South-
Western Pub.
Tebehaevu O.J. (2015). Uncertainty in Qualitative Risk Analysis and Rating Systems: Modeling Decision Making
Determinants. Thesis: Department of Technology Systems, East Carolina University.
Tufte E.R. (2006). Beautiful Evidence. Graphics Press, Cheshire, CT.
Tufte E.R. (2001). The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT.
Tzelgov J., Meyer J., and Henik A. (1992). Automatic and Intentional Processing of Numerical Information. Journal
of Experimental Psychology: Learning, Memory, and Cognition 18(1) pp.
166–179.
Thomas P., Bratvold R.B., and Bickel J.E. (2014). The Risk of Using Risk Matrices. SPE Economics and
Management, 10 April 2014.
Thompson B. (2023). Sensitivity and Uncertainty. Sandia National Laboratories.
Tversky A., and Kahneman D. (1974). Judgment Under Uncertainty: Heuristics and Biases. Science, 185(4157)
pp. 1124–1131.
Tversky A. and Kahneman D. (1992). Advances in Prospect Theory: Cumulative Representation of Uncertainty.
Journal of Risk and Uncertainty 5(4), 297-323.
USDA (2014). Irrigation Siphon Tube. Comparison Extension Work in Agriculture and Economics. USDA and Texas
A&M College System.
Vatanpour S., Hrudey S.E., and Dinu I. (2015). Can Public Health Risk Assessment Using Risk Matrices Be
Misleading? International Journal of Environmental Research and Public Health 12 pp. 9575-9588.
Voss R.P., Corser R., McCormick M., and Jasper J.D. (2018). Influencing Health Decision-making: A Study of Colour
and Message Framing. Psychology & Health 33(7) pp. 941–954.
Wall K.D. The Trouble with Risk Matrices. Naval Postgraduate School (DRMI) Working Paper.
WASH1400 (1975). Reactor Safety Study; An Assessment of Accident Risks in US Commercial Nuclear Power
Plants. Main Report, October 1975. United States Nuclear Regulatory Commission.
Wensley R. (1982). PIMS and BCG: New Horizons or False Dawn? Strategic Management Journal 3(2) pp. 147-158.
Wilson A. (2014). Inherent Flaws in Risk Matrices May Preclude Them from Being Best Practices. Journal of
Petroleum Technology, 31 July 2014.
Wilson A.G. (1994). Cognitive Factors Affecting Subjective Probability Assessment. ISDS Discussion Paper #94-
02, 1 February 1992. Institute of Statistics and Decision Sciences, Duke University.
Wilson T.P. (1971). Critique of Ordinal Variables. Social Forces 49(3) pp. 432-444.
Wind Y. and Mahajan V. (1981). Designing Product and Business Portfolios. Harvard Business Review January
1981.
Windschitl P. and Weber E. (1999). The Interpretation of ’Likely’ Depends on the Context, but 70% is 70%-Right?
The Influence of Associative Processes on Perceived Certainty. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 25(6) pp. 1514–1533.
Wojnilower A.M. (1962). Examiner Criticism Rates in Relation to Industry and Size of Borrower. In: The Quality
of Bank Loans: A Study of Bank Examination Records. NBER.

61
Woodruff J.M. (2005). Consequence and Likelihood in Risk Estimation: A Matter of Balance in UK Health and
Safety Risk Assessment Practice. Safety Science, 43(5–6) pp. 345–353.
Xia G., Henry P., Li M., Queiroz F., Westland S., and Yu L. (2022). A Comparative Study of Colour Effects on
Cognitive Performance in Real-World and VR Environments. Brain Sciences 12 Article #31.
Yu L., Westland S., Chen Y., and Li Z. (2021). Colour Associations and Consumer Product-colour Purchase
Decisions. Color Research and Application 46(5) pp. 1119-1127.