
Journal of Banking and Finance 8 (1984) 199-227.

North-Holland

EMPIRICAL MODELS FOR THE MONITORING OF UK CORPORATIONS

Richard J. TAFFLER
City University Business School, London, EC2Y 8HB, UK

This paper has four main aims. Firstly it attempts a critical appraisal of extant UK Z-score
models and seeks to assess in each case their operational utility. Next two previously
unpublished models are described which address respectively (i) the need for separate models for
manufacturing and distribution companies, and (ii) the utility of the jackknife discriminant
approach in practice. Then developments of the technique to enhance considerably its utility to
the practitioner are described. Finally a brief review of how such approaches are currently being
used in the UK and by whom is provided.

1. Introduction
The UK provides a financial environment ideal for the successful
development of statistical models for the assessment of company solvency
and performance. The amount and quality of financial information about
corporate entities available to the analyst is similar to, and in the case of
privately owned companies considerably greater than, the US.¹ In addition
there is a well developed stockmarket with as many enterprises listed as on
the main US exchanges, a number of computer databases of standardised
company financial information and also a corporate failure rate among the
highest in the developed world. Not surprisingly then analysts have been
quick to adapt techniques successfully applied in the US to local conditions.
This paper has four main aims. Firstly in section 2 it provides a critical
review of the salient features of documented UK Z-score models, which are
discussed in chronological sequence. Then in the next section two new
models, unpublished heretofore, are described. Respectively these address
issues relating to the need in the UK for separate models for manufacturing
and distribution enterprises and, in the development of a private company
model, explore the utility of the jackknife discriminant technique. Next
section 4 describes developments of the Z-score approach to make it more
operational and finally a review of how such techniques are being used
currently in the UK, for what purposes and by whom is attempted. A

¹Because of the Companies Act requirement for all limited companies, whether listed or not,
to file audited accounts at Companies House, which are then made available to the public on
microfiche.

0378-4266/84/$3.00 © 1984, Elsevier Science Publishers B.V. (North-Holland)



concluding section provides a summary and some comments on how such
approaches should be used.

2. UK Z-score models
Work in the UK in this area has generally followed the methodology and
objectives of US studies particularly the seminal Altman (1968) paper and
has been largely restricted to the quoted corporate sector. The underlying
2-group linear discriminant analysis (LDA) approach adopted in virtually all
the studies is well known and does not need to be described here.²

2.1. Lis, 1972³


This, the first UK study, is described briefly by Bolitho (1973). The
researcher, Lis, developed a 4-variable discriminant function with ratios
based on Altman (1968), using as his bankrupt sample 30 major quoted
manufacturing, construction and retailing failures between 1964 and 1972,
with his equal-size continuing sample matched by industry, asset size
and year. The model is given by

Z = 0.063X1 + 0.092X2 + 0.057X3 + 0.0014X4,

where

X1 = working capital/total assets,
X2 = earnings before interest and tax/total assets,
X3 = retained earnings (adjusted for scrip issues)/total assets, and
X4 = net worth/total debt.⁴

A selected cut-off of 0.037 misclassified one failed company and 5 non-failures.
The mean of the failed group was -0.003 and that of the matched
sample 0.063.
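Since the coefficients and cut-off are quoted in full, the Lis function can be applied directly. A minimal sketch (the parameter names are illustrative; the 0.037 cut-off is the one selected above):

```python
def lis_z(wc_ta, ebit_ta, re_ta, nw_td):
    """Lis (1972) UK Z-score.

    wc_ta   -- working capital / total assets
    ebit_ta -- earnings before interest and tax / total assets
    re_ta   -- retained earnings (adjusted for scrip issues) / total assets
    nw_td   -- net worth / total debt
    """
    return 0.063 * wc_ta + 0.092 * ebit_ta + 0.057 * re_ta + 0.0014 * nw_td


def classify(z, cutoff=0.037):
    """Scores below the selected cut-off are treated as potential failures."""
    return "at risk" if z < cutoff else "solvent"
```

At the group means quoted above the classification behaves as expected: the failed-group mean of -0.003 falls below the cut-off and the continuing-group mean of 0.063 above it.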

Comment. The Lis study demonstrated, possibly for the first time, that such
an approach is applicable in environments outside the US⁵ and served to
stimulate further work in this area in the UK. Although reportedly this
model was used subsequently, particularly in the investment community, no
analysis of its actual performance in practice has been published.

²See Eisenbeis and Avery (1972) or Altman et al. (1981) etc. for good reviews.
³The convention of appending the dates of completion of the different models to the authors'
names is used throughout the paper.
⁴Altman (1983, ch. 3) interestingly develops a model with identical ratios for the analysis of
private companies in the US environment.
⁵Altman (1983, ch. 12) provides no earlier multivariate references.

2.2. Taffler, 1974

This, the next study chronologically, was completed in August 1974 and
received exposure in the financial press at that time [e.g., Taffler (1976)].
Taffler (1982) provides a full discussion including tests of its ex-ante
predictive ability. Where this study differs most from earlier US work is to
distinguish in the construction of the non-failed group between healthy live
companies and those with characteristics more similar to previous failures.
Taffler's initial non-failed sample of 61 firms thus was culled of 16 firms
which conventional financial analysis suggested might be problematical.⁶ The
23 bankrupt companies, predominantly manufacturers, were all those failing
between 1968 and 1973. No attempt was made to match the two groups by
industry, size or financial year.
Three classes of discriminant variable were developed: conventional ratios,
4-year trend measures and funds statement variables. However the latter
were too volatile for meaningful analysis and the trend measures added very
little to the power of the discriminant model. Analysis thus focused on a set
of 50 financial ratios which were transformed appropriately and winsorised
[Lev and Sunder (1979)] to improve univariate normality, which is a
necessary condition for multivariate normality.⁷
Using principal component analysis (PCA) to help avoid multicollinearity
problems⁸ a stepwise LDA produced a model consisting of the following five
variables:

X1 = earnings before interest and tax/opening total assets,
X2 = total liabilities/net capital employed,
X3 = quick assets/total assets,
X4 = working capital/net worth, and
X5 = stockturn.
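The winsorising step mentioned above can be sketched as follows. The 5th/95th percentile limits and the nearest-rank percentile rule are assumptions for illustration only; the paper does not state which limits Taffler used:

```python
def percentile(ordered, pct):
    """Nearest-rank percentile of a pre-sorted list (a simple sketch rule)."""
    n = len(ordered)
    k = max(0, min(n - 1, int(round(pct / 100.0 * (n - 1)))))
    return ordered[k]


def winsorise(values, lower_pct=5.0, upper_pct=95.0):
    """Clamp extreme ratio values to chosen percentile limits so that
    outliers no longer dominate the distribution of each ratio."""
    ordered = sorted(values)
    lo = percentile(ordered, lower_pct)
    hi = percentile(ordered, upper_pct)
    return [min(max(v, lo), hi) for v in values]
```

Clamping the tails in this way bounds each variable above and below, which is consistent with the advice in footnote 7 that the linear discriminant function suffers least from non-normality when its inputs are bounded.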

The cut-off was set taking into account prior probability odds of 1:10
(failed:solvent) for the population and on this basis the Lachenbruch U-test
[Lachenbruch (1967)] indicated 1 type I error and no type II errors.⁹ The
first two variables contributed most to the model. A justification for the use
of 'good' as opposed to 'continuing' companies for the non-failed sample was
provided by substituting the full 61-firm set for the reduced-size good
⁶Altman and Loris (1976) take a related approach.
⁷The recent study of Frecka and Hopwood (1983) provides empirical support for this
approach. Lachenbruch et al. (1973) demonstrate that the linear discriminant function suffers
least from non-normality when the constituent variables are bounded above and below and
advise data transformation to approximate normality prior to fitting a model.
⁸Whether multicollinearity is likely to prove a serious problem or not in the application of
such models is still being debated in the literature [e.g., see Pinches (1980, pp. 432-434)].
⁹Taffler argues that subsequent events call into question whether Rolls-Royce, the type I error,
was actually insolvent at the date of appointment of the receiver.

company sample of 45, when the model developed suffered an increased error
rate and a less distinct group separation.
Taffler tested for multivariate normality, which did not hold for the good
sample, and pointed to the lack of robustness of the Bartlett-Box test for
equality of dispersion matrices in such conditions. After a detailed review of
the statistical literature and related empirical studies he was forced to
conclude that for sound empirical reasons a quadratic approach is rarely
likely to prove superior to the conventional linear model, at least where the
performance of the model on data other than that from which it is derived is
the prime concern.
Application of the Z-model to the failed sample for prior years showed 9
of the 23 companies appearing sound on the basis of their penultimate
accounts and only 8 having failure characteristics 4 years before failure.¹⁰
However more interesting were the results from applying the model to the 33
quoted manufacturing firms identified as going bankrupt between 1974 and
1976 of which 4 (12.1%) were misclassified, a result comparable with similar
validation tests of the 1968 Altman Z-model [Altman and McGough (1974),
and Altman et al. (1977)]. In 1975 10.7% of the 1257 quoted industrial
companies with financial statements available on the Jordan Dataquest
computerised database had potential failure or ‘at risk’ Z-scores. Taffler used
these results to suggest the probability of failure given an at risk profile in
the next year was 20.5% and given a solvent profile only 0.34%. Following
the arguments inter alia of Morrison (1969), and Joy and Tollefson (1975) the
model was redeveloped adding the new failures and a small number of
solvent concerns to the original data. However the results proved
disappointing, with 40.4% of the 52 failed firms being misclassified by the
model on a straight resubstitution basis.
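The quoted conditional probabilities follow from Bayes' rule. A sketch, assuming the 12.1% validation error above is read as the miss rate on failures, 10.7% as the fraction of the population flagged at risk, and an annual population failure rate of about 2.5%; the last figure is an inference, not stated in the paper, but it reproduces the quoted 20.5% and 0.34%:

```python
def p_fail_given_flag(sensitivity, prior_fail, frac_flagged):
    """P(fail | at risk profile) = P(at risk | fail) * P(fail) / P(at risk)."""
    return sensitivity * prior_fail / frac_flagged


def p_fail_given_clear(sensitivity, prior_fail, frac_flagged):
    """P(fail | solvent profile) = P(clear | fail) * P(fail) / P(clear)."""
    return (1 - sensitivity) * prior_fail / (1 - frac_flagged)


# Figures from the text: 10.7% of firms at risk, 12.1% of failures missed.
# The ~2.5% annual failure rate is an assumed/inferred input.
p_at_risk = p_fail_given_flag(1 - 0.121, 0.025, 0.107)   # ~0.205
p_clear = p_fail_given_clear(1 - 0.121, 0.025, 0.107)    # ~0.0034
```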

Comment. Taffler's concern in his paper was as much to examine a number
of the statistical and conceptual issues involved as to develop an operational
Z-score model. That this model was used to the apparent satisfaction of the
clients of the stockbroking firm for which it was developed for some 7 years
subsequently¹¹ may suggest Taffler was overly concerned with theoretical
issues in assessing the model's intertemporal stability and related questions
and ignored the practitioner's ability to make rule-of-thumb adjustments to
ratio definitions to handle changes in the economic environment etc.¹²
¹⁰Taffler argues that this is not a predictive ability test, as is usually considered, nor does
earlier year data constitute a holdout sample. This is because the sample of failed firms patently
did not go bankrupt, the implicit event of interest, until after their last accounts were published
and thus strictly speaking were of non-failed status in previous years.
¹¹Interestingly it appears that the service finally ceased not primarily through operational
problems but because 'the style of stockbrokers' research does not lend itself particularly well to
Z-score analysis' (Financial Weekly, 24.6.83, p. 4).
¹²Similar arguments may apply to the continuing success of Altman's 1968 model in practical
application despite the data from which it was derived spanning the period 1946 to 1965. In this
context we may also note the generally very robust properties of linear models as a class [e.g.,
Dawes and Corrigan (1974), Ashton (1979)].

2.3. Tisshaw, 1976

The motivation of this study was the concern that because of believed
differences in the financial characteristics between quoted companies with
access to the capital markets and the generally smaller unquoted concerns,
the development of a separate and distinct Z-model was necessary to analyse
privately owned manufacturing companies. Tisshaw’s failed sample consisted
of 31 of the larger privately owned manufacturing companies failing in the 18
month period to June 1976, and each of these was loosely matched by size,
industry and year end with two 'healthy' live companies on the Jordan
Dataquest database to provide the solvent group.¹³,¹⁴ Using a conventional
stepwise LDA approach and a carefully selected transformed financial ratio
set, a model consisting of the following 5 variables [with Mosteller-Wallace
(1963) percentage contribution of each variable to the power of the model
provided in brackets] was determined:

X1 = earnings before interest and tax/average total liabilities (29.8),
X2 = profit before tax/sales (22.2),
X3 = net capital employed/total liabilities excluding deferred tax (16.8),
X4 = quick assets/net capital employed (16.4), and
X5 = the acid test (14.8).

Resubstitution using a cut-off equidistant between group centroids led to
only one type I and one type II error. However, Tisshaw advocated adjusting
the cut-off in practical application for prior probability odds of 1:5 (potential
failures:sound concerns in the population). 22% of the approximately 2,000
unquoted industrial companies in the Jordan Dataquest population in 1976
had Z-scores below the resulting cut-off, with a further 11% in a 'grey area'
defined as that region of the Z-scale between the adjusted and unadjusted
cut-offs.
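The cut-off adjustment and grey area can be sketched as follows. The shift rule used here is the standard LDA result (an assumption; the paper does not reproduce Tisshaw's arithmetic): moving from equal priors to unequal priors shifts the midpoint cut-off by a term proportional to the log of the odds ratio. The scale factor and the illustrative midpoint of 0 are also assumptions:

```python
import math


def adjusted_cutoff(mid_cutoff, prior_failed, prior_solvent, scale=1.0):
    """Shift the equal-priors midpoint cut-off for unequal prior odds.
    `scale` stands in for sigma^2 / (mu_solvent - mu_failed) on the
    Z scale; it is set to 1.0 here purely for illustration."""
    return mid_cutoff + scale * math.log(prior_failed / prior_solvent)


def classify(z, cutoff_adjusted, cutoff_mid):
    """The grey area is the band between the adjusted and unadjusted cut-offs."""
    if z < cutoff_adjusted:
        return "potential failure"
    if z < cutoff_mid:
        return "grey area"
    return "sound"
```

With odds of 1:5 the log term is negative, so the adjusted cut-off sits below the midpoint and fewer firms are labelled potential failures, exactly the direction of adjustment described above.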

Comment. Tisshaw's work clearly demonstrated that it was possible to
analyse the much maligned accounts of unquoted companies in the UK in a
meaningful manner using Z-score techniques, showing prima facie these have
information content for the user.

¹³Tisshaw details the problems of definition of failure in the case of private companies and
points out 'that many companies that do fail disguise the failure by either being taken over,
amalgamating, reconstructing or changing in some way, usually leaving the creditors partly paid
or deferred'. The unsatisfactory nature of UK insolvency law and the need for change has
recently been made explicit in the Cork Report (1982).
¹⁴The mean delay in filing the last accounts of the failed sample (11.5 months) differed
significantly (at α = 0.05) from the average time for the continuing company sample (9.0 months)
and the mean time from year end to failure was 23.3 months with standard deviation 9 months.
Interestingly these results may not be totally consistent with the hypothesis that private
company data lacks timeliness, particularly in the case of companies experiencing problems.

2.4. Taffler, 1977

This UK model, which has received the most exposure and testing to date,
was first described in Taffler and Tisshaw (1977) in a study of auditor
behaviour in the going concern decision area.¹⁵ However discussions here
will draw primarily on Taffler (1983a and b) which are more concerned with
tests of the model’s true ex ante predictive ability and the results from its use
in practice in the several years subsequent to its development.
The failed sample consisted of the 46 manufacturing firms quoted on the
London Stock Exchange failing in the 8 year period to the end of 1976
which met certain criteria to ensure reliable source data. The solvent firm set
was matched on a 1:1 basis by industry and size, but not by year, with the
latest year available being used. Five of the original sample were screened
out as being financially problematical and replaced with firms financially
more sound, to ensure distinct groups. A list of 80 financial ratios was
developed and appropriately treated to improve normality. PCA was also
undertaken to aid formulation and interpretation of the resulting model.
Using a conventional stepwise linear discriminant package the following
model, which was parsimonious with respect to its input requirements, could
be readily interpreted and performed well, finally resulted, with Mosteller-Wallace
percentage contribution measures in brackets:

X1 = profit before tax/average current liabilities (53),
X2 = current assets/total liabilities (13),
X3 = current liabilities/total assets (18), and
X4 = the no-credit interval (16).¹⁶

The constant term was adjusted for an odds ratio of 1:7 and such that in
practical application the cut-off was 0. The Lachenbruch U-test indicated one
type I error¹⁷ and no type II errors. Through reference to the principal
component analysis the ratios were interpreted as measuring respectively
profitability, working capital position, financial risk and liquidity.
Table I summarizes what happened in the 6 years subsequent to model
development to the 115 out of 825 (14%) listed manufacturing companies on
the EXSTAT database at risk (Z < 0) as at the end of 1976.¹⁸ It will be
¹⁵They found inter alia only 22% of their 46 company sample of failed firms were qualified on
this basis in their last accounts prior to failure, which compares with 44% in the case of 34 firms
in Altman and McGough (1974) and 48% for the 109 such firms between 1970 and 1982
discussed by Altman (1983, table 7-6, p. 220). Recent behaviour suggests the UK auditor is no
less reluctant to qualify despite the very changed economic environment and failure rate.
¹⁶This measures the number of days the company can continue to trade if it can no longer
generate revenues. Fadel and Parkinson (1978) discuss the ratio.
¹⁷This was again Rolls-Royce. See footnote 9 supra.
¹⁸This is the UK equivalent of COMPUSTAT and is provided by Extel Statistical Services
Ltd. It now covers all quoted UK industrial and distribution corporations as well as many
unlisted and overseas enterprises and is updated weekly. Data series in most cases start in 1971.
At the beginning of 1977, however, coverage was more restricted.

Table 1
Companies with at risk Z-scores at the end of 1976: A summary of
subsequent events.ᵃ

                                                No. of      % of
Event                                           companies   companies

Financial distress:
  Receivership                                   17          15
  Going concern, government, bank or other
    emergency support                            12          10
  Acquisition etc. as alternative to bankruptcy  13          11
  Major closures and disposals                    8           7
                                                 50          43
Still at risk (Z ≤ 0)                            33          29
Recoveries (Z > 0)                               32          28
                                                115         100

ᵃA detailed definition of these categories is provided in Taffler (1983b,
footnote 23).

observed that no fewer than 43% had failed in some sense and a further 29% were
still at risk at the end of 1982 (many of these recovering temporarily then
collapsing again). Less than 3 out of 10 appeared to have effected a
permanent recovery. The results from this 'proof is in the eating' approach
[Altman (1978)] are encouraging.¹⁹ That 41 of the 42 listed manufacturing
and construction companies on EXSTAT entering into receivership
subsequent to the model development were correctly classified as 'at risk' on
the basis of their last accounts prior to failure provides an additional
validation test.²⁰
Table 2 shows the percentage distribution of negative Z-scores prior to
failure. The average time between the date of publication of the last accounts
and the appointment of a receiver was eight months and as such an average

Table 2
The percentage distribution of number of at risk Z-scores prior
to receivership (42 companies).

No. of years of negative Z-scores    ≥4    ≥3    ≥2    ≥1
Percentage of firms                  29    41    67    98

¹⁹A comparison may be made with Deakin's (1977) results. See also Altman's (1983, ch. 4, pp.
155-156) comment on the latter.
²⁰These results may be compared with the similar performances of the ZETA model [Altman
(1983, ch. 4, pp. 144-147)] which interestingly enough are better than with the original sample
data!

lead time of 3 1/4 years is provided by the Z-score first going negative.
However note such an analysis does not constitute a test of the predictive
ability of the model, as is conveniently rather loosely argued, but of its
classificatory ability. An at risk Z-score is not a prediction per se of failure
within a specified time frame but a description of a company as having a
financial profile more similar to a group of previous failures, which is a
necessary but not sufficient condition for financial distress, than to a group
of sound firms. In evaluating the operational utility of such a function the
percentage of the company population of interest labelled 'failing' is just as
important as ex ante tests of the model's misclassification probabilities.²¹
Taffler in fact shows that in 1980 the conditional probability of companies
with at risk Z-scores at the beginning of the year failing during the year was
no less than 0.33 compared with the conditional probability of companies
not failing given a solvent profile of effectively 1.0, both figures significantly
different to the null positions (5% and 95%) at better than α = 0.001. He
concludes 'the derived function would appear to exhibit true ex ante
predictive ability where the events predicted are the financial distress or
otherwise of a company within the next year'.
Taffler also describes a new development of the Z-score approach to
forecast the actual likelihood of a company failing in the next year given an
at risk profile, termed the 'risk index' or 'Z-score of Z-scores'. This consists
of a linear additive weighted composite [Dawes and Corrigan (1974)] of
three factors: how low the company's Z-score is, the number of years at risk
and the trend in Z,²² each measured along a five point scale. Table 3 shows
the risk index distribution of negative Z-score firms as at the end of 1982
compared with the risk index distribution of the 41 companies entering into
receivership. The third line shows the cumulative probabilities derived from
these two rows using Bayesian analysis.

Table 3
Risk index statistics.

Risk index (i)                         1     2     3     4     5

Percentage of live, at risk firms
  on EXSTAT                           30    30    17    16     7
Percentage of receiverships            7    12    25    27    29
Cumulative probability of failure
  in the next year P(F | RI ≥ i)ᵃ   0.33  0.44  0.67  0.80  0.90

ᵃF denotes the event failure and RI the risk index.
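The risk index itself can be sketched as the linear additive composite described above. The equal weights and the rounding back onto a 1-5 index are illustrative assumptions only; the actual weights are not given in this excerpt:

```python
def risk_index(z_depth, years_at_risk, z_trend, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three subscores, each already coded on a 1-5 scale:
    z_depth       -- how far below zero the Z-score lies
    years_at_risk -- number of years the firm has had an at risk Z-score
    z_trend       -- direction/steepness of the Z-score trend
    Equal weights are an assumption made purely for illustration."""
    for s in (z_depth, years_at_risk, z_trend):
        if not 1 <= s <= 5:
            raise ValueError("subscores are coded 1 (least risk) to 5 (worst)")
    composite = (weights[0] * z_depth + weights[1] * years_at_risk
                 + weights[2] * z_trend)
    return round(composite)  # map the weighted composite back to the 1-5 index
```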

²¹In the case of this model the percentage of companies on EXSTAT at risk had doubled to
22% by the end of 1982 compared with 1977.
²²An analogy with a Markov chain process with absorbing state, or gambler's ruin paradigm,
may be drawn [e.g., Scott (1981)].

Taffler (1983b) concludes his paper with a detailed discussion of why the
Z-score technique works so well and argues that because of the cognitive
difficulties confronting the financial statement user, only a formal model such
as the Z-score approach can provide efficient and unbiased processing of the
complex information set presented by a company's accounts.²³

Comment. Watts (1981) censures the preliminary Taffler and Tisshaw (1977)
paper on a number of grounds. Watts's criticism of the paired sample
selection procedure is valid, although his query regarding potential industry
heterogeneity affecting the power of the function, albeit theoretically
reasonable, has not been borne out in practical application of the model to
date.²⁴ Watts's argument about excluding differential misclassification costs
from the function deserves more detailed discussion. Taffler (1983b, footnote
29) argues that 'only decisions taken can have a cost not the output from an
information model which is only one input to the decision process'. Since no
decision maker is ever likely to act on the basis of a Z-score alone he
believes the use of a loss ratio [Altman (1983, ch. 5)] relating to decision
error costs in the manner of Altman et al. (1977) is incorrect. If a Z-score
value only contributes say 20% to how a particular decision is reached, for
example whether or not to advance credit, then the appropriate differential
misclassification cost ratio used in the cut-off computation should be only
1/5 the actual decision error cost ratio.²⁵

2.5. The Bank of England Model, 1979


The development of this model for the Bank of England is described in
Marais (1979) - henceforth referred to as M, and Earl and Marais (1979) -
E&M. The Bank’s experience is assessed in an unsigned article in the Bank
of England Quarterly Bulletin (1982) - BEQB. M and E&M sought to
justify the development of a new model on the grounds of alleged poor
performance of earlier models. To this end they tested ersatz versions of
Beaver’s (1966) best ratio (cash flow/total debt) and the functions of Deakin
(1977) and Taffler, 1977 on their sample data.²⁶ Needless to say the

²³A transformation of the model to measure company performance throughout the whole
performance spectrum, the PAS (performance analysis)-score, also described in the study, is
discussed in section 4.1 below.
²⁴This model can however be criticised justifiably from a theoretical vantage point for having
construction companies excluded from the formulation samples whereas it is being applied
apparently successfully to analysing this sector in practice.
²⁵In the case of this particular model it can be argued that because of its track record with an
'inappropriate' 1:1 misclassification cost ratio, in practice incorporating a more realistic measure
would only serve to increase the number in the population apparently at risk with no reduction
in type I errors (which in any case are minimal to date).
²⁶In the case of Beaver their ratio definition differed from his, with cash flow calculated as profit
before tax and depreciation not net income and depreciation. The two multivariate models were

misclassification rates on a resubstitution basis of these three men of straw
were both greater than the fitted Bank of England model and even its best
single ratio.²⁷
38 quoted companies drawn from the manufacturing and distribution
sectors but excluding construction, failing between 1974-1977, constituted
one sample. The non-failed set of 53 concerns was randomly sampled from
the Datastream database²⁸ 'stratified over the period 1973-1977 to average
out any short-term cyclical effects' (M, p. 14). 47 conventional ratios and 12
constructed from the sources and uses of funds statement were the financial
variables used and a linear probability function approach employed. The
preferred model resulting consisted of the following variables:

X1 = current assets/gross total assets,
X2 = 1000/gross total assets (in £'000),
X3 = cash flow (profit before tax + depreciation)/current liabilities, and
X4 = funds flow (funds generated from operations - net increase in working
     capital)/total liabilities (long-term debt + current liabilities).
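The four inputs can be computed directly from the accounts. A sketch only: the field names of the `accounts` mapping are invented for illustration, and the fitted weights of the linear probability function are not published here, so only the ratios are shown:

```python
def boe_inputs(accounts):
    """Compute X1-X4 of the Bank of England model from account items.
    `accounts` keys (illustrative names, with gross total assets in £'000):
    current_assets, gross_total_assets, profit_before_tax, depreciation,
    current_liabilities, funds_from_operations, change_in_working_capital,
    long_term_debt."""
    total_liabilities = (accounts["long_term_debt"]
                         + accounts["current_liabilities"])
    return {
        "X1": accounts["current_assets"] / accounts["gross_total_assets"],
        "X2": 1000.0 / accounts["gross_total_assets"],  # size measure
        "X3": (accounts["profit_before_tax"] + accounts["depreciation"])
              / accounts["current_liabilities"],        # cash flow
        "X4": (accounts["funds_from_operations"]
               - accounts["change_in_working_capital"])
              / total_liabilities,                      # funds flow
    }
```

Note how X2 rises as gross total assets fall, which is consistent with the observation below that the smaller failed firms averaged 0.4114 on this measure against 0.1045 for the non-failed group.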

Although no formal tests of significance of the constituent ratios were
conducted, their relative importance in descending order was viewed as: X3
(profitability), X2 (size), X1 (liquidity) and X4 (funds flow). The two samples
differed very significantly in size, with the average value of X2 for the failed
set 0.4114 and for the non-failed group 0.1045; the mean values for the two
groups on X1 (current assets/total assets) were almost identical. Using a cut-off
score approach resubstitution provided 1 type I and 5 type II errors.
However 4 of the 5 type II errors were not considered true misclassifications
(M, p. 24). Applying the derived model to the data for the two previous years
led to 3 'failed' firms being classified as 'non-failed' and 8 vice versa in year
t-1, and 9 and 2 respectively in year t-2. Testing the model on the 10 industrial
company failures in 1978 and 19 non-failed firms including a number 'known
to be experiencing financial difficulty', 1 failed firm (British Leyland) was
misclassified along with 9 (M, p. 25) or 10 (E&M, p. 22) of the 19 non-failed.
However once again the large number of type II errors were explained away
as not being real misclassifications (M, p. 24).

'approximated' by fitting functions with ratios loosely based on those used in their namesakes
to the Bank of England dataset, with similar definitions for 4 out of the 5 ratios in the 'Deakin
model' and 2 out of the 4 measures in the 'Taffler model'. No attempt was made to emulate the
quadratic formulation of the former.
²⁷'(Our model) appears to improve on earlier models in the UK context' (E&M, p. 26).
Although papers describing the actual performance and validation results of both the Taffler,
1974 and 1977 models are referenced by Marais, the authors preferred to use their own figures.
Criticism of such a nature apparently is not restricted to the UK. See for example Altman's
(1978) reply to Moyer (1977).
²⁸This database is described in section 2.7 below.

Comment. The interest to the analyst in this study lies in four areas: (i) it
demonstrates how such techniques have now been tried by central banks,
(ii) it illustrates the vital importance of using validation samples in tests of
model efficiency and the misleading conclusions that can arise from the
resubstitution approach, (iii) the Bank of England model highlights the need
for the analyst always to consider the percentage of the population labelled
failing and to minimise this if his model is to have operational utility, and (iv)
it demonstrates how much care needs to be taken in developing such models
if they are to be used in practice. We may speculate the poor performance of
the model may have something to do with inter alia: (i) the use of continuing
firms as opposed to 'healthy' enterprises for the non-failed sample, which
would appear to be incorrect at least in the UK,²⁹ (ii) the pooling of
manufacturing and distribution firms in one model, which again is likely to
prove problematical in the UK environment as these have quite different
financial characteristics,³⁰ and (iii) the selection of X2, the size measure, by
the stepwise procedures, apparently reflecting the fact that the non-failed
firms were sampled from a large firm database³¹ whereas the failed set
appeared to be drawn from the whole population. The analyst should avoid
the implicit syllogism in the BEQB paper that because their particular model
does not work very well the Z-score technique itself is not very useful.³²

2.6. Mason and Harris, 1978


The motivation of Mason and Harris (1979) in developing a model
specifically for the identification of construction companies in danger of
failure was the concern that in the UK, at least, contracts often tend to be
awarded on the basis of price without adequate consideration of the
contractor's solvency and thus his ability to complete the work.³³ 20
construction companies failing between 1969 and 1978 constituted the failed
set and the continuing sample consisted of 20 'particularly sound concerns
on a traditional financial ratio analysis basis' with 1976-1977 accounts used.

29Lachenbruch (1974) demonstrates that although the true error rates may not be seriously
affected by initial misclassification of the samples in a non-random manner (in the Bank of
England model a number of firms with failing characteristics are included in the non-failed
sample), the apparent error rates will be biased and yield too favourable an outlook.
30This issue is discussed in section 3.1 below in the description of a model for the analysis of
distribution firms.
31At the time the study was undertaken Datastream covered approximately the top 1000
quoted UK industrial companies, around 60% of the total in terms of number.
32The Bank now actually uses the Datastream model (described in section 2.7 below) in
practice albeit this also has operational difficulties.
33Another study in this sector has been undertaken by the City of Bradford Metropolitan
Borough Council to assist them in the selection of an approved list of contractors [Guy (1980)].
However, inadequate detail about this preliminary work is published to justify discussion here
save to note the fitting of a 9-variable linear discriminant model to a group of 15 failed and 14
non-failed contractors.

A list of 28 discriminant variables was developed and using conventional
stepwise LDA the following model was derived:

Z = 25.4 - 51.2X1 + 87.8X2 - 4.8X3 - 14.5X4 - 9.1X5 - 4.5X6,

where

X1 = profit before interest and tax/opening net assets,
X2 = profit before interest and tax/opening net capital employed,
X3 = debtors/creditors,
X4 = current liabilities/current assets,
X5 = log10 (days debtors), and
X6 = creditors trend measurement.34

The authors indicate the first 2 ratios measure profitability, X3 working
capital position, X4 financial leverage, X5 quick assets position and X6 trend,
and they conclude that since X1 and X2 appear important discriminators
and X3 and X5 add relatively little, short-term liquidity is less important
than more fundamental aspects of a firm's structure such as its earnings
ability. None of the 40 firms was misclassified on a resubstitution basis but
there were 4 type I errors in a validation sample of 11 failed enterprises
(36.3%). 58% of the total 31 failed enterprises had failing characteristics 4
years before failure.
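For the reader wishing to experiment, the reported function can be evaluated directly. The sketch below (Python) uses the coefficients given above and the creditors trend definition of footnote 34; the function names and the illustrative ratio values are hypothetical.

```python
def creditors_trend(c_t, c_t1, c_t2):
    """Creditors trend per footnote 34: (c_t + c_{t-1}) / 2c_{t-2} - 1,
    where c_t denotes creditors in year t."""
    return (c_t + c_t1) / (2.0 * c_t2) - 1.0


def mason_harris_z(x1, x2, x3, x4, x5, x6):
    """Mason and Harris (1979) construction-company Z-score.

    x1: PBIT/opening net assets     x2: PBIT/opening net capital employed
    x3: debtors/creditors           x4: current liabilities/current assets
    x5: log10(days debtors)         x6: creditors trend measurement
    """
    return (25.4 - 51.2 * x1 + 87.8 * x2 - 4.8 * x3
            - 14.5 * x4 - 9.1 * x5 - 4.5 * x6)


# Hypothetical ratio values, for illustration only:
z = mason_harris_z(0.08, 0.07, 1.1, 0.9, 1.9, creditors_trend(105.0, 95.0, 100.0))
```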

Comment.35 This model is of interest both to the potential user, who needs
to make solvency judgements in a particularly risk prone sector, and the
theorist. Although a number of methodological problems call into question
the authors' claims for this particular model per se, their work would appear
to validate the utility of Z-score techniques in this disparate industry despite
the analyst's concern about the heterogeneous nature of construction
companies.36 The model itself, however, appears prone to sample bias37 and
the high misclassification rate of the small validation sample is worrying.

2.7. The Datastream model, 1980


This model is provided to clients of the widely used Datastream plc. on-line
computerized financial information service. It was developed by Marais
subsequent to his work for the Bank of England and is documented in
Datastream (1980). Datastream stress its use primarily for screening.

34This is defined as (ct + ct-1)/2ct-2 - 1, where ct denotes creditors in year t, etc.
35A more detailed critique is provided in Taffler (1980). See also the authors' reply [Mason
and Harris (1980)].
36Specifically, can civil engineers, housebuilders, property developers, contractor plant hire and
construction companies etc., be pooled together and the same model applied to all these
activities?
37Forty-eight parameters were estimated on the basis of the small size samples, and the two
profitability ratios were correlated at 0.92. Two other correlation coefficients were also high.
These high correlations and the counter-intuitive sign of X1 call into question also the
interpretation of the percentage ratio contributions.
46 firms failing between 1974 and 1980 constituted the failed group and 60
non-failed firms were sampled in a similar manner to the Bank of England
study except that firms making consistent losses were replaced with others.
The variable set consisted of over 40 conventional financial ratios and several
others were derived from the flow of funds statement. PCA was also
conducted to highlight the dimensionality of the data set and aid variable
selection. The final derived model consists of the following variables:

X1 = (profit before tax + depreciation)/current liabilities,
X2 = the acid test (quick assets/current liabilities),
X3 = 'gearing' (total debt/net capital employed), and
X4 = stockturn.

The cut-off would appear to have been set by inspection using similar
arguments to those of the Bank of England study and resubstitution
provided 1 type I error (2.3%) and 6 type II errors (10%) on this basis.38
Year t-1 provided 8.7% subsequently failing firms with non-failed Z-scores
and 11.7% vice versa, and year t-2 27.3% and 16.7% respectively. A secondary
sample of 10 failed companies were all correctly classified. No Lachenbruch
U-test was conducted nor is any information provided on the significance or
relative importance of any of the constituent ratios.
A particularly interesting feature of the Datastream service is the
classification of 15 types of Z-score trend and the ability to use the
Datastream computer to search for companies not just within particular Z-
score ranges but also with particular trend characteristics. A description of a
typical company analysis is provided in section 4.2 below.
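The kind of combined range-and-trend screen described might be sketched as follows. This is purely illustrative: the actual Datastream taxonomy of 15 trend types is not reproduced in the source, and the company names, Z-score histories, cut-off and simple 'declining' rule below are all hypothetical.

```python
def is_declining(z_history):
    """A crude stand-in for one trend characteristic: each year's Z-score
    strictly below the previous year's."""
    return all(later < earlier for earlier, later in zip(z_history, z_history[1:]))


# Hypothetical companies with three years of Z-scores (oldest first):
companies = {
    "Alpha plc": [2.1, 1.4, 0.6],   # declining towards the cut-off
    "Beta plc":  [0.5, 0.9, 1.3],   # improving
    "Gamma plc": [1.0, 0.8, -0.2],  # declining, now below zero
}

# Screen: latest Z within a chosen range AND a declining trend.
at_risk_watchlist = [name for name, zs in companies.items()
                     if zs[-1] < 1.0 and is_declining(zs)]
```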

Comment. Because of the lack of information provided, it is difficult to appraise
the contribution of the Datastream Z-score model to the extant literature
or to assess fully its operational utility. However, many of the criticisms that
apply to the Bank of England model seem to be equally valid here with
apparently 40% of the 1350 or so listed companies to which the Datastream
model is applied currently possessing a 'failing' profile. The analyst may
again speculate that this results partly from the use of continuing rather than
healthy firms for the non-failed samples as well as the attempt to build one
model applicable across the full range of industrial, distribution, service and

38The type II errors were again not considered true misclassifications, it being argued
tautologically that ‘As the sample of non-failed firms was not restricted to healthy firms one
would expect to find a few weak firms in the sample, and it is these firms that have been
assigned low Z-scores’. [Datastream (1980, p. 13)].

transport sectors.39 In addition, probably because of the latter, it seems the
model is only measuring the profitability of an enterprise with the balance
sheet largely ignored. In fact an analysis of nearly 1000 companies indicates
that X1 correlates at 0.986 with the Datastream Z and as such can usefully
be used as a proxy.40 The case for a multivariate model in this instance then
is not proven on the available evidence. Concern also may be expressed that
a stockturn measure appears because of its heavy industry dependence,
particularly as between manufacturing, distribution and service sectors.41
Nonetheless, despite the potential problems of the swamping effect of the
40% of the population with 'failing' characteristics42 and the lack of adequate
balance sheet treatment, the Datastream model prima facie must have utility
judging by the number of people prepared to pay to use it.43

2.8. Betts and Belhoul, 1982 and 1983


These studies take an original reliability engineering perspective. Their
authors view the national economy as a large system with its component
parts economic entities or firms. Failure of these subsystems affects other
subsystems and also, depending on their size, the economy as a whole. Betts
and Belhoul argue that preventative maintenance in the sense of remedial
action to prevent failure is far better for the overall system than breakdown
maintenance - action taken by the receiver or liquidator. 'Physical
inspection' of all companies within the economy is not possible because of
the costs involved but they see the Z-score approach as an alternative.
Betts and Belhoul (1982) develop a Z-model consisting of the following
variables with variable ranking in terms of importance given in brackets:

X1 = profit before interest and tax/total assets (1),
X2 = quick assets/current assets (2),
X3 = current assets/net capital employed (4),
X4 = working capital/net worth (3), and
X5 = days creditors (5).

Their two samples were 26 quoted concerns failing mainly
between 1974 and 1977 and 131 'going concerns' sampled randomly from the
39Included are such diverse industries as oils, shipping, hotels, TV stations, laundries and
gambling etc.
40A decision rule that classified a company with a ratio value below 0.20 as 'failing' and vice
versa would apparently provide the same classification as the Datastream model 95% of the
time.
41Taffler (1982) even reports serious problems with this ratio within the manufacturing sector.
42Altman's ZETA model registered 20% of the 2600 companies on COMPUSTAT files as
'failing' according to Business Week (24.3.80) and the equivalent figure for the Taffler, 1977
industrial and Taffler, 1980 distribution company models (described in section 3.1 below) taken
together was 19.6% of the 1050 live companies on the EXSTAT tape used at the end of 1982.
43See section 5 below.

EXSTAT tape. A set of 26 potentially discriminating financial ratios was
derived for the two groups and a conventional stepwise linear discriminant
approach adopted to derive the model. No type I errors were registered and
only 5 type II. The computed F-statistic for the overall function was 47.11
and all variables were significant using a conditional deletion approach.
Applying the model to an end 1979 EXSTAT tape led to only 6.1% of the
1230 enterprises registering a failing profile which the authors considered to
be on the low side. Betts (1983) shows that applying the model to the last
available accounts of 22 recent failures provides 5 misclassifications.
Betts and Belhoul (1983) introduce ratio stability measures into their
model in the manner of Dambolena and Khoury (1980) and also a balance
sheet decomposition index [Lev (1974, ch. 4)]. Their failed set consisted of 50
firms, 25 actually bankrupt and the other 25 classified as suffering an
equivalent form of financial distress. The going concern group, 93 in number,
was randomly sampled from the EXSTAT tape in the manner of their earlier
study. 5 categories of measures were used: conventional financial ratios (29),
measures of stability (3 year standard deviations of each of the 29 ratios),
trend measures (8), the balance sheet decomposition index (1) and size (3).
9 separate models were derived and Lachenbruch U-test analysis
conducted. Attention was finally focused on the ‘best’ function consisting of
the following variables:

X1 = net profit/total assets,
X2 = total assets/total debt,
X3 = the acid test (quick assets/current liabilities),
X4 = SD (quick assets/current assets),
X5 = SD (working capital/net capital employed),
X6 = SD (days creditors), and
X7 = balance sheet decomposition measure.

SD denotes standard deviation. This provided type I errors of 4.3% and
type II errors also of 4.3%.

Comment. The context of the first model is original and the ratios selected
intuitively reasonable. This author would certainly concur with the need to
adjust the cut-off appropriately. However, the second model is probably of
more interest to the analyst, particularly the authors' comment on the
important contribution to the function made by the decomposition index
which mirrors the results of Lev (1971) and Moyer (1977). We look forward
both to further validation tests of this model in an operational context and
also to the results of further work using the decomposition variable. This
author, however, is fairly sceptical about the utility of a standard deviation
measure computed from 3 data items in practice.

2.9. El Hennaway and Morris, 1983

The stated objectives of this recent and detailed study were to test whether
the ‘predictive ability’ of failure prediction models might be improved by
deriving them from data several years prior to failure and to include general
economic and industry indicators.
Data was drawn from the 1955-1974 Department of Trade computerized
data bank of company financial statements. 44 manufacturing, construction
and distribution businesses failing between 1960 and 1968 constituted the
failed sample from which models were developed and a similar number of
‘sound’ companies, defined as such largely in terms of continuing high
profitability, the other group. The variable set used in the discriminant runs
consisted of 40 financial ratios which met data availability requirements,
were normally distributed after transformation and had high loadings in a
PCA, a stock market index as a general economic indicator and dummy
variables to represent three broad industry categories: manufacturing,
quarrying and construction and distribution.44 A large number of different
discriminant analyses were undertaken with results for various models
derived from data both five years and one year before failure reported.
The first year function derived from 88 companies was given by

Z = -6.17 + 11.43X1 + 14.07X2 + 0.55X3 - 1.57X4 + 0.98X5,

where

X1 = operating profit before depreciation/total assets,
X2 = long-term debt/net capital employed,45
X3 = current assets/total assets,
X4 = quarrying and construction industry dummy, and
X5 = distribution industry dummy.

The fifth year model, derived from 86 firms, had the form

Z = -4.86 + 13.50X1 + 3.11X2 + 4.80X3 - 0.97X4 + 0.68X5,

where

X1 = profit before interest and tax/total assets,
X2 = acid test,46
X3 = quick assets/total assets,
X4 = quarrying and construction industry dummy, and
X5 = distribution industry dummy.

44A cluster analytic approach was used to assign the 19 industries present in the data to
three 'meta-industries'.
45Transformed as 1/(3 + x), where x is the ratio value.
46Transformed as 1/(1 + x), where x is the ratio value.
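The two functions can be sketched as below (Python); the coefficients are those reported above, the footnote-45 transform is applied to the debt ratio and footnote 46 is read here as 1/(1 + x) applied to the acid test. The function names are hypothetical.

```python
def z_first_year(x1, debt_ratio, x3, quarrying_dummy, distribution_dummy):
    """First-year El Hennaway-Morris function. The long-term debt/net capital
    employed ratio is entered after the footnote-45 transform 1/(3 + x)."""
    x2 = 1.0 / (3.0 + debt_ratio)
    return (-6.17 + 11.43 * x1 + 14.07 * x2 + 0.55 * x3
            - 1.57 * quarrying_dummy + 0.98 * distribution_dummy)


def z_fifth_year(x1, acid_test, x3, quarrying_dummy, distribution_dummy):
    """Fifth-year function. The acid test is entered after the footnote-46
    transform, read here as 1/(1 + x)."""
    x2 = 1.0 / (1.0 + acid_test)
    return (-4.86 + 13.50 * x1 + 3.11 * x2 + 4.80 * x3
            - 0.97 * quarrying_dummy + 0.68 * distribution_dummy)
```

The industry dummies simply shift the score for quarrying/construction and distribution firms relative to the manufacturing base case.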

The importance of the constituent variables in each model was assessed by
four different methods which were uniform in highlighting the all-encompassing
power of the profitability ratio (X1) in all functions reported.
A large number of classification exercises were conducted using the
Lachenbruch U-test, cross-validation and inter-temporal validation
approaches leading to very high success rates in all cases. The authors
further claimed that introducing prior probability odds of 1:10 and 1:7 into
their models led to random sample correct classification rates of 98% and
above compared with a proportional chance model (S&85%). They also
suggested that the expected comparative cost would be very much lower even
were misclassifying a failed company as non-failed more expensive than vice
versa.47
El Hennaway and Morris conclude that their results compare favourably
with those of earlier UK models, that a multiple discriminant model derived
from data 5 years before failure can predict bankruptcy at least as well as a
model based on information one year before failure and that industry
membership is an important factor.

Comment. The authors are to be commended both on the statistical rigour
of their study and the care with which it was conducted. However, lack of
space prevented them from doing full justice to their work and as such we
are left with a number of questions. Specific queries relate to whether serious
bias has been introduced by selecting 'sound' companies on the basis of 'their
ability to maintain relatively high rates of return on capital employed'
(p. 210).48 It might well be that the classification models are effectively
distinguishing between low profitability and high profitability samples, not
failed and continuing enterprises, which could explain the apparently
unexpectedly good discriminant power, particularly of the fifth year models.
Other questions relate to the more philosophical issue as to how models
developed from earlier data can be interpreted, e.g., what does a 'failing'
Z-score generated by a model derived from data five years prior to failure
actually mean?49 Certainly it would have been helpful had the percentage of
the data bank population rated as 'failing' been provided to help the analyst
assess the operational utility of the functions derived in practice. A final
point relates to the industry dummies. In fact it is not clear at all that these
actually are significant discriminators in the models published.50 Nonetheless
the paper does generate a number of very interesting questions.

47Unfortunately the information to justify these claims was not presented.
48Although El Hennaway and Morris reference the Taffler, 1974 and 1977 models to justify
their sampling approach, in fact Taffler used a different sampling method that did not lead to
significant high profitability bias in his non-failed samples.
49See Altman (1983, ch. 3, pp. 151-153) for a discussion of such matters with reference to the
Deakin (1972) study.
50It would have been more helpful had the actual conditional deletion F-values been
published in their table 2, not just their rankings.

3. Two new studies


In this section two other Z-score models in operational use that address
issues pertinent to the practitioner are documented for the first time. The
first is for the analysis of distribution companies. The other, a revision of the
Tisshaw, 1976 model, for the analysis of unlisted manufacturing and
construction enterprises, uses a jackknife discriminant approach.

3.1. The Taffler, 1980 distribution model51


The motivation for the development of a separate distribution model is
that in the UK the characteristics of such enterprises differ significantly from
those of manufacturing concerns. As such constructing a discriminant
function from pooled samples of manufacturers and distribution enterprises
may well lead to a less efficient model because of the lack of discrete
groups.52
Sudarsanam (1981, chs. 6 and 7) conducted a detailed univariate and
multivariate examination of the characteristics of 570 quoted manufacturing
and approximately 200 distribution enterprises over a 6 year period with
data drawn from the EXSTAT tape. He found that although his 87 ratio set
appeared to be measuring very similar dimensions of financial information in
the two samples, as indicated by factor analysis, the means of no less than 58
of the ratios differed between the samples at better than α = 0.01. In addition,
despite the high degree of vertical integration of manufacturing and
distribution activities in some firms, linear and quadratic discriminant models
were able to classify correctly an average of 86.6% and 86.2% respectively of
the samples on a Lachenbruch U-test basis with priors set equal to sample
sizes. As such we would appear correct in treating the two broad classes of
enterprise separately.
The development of this model broadly followed the approach taken in the
earlier Taffler studies and similar criticisms may thus be applied. The failed
set of firms consisted of those 22 concerns identified as suffering financial
distress in the four years to the middle of 1978 augmented by 2 others which
failed some years earlier. Compared with the US the retail sector in the UK
is substantially less failure prone than manufacturing.53 As such there was an
inadequate number of cases of receivership and liquidation to develop a
model (only 15) and surrogate definitions of failure were used to increase the
sample size, viz. going concern qualification (6 cases), scheme of arrangement

51The assistance of A.E. Jituboh and G.M.A. Joseph in the development of this model is
gratefully acknowledged.
52As apparently in the pooled sample Bank of England and Datastream models discussed in
sections 2.5 and 2.7 above. Altman et al. (1977) dealt with any such potential problems in their
US study by appropriate analytical adjustments of their data.
53After all as Napoleon pointed out the British are a ‘nation of shopkeepers’!

(1 case) and forced divestment of most of the business for solvency reasons
(2 cases).54
The non-failed group was drawn from a random sample of 65 firms with
financial years ending in the 12 months to 30.4.78, and was reduced to 49 in
number to provide a distinct healthy firm sample. A related set of 83 ratios
was used and similarly transformed etc. and reified via PCA. The final linear
discriminant model resulting consisted of the following variables with
Mosteller-Wallace percentage contributions given in brackets:

X1 = cash flow55/total liabilities (34.0),
X2 = debt/quick assets (10.4),
X3 = current liabilities/total assets (44.2), and
X4 = no-credit interval (11.4).

The ratios, all significant at α = 0.05 or better using the conditional
deletion F-test, are interpreted as measuring respectively: profitability, debt
position, financial risk and liquidity. Prior probability odds of 1:7
(failed:solvent) and a misclassification cost ratio of 2:1 (type I:type II) were
used to establish a cut-off56 leading to one Lachenbruch U-test type I error
(4.2%) and no type II errors.
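The way priors and misclassification costs enter a cut-off can be illustrated with the standard Bayes-optimal boundary for two normal score distributions with common variance: the midpoint of the group mean scores is shifted by σ² ln[(ps·CII)/(pf·CI)]/(μf − μs). This is a textbook sketch, not necessarily the author's exact procedure; the group means and variance below are hypothetical, while the 1:7 priors and 2:1 costs are those used in the text.

```python
import math

def z_cutoff(mu_failed, mu_solvent, sigma2,
             prior_failed, prior_solvent, cost_type_I, cost_type_II):
    """Bayes-optimal cut-off (classify as 'failed' when z < cut-off), assuming
    normally distributed Z-scores with common variance sigma2 in each group.
    cost_type_I is the cost of labelling a failed firm solvent; cost_type_II
    the cost of labelling a solvent firm failed."""
    midpoint = (mu_failed + mu_solvent) / 2.0
    shift = sigma2 * math.log((prior_solvent * cost_type_II)
                              / (prior_failed * cost_type_I))
    return midpoint + shift / (mu_failed - mu_solvent)


# Priors 1:7 (failed:solvent), costs 2:1 (type I:type II); means and variance
# are hypothetical:
cut = z_cutoff(mu_failed=-2.0, mu_solvent=2.0, sigma2=1.0,
               prior_failed=1.0, prior_solvent=7.0,
               cost_type_I=2.0, cost_type_II=1.0)
```

With these figures the rarity of failure outweighs its higher error cost, so the cut-off moves below the unweighted midpoint and fewer firms are labelled failing.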
More interesting however is how the model works in practice. In the 3
years subsequent to its development there were 11 distribution company
failures on EXSTAT with failure defined as receivership or equivalent (4
cases), acquisition as a clear alternative to bankruptcy (3) or rescue
refinancing (4). In each case the company had an at risk profile on the basis
of its last available set of accounts. 19 going concern qualifications or
equivalent were also registered and each associated set of accounts provided
a negative Z-score. Currently 12% of the 250 or so live distribution
companies on file are at risk compared with only 7% in 1977 and 22% of
manufacturing etc. firms using the Taffler, 1977 function. The operational
utility of this model is thus demonstrated.57
A further test of the need for a separate distribution model is to run it
against the manufacturing data set on EXSTAT. On this basis only 9% of
firms with accounts published by mid-1981 were classified as at risk
compared with 19% applying the Taffler, 1977 model, and 6 bankrupt

54Altman (1983, ch. 11, p. 273) points out a minimum of 15-20 failures are required to make
modelling attempts feasible.
55This is defined as: retained profit (with deferred taxation and exceptional and extraordinary
items added back) + depreciation.
56The misclassification cost ratio was set on the basis of likely Z-score contribution to a
decision x decision error cost ratio in the manner discussed in the reply to Watt's critique of
Taffler, 1977 in section 2.4 above. For operational purposes the function constant was adjusted
to incorporate this explicitly and also so that a cut-off of 0 resulted.
57A risk index approach similar to that of the Taffler, 1977 model is also used with this
model.

companies were given a clean bill of health. We may conclude once again
that separate models are necessary in the UK.

3.2. Jackknife analysis


Jackknife analysis is a method for reducing bias in an estimator by means
of sample reuse and provides not just an unbiased, or nearly unbiased,
estimator but also a measure of its variance.58 The essence of the approach
is to partition out the impact or effect of a particular subset on an estimate
derived from the total sample. In the general case suppose we partition a
sample into k subsets (e.g., single cases), θ̂ denotes an estimator derived from
the total sample and θ̂(-i) is an estimator derived with subset i deleted
(i = 1, 2, ..., k). k 'pseudovalues', given by kθ̂ - (k-1)θ̂(-i), are then computed
and the jackknife statistic derived as the average of the pseudovalues:

θ̂(J) = (1/k) Σi [kθ̂ - (k-1)θ̂(-i)].

The pseudovalues are used to provide approximate confidence intervals for
the jackknife estimate and the significance of the latter is tested using
Student's t with k-1 degrees of freedom.
In discriminant analysis the standard function is first computed, then the
total sample is partitioned and separate functions derived from each of the k
different subsamples. Coefficient pseudovalues in each case are arrived at by
weighting and subtracting the calculated values from the estimates provided
by the total sample. Finally the jackknifed coefficients of the discriminant
function are computed by averaging the pseudovalues as above and their
associated standard errors derived.
Although there is similarity with the Lachenbruch U-method the purposes
are different. In the former case the concern is with the unbiased estimation
of classification error rates provided by a discriminant function. With
jackknife discriminant analysis, however, interest is with bias reduction in the
discriminant model itself and also the stability of its coefficients.
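The delete-one procedure described above can be sketched in a few lines (Python); the estimator and data here are illustrative, and in the discriminant case the estimator would return a coefficient rather than a mean.

```python
import math

def jackknife(estimator, sample):
    """Delete-one jackknife: returns the jackknife estimate and its standard
    error, computed from the k pseudovalues k*theta - (k-1)*theta_minus_i."""
    k = len(sample)
    theta_full = estimator(sample)
    pseudovalues = []
    for i in range(k):
        subset = sample[:i] + sample[i + 1:]  # delete subset i (a single case)
        pseudovalues.append(k * theta_full - (k - 1) * estimator(subset))
    theta_jack = sum(pseudovalues) / k
    variance = sum((p - theta_jack) ** 2 for p in pseudovalues) / (k * (k - 1))
    return theta_jack, math.sqrt(variance)


# Illustration with the sample mean (a linear estimator, so the jackknife
# estimate coincides with the ordinary one; the bias correction matters for
# non-linear estimators such as discriminant coefficients):
est, se = jackknife(lambda xs: sum(xs) / len(xs), [2.0, 4.0, 6.0, 8.0])
```

Approximate confidence intervals then follow from est ± t(k-1) × se, as in the text.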

3.3. The Taffler, 1982 jackknife unlisted model59

Experience with the Tisshaw, 1976, private company model in practice
suggested that respecification might be advantageous and the opportunity
was taken to explore the utility of the simple jackknife approach in such
applications as of interest here. The failed set of firms consisted of 39
privately owned manufacturing and construction enterprises located mainly
58The discussion is taken from Crask and Perreault (1977) who quote Tukey that the
technique is named after the boy scout's jackknife because of its versatility. Miller (1974)
provides a good review of the general jackknife approach.
59The author wishes to acknowledge the assistance of T.L. Nadebaum and P.S. Sudarsanam
in the development of this model.

with the help of lists provided by Trade Indemnity plc., the leading UK
credit insurers, and failing predominantly between 1978 and 1981. Only firms
meeting the lower size bound constraint of turnover above £2m at 1981
prices and failing not much later than 2 years after the end of the last
financial year for which accounts were available were considered to ensure
reliable source data from which to build a model. The non-failed firms
were sampled randomly from the Extel Unquoted Companies Service and
culled of enterprises appearing less than healthy. 56 companies with 1978 or
1979 financial years were used. A related set of ratios to the earlier Taffler
studies was used and similar statistical treatments applied. The variables of
the initial Fisher model fitted using a conventional approach were:

X1 = earnings before interest and tax/sales (EBIT/S),
X2 = debt/net worth (Debt/NW),
X3 = average creditors/cost of sales (ACr/COS), and
X4 = current liabilities/total assets (CL/TA).
This provided 2 type I (5.1%) and 2 type II errors (3.6%) using the
Lachenbruch U-test with a probability-cost ratio of 1.60
The jackknifed model was then fitted to the same variable set using 95
subsamples each with a different single case omitted, and the results
compared. The overall classification efficiency was identical and the
discriminant coefficients of similar magnitude to the Fisher function.61 The
apparent lack of bias in the original Fisher discriminant function may reflect
inter alia sufficient cases to reduce the risk of sample bias, the distinct and
well separated groups, and the small number of variables constituting the
function.
An examination of the recently established Performance Analysis Services
Ltd. computerised database of the top 1000 privately owned UK industrial
companies shows that of the 31 unlisted companies on it which could be
identified as bankrupt at the end of 1982 only 2 had solvent Z-scores. In one
case failure occurred 24 months after the end of the last financial year for
which published information was available and in the second, 30 months.
Currently around 25% of unlisted companies on the database have at risk
profiles.

4. The performance model transform and financial profile analysis


A recent development of the Z-score approach of utility to the analyst
follows from the recognition that the Z-function, after appropriate
transformation, provides a holistic measure of performance applicable
throughout the whole performance spectrum. Additional insight is also
gained by disaggregation of the component ratios of the model to provide a
strengths and weaknesses profile.

60Detailed consideration was given to the appropriate cut-off. Adjusting for prior probability
odds of 1:4, a priori viewed as realistic at the time the model was developed, would have
provided 4 type I and 1 type II Lachenbruch U-test errors. However, differential costs of
misclassification are ignored. The determining factor thus was an examination of the
Performance Analysis Services Limited unlisted company database at City University where a
cut-off of zero provided a manageable percentage of at risk companies and nominal type I
errors.
61The jackknifed Mosteller-Wallace variable percentage contributions were respectively 36%,
20%, 19% and 25%.

4.1. The PAS-score transform


The Z-score construct is essentially bifurcated in nature (it lies above or
below a cut-off) and identifies a firm as resembling more previous failures, or
non-failed enterprises. Although an ordinal measure, the Z-score is non-linear
and has no range limits so no ratio scale applies. Thus a Z-score of 2 can
only be viewed as better than a value of 1, not that it is twice as good. As
such it is strictly not valid to average Z-scores to provide industry means nor
interpret Z-score trajectories directly as is conventionally done. Care must
also be taken to recognise that, through economy-wide changes impacting on
the Z-scores of all firms, an unchanged Z-score from one year to the next
will not indicate the same degree of relative strength or weakness.
Providing the Z-function meets certain necessary and sufficient conditions
to be specified below, such interpretational problems may be overcome by a
simple transform of the Z-score into a ratio measure along a scale 0-100. All
firms for a particular year are ranked in ascending sequence and the
percentile in which the Z-score of the concern of interest lies is observed.
This provides its relative performance measure or PAS (performance
analysis)-score. Different calculations are undertaken for each year of data so
the company's PAS-score trajectory shows its relative performance over time.
The PAS-score, indicating the percentage of companies doing less well on
this holistic basis in any year, now provides an appropriate ratio scale for
measuring how much better or worse a company is performing compared
with earlier years and also with other firms. In addition industry means can be
derived appropriately.
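The transform itself is a simple percentile ranking within each year's population; a minimal sketch (Python, with a hypothetical population of one year's Z-scores):

```python
def pas_score(z, all_z_this_year):
    """PAS-score: the percentage of companies with a lower Z-score in the
    same year, i.e. the percentile in which the company's Z falls. A separate
    calculation is made for each year of data."""
    below = sum(1 for other in all_z_this_year if other < z)
    return 100.0 * below / len(all_z_this_year)


# Hypothetical Z-scores of the company population for one year:
population = [-3.0, -1.0, 0.5, 1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0]
score = pas_score(2.0, population)
```

Repeating the calculation year by year yields the PAS-score trajectory, and averaging PAS-scores (unlike raw Z-scores) gives meaningful industry means.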
Fig. 1 provides the PAS-score trajectory from 1976 to 1981 of the Weir
Group, a major Scottish engineering company, applying the Taffler, 1977
model, together with its industry average and the ‘solvency threshold’ where
the Z-score = 0. This latter, measuring the percentage of companies at risk
over time, varies with the state of the economy as indicated. The problems
confronting this business were only picked up very late in the day using
conventional techniques of financial analysis, and a belated emergency rescue
consisting of a large capital injection and new management was mounted in
April 1981 with consequent impact on the PAS-score as can be seen.
However, note the steady PAS-score decline from 1976 into the at risk region,
so that on the basis of the 1980 accounts only 3% of the company
population were doing worse and the risk rating was 3.62
62That the analyst needs to consider a company’s relative performance measure as well as its
absolute Z is highlighted from table 4 below by noting that Weir’s 1981 Z-score was lower than
in 1976 but its relative performance was 50% better. A similar approach is taken by ZETA
Services Inc. [see Altman (1983, figs. 4.2, 4.3 or 10.1)] where both ZETA and the relative percentile
are provided.
R.J. Taffler, Empirical models for monitoring UK corporations 221

[Figure: PAS-score (vertical axis) plotted against Year, 1976-81, showing (1) the industry average, (2) the Weir Group and (3) the solvency threshold.]

Fig. 1. The Weir Group PAS-score trajectory.

Shashua and Goldschmidt (1974) specify the following criteria for an
efficient performance index:

(i) it can be readily interpreted,
(ii) each of its constituent elements is consistent and readily interpretable,
(iii) each is independent and non-tautological,
(iv) each is monotonic and has the same partial correlation with the
performance index,
(v) the index has fixed scale limits, and
(vi) it has a linear relationship with company utility.

Considering as an illustration the Taffler, 1977 model, the PAS-score is
readily interpretable by virtue of the way it is derived and calibrated, and
each of its ratios measures a distinct and relatively uncorrelated facet of
company performance, as the principal component analysis indicates. In
addition each ratio has a monotonic relationship between its level and utility
and the partial correlations are of the correct sign.63 Finally the fixed scale
limits and the strict ratio calibration with linear relationship between the
63The partial correlation coefficients between the PAS-score (p) and each of the constituent
model ratios, for a random sample of 293 companies drawn from the EXSTAT file as at 30.5.81
based on their latest accounts, are:

r_p1.234 = 0.8884, r_p2.134 = 0.6361, r_p3.124 = -0.6876 and r_p4.123 = 0.5609,

where the other subscripts denote the ratios. The negative sign for the relationship between the
PAS-score and the current liabilities/total assets ratio reflects the point that the greater the ratio
the weaker the balance sheet.

percentile measure and company utility measured by its population relative


performance will be observed. The PAS-score transform and interpretation
may thus be justified on both theoretical and empirical grounds.

4.2. The financial strengths and weaknesses profile


A development of this approach is to examine the component parts of the
model on a related basis. Table 4 provides the financial profile of the Weir
Group, with the entries denoting the deciles in which the ratios in each year
lie, with 1 indicating the weakest 10% of the relevant EXSTAT population
and 10 the strongest 10%. The profitability improvement (ratio 1) to an
average position for British industry will be noted, as will the very strong
short-term liquidity measure (ratio 4) following the capital injection.
However, the PAS-score recovery is held back by a continuing very poor
working capital position (ratio 2) and the balance sheet weakness-financial
risk indicator (ratio 3).64
Table 4
Strengths and weaknesses profile: The Weir Group PLC.a

Year   Z-score   PBT/CL   CA/TL   CL/TA   NCI   PAS-score   Ind. PAS-score av.
1981      2.0       5        2       3      7       39             44
1980     -4.6       2        1       1      2        3             44
1979     -2.6       1        1       1      2        4             46
1978      0.5       2        2       2      4       14             48
1977      1.1       3        2       2      4       16             47
1976      2.3       3        2       3      5       26             50

aYear ends are to 31.12. The ratios are described in section 2.4.
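The decile entries in a profile like table 4 can be generated by the same population-ranking idea applied ratio by ratio. A minimal sketch, under the assumption (consistent with the table, where 1 is always weakest) that ratios such as current liabilities/total assets are ranked in reverse because a higher value means a weaker firm; the sample values are invented:

```python
def decile(value, population, higher_is_stronger=True):
    """Decile of `value` within `population`: 1 = weakest 10%, 10 = strongest 10%."""
    if higher_is_stronger:
        weaker = sum(1 for v in population if v < value)
    else:
        # e.g. CL/TA: the larger the ratio, the weaker the balance sheet
        weaker = sum(1 for v in population if v > value)
    return min(10, 10 * weaker // len(population) + 1)

# Hypothetical PBT/CL values for a 20-firm population, firm of interest at 0.30
pbt_cl = [0.05 * i for i in range(20)]   # 0.00, 0.05, ..., 0.95
d = decile(0.30, pbt_cl)                 # six firms lie below, so decile 4
```

Running this for each of the model's ratios, year by year, reproduces the row-by-row structure of the profile.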

5. The Z-score approach in practice

Following the US lead, Z-score analysis is now in quite widespread use in
the UK. However, because of the lack of published operational models and
the costs of model development, most users subscribe to one of the two main
commercially available services, Datastream and Performance Analysis
Services Ltd.65,66 The former provides a computerized on-line statistical
service and the latter a book service with commentary on company
prospects and a pre-programmed micro-computer.67

64A related approach is also taken by ZETA Services, Inc. [see Altman (1983, pp. 268-269)].
65The latter uses various of the Taffler models described in this paper.
66A commercial service using the Betts and Belhoul (1982) model has also recently been
established.
67This also provides facilities inter alia for Z-score financial modelling and a forecasting
routine for projecting next year’s Z-score and associated statistics.

In total we may estimate possibly 70-75 serious users of the technique in
the UK, of which around 30 subscribe to Performance Analysis and another
25 or so to Datastream (Financial Weekly, 24.6.83, p. 4), with the remainder
using models built in-house. Performance Analysis Services indicates currently about half
their subscribers are fund managers and investment advisors, with the other
users variously involved in credit and supplier analysis, corporate finance
and investment banking, accounting and lending banking. A client
breakdown for Datastream is not available; however, because of the nature of
the system we might expect a heavier weighting in the investment area. What
may be noted is that in the UK there has been to date less uptake among
commercial bankers and also professional accountants. This latter mirrors
Altman’s (1983, p. 140) US experience and similar explanations may well
apply.68
Practitioners in the UK stress that the Z-score approach is not a
substitute for conventional judgement but provides an additional analytical
tool and a screening device [e.g., Marais (1979)]. However both Dun and
Bradstreet and Performance Analysis combine qualitative judgement and
Z-scores together in a formal manner to take an overall view of a business.
The Tisshaw, 1976 formula provides an input to the Dun and Bradstreet
Dunscore which is used to monitor the performance and health of different
industries over time via Dun and Bradstreet’s computerised database of
company financial information.69 The results of such analyses are
documented regularly in D&B Credit News and more widely disseminated in
the press and public media. Using the output from the Tisshaw model to
provide a base-line or starting point analysts draw on typical D&B credit
reporting intelligence to modify the Z-score appropriately. The Dunscore is
finally generated by a transformation of this adjusted Z-score into a linear
scale with a range 0-15, which is interpreted appropriately.
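The precise Dunscore calibration is proprietary and not given in the source. Purely for illustration, a linear rescaling of an adjusted Z-score onto a 0-15 range between assumed population bounds might look as follows; the bounds and the function name are invented:

```python
def dunscore(adjusted_z, z_min=-6.0, z_max=6.0):
    """Map an adjusted Z-score linearly onto a 0-15 scale.
    z_min/z_max are hypothetical population bounds, not D&B's calibration."""
    # Clamp extreme Z-scores so the output stays within the 0-15 range
    clipped = max(z_min, min(z_max, adjusted_z))
    return 15.0 * (clipped - z_min) / (z_max - z_min)

score = dunscore(1.5)   # a moderately positive Z maps just above mid-scale
```

Any monotonic mapping would preserve the ranking of firms; a linear one additionally keeps equal Z-score differences equal on the Dunscore scale.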
Performance Analysis Services also use Z-score analysis as an integral part
of their Prognosis Service. Company Z-scores, risk indices, PAS-score
trajectories and financial strengths and weaknesses profiles are employed as
the prime tools in the measurement of company performance and the
management’s track record. Their analysts then go behind the figures to
judge whether management policies are going to make the PAS-score rise or
fall and, in the case of at risk firms, whether they are actually going to fail or
not. The resulting forecasts provide the basis for the Performance Analysis
‘recovery’ and ‘downside’ portfolios used by their investment clients
68For example perceived differential cost considerations in the case of the auditor going
concern decision [Altman (1983, ch. 7)]. However, the widespread exposure of the Z-score
technique in professional accounting magazines, standard student texts, in the examination
syllabuses of the professional bodies and even in their own publications [e.g., Westwick (1981)
and Argenti (1983)], will be noted.
69Interestingly they find the Tisshaw model developed for unlisted companies also valid for
larger quoted concerns.

(Financial Weekly, 24.6.83, p. 4). The arguments of Performance Analysis
Services to justify the effectiveness of this approach echo those of Treynor
(1976). Treynor believes that superior stockmarket performance is only likely
from investment ideas based on ‘unconventional, innovative research’ and
superior interpretation of publicly available information - which he terms
ideas that ‘travel slowly’, rather than through investment ideas ‘whose
implications are straightforward and obvious ... and consequently travel
quickly’.70

6. Conclusions and further work


In this paper we first described and evaluated the published UK Z-score
models and drew a number of lessons pertinent to practitioners in other
environments particularly the need for true validation samples in assessing
model performance and the requirement to guard against too great a
percentage of type II errors in a company population.
Next two models new to the literature, one for the analysis of distribution
companies, and the other for unlisted concerns, were briefly described and
their track records to date appraised. In the first instance it was shown that
unlike the experience of Altman et al. (1977) in the US, separate models for
manufacturing and retail companies need to be constructed in the UK. The
second new model is of interest in its adoption of the jackknife approach to
the development of the discriminant function and the practical utility of such
statistical methodology, particularly in the case of small sample sizes, was
highlighted.
Section 4 presented a brief review of difficulties conventionally met in the
UK in interpreting the output from Z-models and justified a simple
approach, also used by ZETA Services, Inc. in the US, to turn the bifurcated
Z-function into a performance model applicable throughout the whole
performance range. The use of this approach and associated analysis of a
company’s financial strengths and weaknesses profile disaggregating the
component parts of the Z-score model was also illustrated.
In the final section the extent and nature of use of the approach in the UK
was reviewed and the manner in which the Z-score is formally incorporated
as an input into the qualitative judgement process in two organisations was
described. Taffler (1981) terms this a man with model approach in contrast to
the man versus model arguments in much of the cognitive psychological
literature.
The resistance to the Z-score approach still in many quarters in the UK,

“Performance Analysis claims that their portfolios substantially outperform the market index
on a consistent basis. Altman (1983, chs. 9 and 10) describes a number of actual and potential
applications of Z-score type models by the equity investor and bond holder etc.

despite its demonstrable track record in practice, would appear to be largely
based on the misconception that it substitutes for the decision maker’s
intuition and experience rather than being complementary and enhancing.71
Analysts need to recognise that such a response to the important innovation
in credit and investment analysis techniques that Z-scores constitute is natural
and inevitable, and can best be overcome by showing decision makers
directly how Z-scores can help them in a practical manner in particular
decision situations. The academic literature is now replete with models but
lacking in applications,72 and for further progress of the Z-score technique
this situation needs urgently to be redressed.
Man tends to be an inferior ‘intuitive statistician’ [Slovic et al. (1977)], and
measurement is but one small part of a decision process. As such the
unbiased Z-score approach is best used as a measurement tool to help free
the decision maker to concentrate on those aspects of the decision task that
require his particular skill, experience and judgement.

71Such fears are not unique to the UK. For example Altman (1983, ch. 5) describes the lack of
use of such models by the field lending officer as ‘probably due to concern that the computerized
model might be a threat to the lending officer’s position and function and that nothing could
replace (his) ‘hands-on’ experience’.
72A notable exception is the application of Altman’s (1968) Z-score model to help manage a
company turnaround strategy [Altman (1983, ch. 6)].

References

Altman, E.I., 1968, Financial ratios, discriminant analysis and the prediction of corporate
bankruptcy, Journal of Finance 23, no. 4, 589-609.
Altman, E.I., 1978, Examining Moyer’s re-examination of forecasting financial failure, Financial
Management, Winter, 76-79.
Altman, E.I., 1983, Corporate financial distress: A complete guide to predicting, avoiding and
dealing with bankruptcy (Wiley-lnterscience, New York).
Altman, E.I. and B. Loris, 1976, A financial early warning system for over-the-counter broker
dealers, Journal of Finance 31, no. 4, 1201-1217.
Altman, E.I. and T.P. McGough, 1974, Evaluation of a company as a going concern, Journal of
Accountancy 138, no. 6, 50-57.
Altman, E.I., R.G. Haldeman and P. Narayanan, 1977, ZETA analysis: A new model to identify
bankruptcy risk of corporations, Journal of Banking and Finance 1, no. 1, 29-51.
Altman, E.I., R.B. Avery, R.A. Eisenbeis and J.F. Sinkey, Jr., 1981, Application of classification
techniques in business, banking and finance (JAI Press, Greenwich, CT).
Argenti, J., 1983, Predicting corporate failure, Institute of Chartered Accountants in England
and Wales, Accountants Digest no. 138.
Ashton, R.H., 1979, Some implications of parameter sensitivity research for judgment modelling
in accounting, Accounting Review 54, no. 1, 170-179.
Bank of England, 1982, Techniques for assessing corporate financial strength, Bank of England
Quarterly Bulletin, June, 221-223.
Beaver, W.H., 1966, Financial ratios as predictors of failure, Empirical research in accounting:
Selected studies, supplement to the Journal of Accounting Research 4, 71-111.
Betts, J., 1983, The identification of companies at risk of financial failure, Working Environment
Research Group, Report no. 5 (University of Bradford, Bradford).

Betts, J. and D. Belhoul, 1982, The identification of companies in danger of failure using
discriminant analysis, 7th Advances in Reliability Technology Symposium proceedings, April
(University of Bradford, Bradford).
Betts, J. and D. Belhoul, 1983, Applications of reliability analysis in the management of the
economy, 4th Eurodata conference, March (Venice).
Bolitho, N., 1973, How to spot insolvency before it happens, Investors Chronicle, 23 March,
1146-1147.
Cork, Sir K., ch., 1982, Insolvency law and practice: Report of the review committee, Cmnd.
8558 (H.M.S.O., London).
Crask, M.R. and W.D. Perreault, Jr., 1977, Validation of discriminant analysis in marketing
research, Journal of Marketing Research 14, no. 1, 60-68.
Dambolena, I.G. and S.J. Khoury, 1980, Ratio stability and corporate failure, Journal of Finance
35, no. 4, 1017-1026.
Datastream plc., 1980, The Datastream Z-score service: A technical review (Datastream plc.,
London).
Dawes, R.M. and B. Corrigan, 1974, Linear models in decision making, Psychological Bulletin
81, no. 2, 95-106.
Deakin, E.B., 1972, A discriminant analysis of predictors of business failure, Journal of
Accounting Research 10, no. 1, 167-179.
Deakin, E.B., 1977, Business failure prediction: An empirical analysis, in: E.I. Altman and A.W.
Sametz, eds., Financial crises, institutions and markets in a fragile environment (Wiley-
Interscience, New York, NY).
Earl, M.J. and D.A.J. Marais, 1979, The prediction of corporate bankruptcy in the UK using
discriminant analysis, Working paper 79/5 (Oxford Centre of Management Studies, Oxford).
Eisenbeis, R.A. and R.B. Avery, 1972, Discriminant analysis and classification procedures:
Theory and applications (D.C. Heath, Lexington, MA).
Fadel, H. and J.M. Parkinson, 1978, Liquidity evaluation by means of ratio analysis, Accounting
and Business Research 8, no. 30, 101-107.
Frecka, T.J. and W.S. Hopwood, 1983, The effects of outliers on the cross-sectional
distributional properties of financial ratios, Accounting Review 58, no. 1, 115-128.
Guy, P., 1980, The Bradford ‘Z-score’ formula, Public Finance and Accountancy, Aug., 32-33.
Joy, O.M. and J.O. Tollefson, 1975, On the financial applications of discriminant analysis,
Journal of Financial and Quantitative Analysis 10, no. 5, 723-739.
Lachenbruch, P.A., 1967, An almost unbiased method of obtaining confidence intervals for the
probability of misclassification in discriminant analysis, Biometrics 23, no. 4, 639-645.
Lachenbruch, P.A., 1974, Discriminant analysis when the initial samples are misclassified. II:
Non-random misclassification models, Technometrics 16, no. 3, 419-424.
Lachenbruch, P.A., C. Sneeringer and L.T. Revo, 1973, Robustness of the linear and quadratic
discriminant function to certain types of non-normality, Communications in Statistics 1, no.
1, 39-56.
Lev, B., 1971, Financial failure and informational decomposition measures, in: R.R. Sterling and
W.F. Bentz, eds., Accounting in perspective: Contributions to accounting thought by other
disciplines (South-Western, Cincinnati, OH).
Lev, B., 1974, Financial statement analysis: A new approach (Prentice-Hall, Englewood Cliffs,
NJ).
Lev, B. and S. Sunder, 1979, Methodological issues in the use of financial ratios, Journal of
Accounting and Economics 1, 187-210.
Marais, D.A.J., 1979, A method of quantifying companies’ relative financial strength, Bank of
England discussion paper no. 4.
Mason, R.J. and F.C. Harris, 1979, Predicting company failure in the construction industry,
Proceedings of the Institution of Civil Engineers 66, Part 1, May, 301-307.
Mason, R.J. and F.C. Harris, 1980, Discussion of: Predicting company failure in the construction
industry, Proceedings of the Institution of Civil Engineers 68, Part 1, Feb., 153-154.
Miller, R.G., 1974, The jackknife - a review, Biometrika 61, no. 1, 1-15.
Morrison, D.G., 1969, On the interpretation of discriminant analysis, Journal of Marketing
Research 6, no. 2, 156-163.
Mosteller, F. and D.L. Wallace, 1963, Inference in the authorship problem, Journal of the
American Statistical Association 58, no. 302, 275-309.

Moyer, R.C., 1977, Forecasting financial failure: A re-examination, Financial Management,
Spring, 11-17.
Pinches, G.E., 1980, Factors influencing classification results from multiple discriminant analysis,
Journal of Business Research 8, no. 4, 429-456.
Scott, J., 1981, The probability of bankruptcy: A comparison of empirical predictions and
theoretical models, Journal of Banking and Finance 5, 317-334.
Shashua, L. and Y. Goldschmidt, 1974, An index for evaluating financial performance, Journal
of Finance 29, no. 3, 797-814.
Slovic, P. et al., 1977, Behavioral decision theory, Annual Review of Psychology 28, 1-39.
Sudarsanam, P.S., 1981, Inter-industry differences in the accounting numbers of UK quoted
companies: A multivariate analysis, Ph.D. thesis (City University Business School, London).
Taffler, R.J., 1976, Finding those firms in danger, Accountancy Age, 16 July.
Taffler, R.J., 1978, The assessment of financial viability using published accounting information
and a multivariate approach, AUTA Review 10, no. 1.
Taffler, R.J., 1980, Discussion of: Predicting company failure in the construction industry,
Proceedings of the Institution of Civil Engineers 68, Part 1, Feb., 151-153.
Taffler, R.J., 1981, Improving man’s ability to use accounting information: A cognitive synegesis,
Working paper no. 32 (City University Business School, London).
Taffler, R.J., 1982, Forecasting company failure in the UK using discriminant analysis and
financial ratio data, Journal of the Royal Statistical Society, Series A 145, Part 3, 342-358.
Taffler, R.J., 1983a, The Z-score approach to measuring company solvency, The Accountant’s
Magazine 87, no. 921, March, 91-96.
Taffler, R.J., 1983b, The assessment of company solvency and performance using a statistical
model: A comparative UK-based study, Accounting and Business Research 15, no. 52,
Autumn, 295-308.
Taffler, R.J. and H.J. Tisshaw, 1977, Going, going, gone - four factors which predict, Accountancy
88, no. 1003, March, 50-52 and 54.
Tisshaw, H.J., 1976, Evaluation of downside risk using financial ratios, M.Sc. thesis (City
University Business School, London).
Treynor, J.L., 1976, Long-term investing, Financial Analysts Journal 35, no. 3, 56-59.
Walter, J.E., 1957, Determination of technical solvency, Journal of Business 30, no. 1, 30-43.
Watts, J.A., 1981, Financial distress prediction - A critical review, Certified Accountant, Aug.,
217-221.
Westwick, C.A., 1981, Do the figures make sense? A practical guide to analytical review
(Institute of Chartered Accountants in England and Wales).
