relationship (σ = mµ + b). If RMS absolute differences were employed in the regression, these regression parameters (m and b) may be converted to those that would be obtained had the RMS standard deviations been used in the regression by multiplication of the slope and intercept by 1/√2 (= 0.70711). Alternatively, if median standard deviations were used, the regression parameters may be converted to those that would be obtained had the RMS standard deviations been used in the regression by multiplication of the slope and intercept by 1/0.67449 = 1.48260 (Thompson & Howarth 1976, 1978; Fletcher 1981). Lastly, if median absolute differences were used, both correction factors need to be employed (i.e. the slope and intercept must be multiplied by 1/(1.41421 × 0.67449) = 1.04836; Stanley 2003a, 2003b, 2006b) to obtain what would have been obtained had the RMS standard deviations been used in the regression. Thus, the expected (RMS) measurement error for samples at any concentration within the range of duplicate concentrations may be estimated using the functional relationship determined by this simple regression (Thompson & Howarth 1976, 1978; Fletcher 1981).
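These conversion factors reduce to a small lookup in code. A minimal sketch in Python (the dictionary and function names are ours, not from any published implementation):

```python
import math

# Multipliers that rescale regression parameters (slope m, intercept b) fitted to
# other duplicate error statistics into RMS-standard-deviation equivalents
# (Thompson & Howarth 1976, 1978; Stanley 2003a, 2003b, 2006b).
TO_RMS_SD = {
    "rms_absolute_difference": 1 / math.sqrt(2),                 # 0.70711
    "median_standard_deviation": 1 / 0.67449,                    # 1.48260
    "median_absolute_difference": 1 / (math.sqrt(2) * 0.67449),  # 1.04836
}

def to_rms_parameters(m, b, statistic):
    """Rescale a fitted slope and intercept to RMS standard deviation units."""
    k = TO_RMS_SD[statistic]
    return m * k, b * k

# Example: a regression on median absolute differences returned m = 0.05, b = 2 ppm.
m_rms, b_rms = to_rms_parameters(0.05, 2.0, "median_absolute_difference")
```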
Small-sample Thompson–Howarth error analysis method

If only a small number of duplicate measurements are available (e.g. n < 50), an alternative method is employed, in which geoscientists postulate a linear measurement–error model by arbitrarily choosing a slope and intercept that they think applies to their data, or which represents a maximum acceptable error for their data, and then test whether this function is consistent with the data (Thompson & Howarth 1976, 1978; Fletcher 1981). This is achieved by plotting on a Thompson–Howarth scatterplot the two lines corresponding to the 90th and 99th percentile bounds of the half-normal distribution about the postulated model, a procedure achieved by multiplying the postulated slope and intercept by 2.32617 and 3.64277, respectively, to obtain the corresponding slopes and intercepts for these critical lines (Fig. 2; note that any number of percentile confidence bound lines can be used, provided the appropriate multiplier is known for the confidence bounds employed). Then, the number of duplicate absolute differences that plot above each of these lines is determined. Using binomial statistics, the probability that this number of duplicate absolute differences (or standard deviations) plots above each percentile critical value line is calculated. If these probabilities are relatively high, then the postulated one-standard-deviation linear measurement–error model can be considered consistent with the data. Thompson & Howarth (1976, 1978; Fletcher 1981) assembled binomial statistics tables for the 90th and 99th percentiles for up to 50 observations to allow these calculations, although Stanley (2003a, 2003b) provided a MATLAB program capable of outputting a corresponding binomial probability table for a wide range of percentiles and n, along with the associated slope and intercept multipliers.
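Both the critical-line multipliers and the binomial probabilities used in the worked example that follows can be reproduced with the Python standard library. A minimal sketch (helper names are ours) under the half-normal assumption stated above:

```python
from math import comb, sqrt
from statistics import NormalDist

def critical_line_multiplier(p):
    """Multiplier applied to the postulated slope and intercept to obtain the
    p-th percentile critical line: |x1 - x2| for duplicates with error sigma
    is half-normal with scale sigma*sqrt(2), so the multiplier is
    sqrt(2) * Phi^-1((1 + p) / 2)."""
    return sqrt(2) * NormalDist().inv_cdf((1 + p) / 2)

def prob_k_or_more(k, n, p):
    """Binomial probability that k or more of n duplicates plot above a
    critical line whose exceedance probability is p (= 1 - percentile)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(critical_line_multiplier(0.90))  # 2.32617...
print(critical_line_multiplier(0.99))  # 3.64277...
print(prob_k_or_more(2, 15, 0.10))     # 0.450957... (two of 15 above the 90th line)
print((1 - 0.01) ** 15)                # 0.860058... (none of 15 above the 99th line)
```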
Figure 2 presents an example of this small-sample procedure on a Thompson–Howarth scatterplot. Based on the 15 duplicate pairs under consideration, a measurement–error relationship is postulated (in this case, σ = mµ + b; solid line). No duplicates (out of 15) plot above the 99th percentile line, and only two duplicates plot above the 90th percentile line. The probabilities of these happening at random are 86.0058% and 45.0957%, respectively (Thompson & Howarth 1976, 1978; Fletcher 1981). These probabilities thus allow determination of whether it is likely that these duplicate data were derived from the postulated measurement–error relationship defined by the model slope and intercept. In this case, the probabilities are relatively high, and this measurement–error model is not inconsistent with the data (note that the double negative here is appropriate because we fail to reject the hypothesis that this model fits the data). Thus, at a first approximation, this model estimates the unknown measurement–error relationship in these duplicate data.
Fletcher 1981). These probabilities thus allow determination of
whether it is likely that these duplicate data were derived from
the postulated measurement–error relationship defined by the Effective detection limit
model slope and intercept. In this case, the probabilities are Once a measurement–error relationship has been determined,
relatively high, and this measurement–error model is not both the slope and intercept of the regression line can be used
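A minimal sketch of this constrained re-fitting logic (function name ours; plain unweighted OLS assumed):

```python
def fit_th_error_model(means, stats):
    """OLS of error statistics (absolute differences or standard deviations) on
    duplicate means, re-fit under Thompson-Howarth non-negativity constraints."""
    n = len(means)
    mx, my = sum(means) / n, sum(stats) / n
    sxx = sum((x - mx) ** 2 for x in means)
    sxy = sum((x - mx) * (y - my) for x, y in zip(means, stats))
    m = sxy / sxx
    b = my - m * mx
    if b < 0:
        # drop the absolute error term: re-fit forced through the origin
        m = sum(x * y for x, y in zip(means, stats)) / sum(x * x for x in means)
        b = 0.0
    elif m < 0:
        # drop the relative error term: horizontal line at the mean statistic
        m, b = 0.0, my
    return m, b
```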
Note that because the purpose of Thompson and Howarth's methods is to determine the level of measurement error in geochemical analysis, both the chemical concentrations and their errors (standard deviations) must be positive. Thus, regressions resulting in a negative slope and intercept are not realistic. Nevertheless, many other forms of natural science data can be evaluated using the Thompson and Howarth procedures, provided the data are positive and a linear (with positive slope and intercept) measurement–error model is applicable.

Effective detection limit

Once a measurement–error relationship has been determined, both the slope and intercept of the regression line can be used to determine the 'effective detection limit' of the duplicate data. The 'effective detection limit' (de) is defined as the magnitude of a measurement at which the precision (twice the total error divided by the measurement) equals 100% (p = 100% = 2σ/µ; or σ = µ/2; Thompson & Howarth 1973, 1976, 1978; Howarth & Thompson 1976). The Analytical Methods Committee (1987) suggests a three-standard-deviation detection limit, where σ = µ/3 (Fig. 3). A discussion of the merits of this definition of the effective detection limit is presented in the Appendix.
The expected (average) measurement error (σ) predicted by the error model originally defined by Thompson & Howarth (1973) can be determined at any concentration (µ) by the following formula:

σ = mµ + b    (1)

The 'effective detection limit' (de) for these concentrations, based on their observed level of precision, is thus:

de = 2b / (1 − 2m)    (2)

where m is expressed in slope units (i.e. not as a percentage). For example, if the relative error term for an error model derived from a set of duplicates is 25%, and the absolute error term is 1 ppm, then the effective detection limit is 4 ppm [de = 2b/(1 − 2m) = (2 × 1)/(1 − 2 × 0.25)].
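Equation 2, together with the three-standard-deviation variant discussed in the Appendix, can be wrapped in a few lines. A minimal sketch (function name and error handling are ours):

```python
def effective_detection_limit(m, b, k=2):
    """d_e = k*b / (1 - k*m): k = 2 gives Equation 2; k = 3 gives the
    three-standard-deviation limit of the Analytical Methods Committee (1987).
    m is in slope units (not a percentage); m and b must be non-negative."""
    if m < 0 or b < 0:
        raise ValueError("Thompson-Howarth models require m >= 0 and b >= 0")
    if k * m >= 1:
        raise ValueError("error model never falls below the precision line; "
                         "no finite detection limit exists")
    return k * b / (1 - k * m)

print(effective_detection_limit(0.25, 1.0))       # 4.0 ppm (Fig. 3)
print(effective_detection_limit(0.25, 1.0, k=3))  # 12.0 ppm (Fig. 3)
```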
This effective detection limit is equivalent to the abscissa coordinate of the intersection point between the model line describing measurement error as a function of concentration and a line through the origin with a slope of ½ (σ = ½µ; the line where precision equals 100%; Fig. 3) on a Thompson–Howarth scatterplot. Clearly, because the Thompson–Howarth error analysis approach requires that the measurements be always positive, a realistic (non-negative) effective (lower) detection limit cannot be calculated unless the slope and intercept are constrained to be non-negative, as dictated by Thompson and Howarth's model (Thompson & Howarth 1973, 1976, 1978; Howarth & Thompson 1976). Note also that an effective detection limit cannot be calculated when the relative error term is greater than 50%, because the resulting Thompson and Howarth measurement–error model is everywhere greater than the precision line (σ = µ/2) within the positive quadrant.

Fig. 3. Thompson–Howarth scatterplot graphically illustrating the relationship between an error model (σ = 0.25µ + 1; 25% relative error plus 1 ppm absolute error), two precision lines (σ = µ/2 and σ = µ/3), and the effective detection limits defined by the corresponding intersections (equal to 4 = 2b/(1 − 2m) = 2/(1 − ½) and 12 = 3b/(1 − 3m) = 3/(1 − ¾)).

ALTERNATIVE LARGE-SAMPLE THOMPSON–HOWARTH ERROR ANALYSIS

A fundamental mathematical property that must be exhibited by data before many forms of numerical manipulation can be undertaken on those data is 'additivity'. This property of a variable essentially dictates that linear operations (i.e. addition, subtraction, multiplication or division) on the variable have physical meaning (Stanley 2006a).

Thompson and Howarth's large-sample error analysis approach involves the regression of duplicate standard deviations (or their proxies) on duplicate means. However, variances, not standard deviations, are additive (Shaw 1961; Zitter & God 1971; Pitard 1993). Thus, Thompson and Howarth's large-sample error analysis approach appears to be inconsistent with the concept of additivity, involving numerical operations on variables that are not additive (i.e. the standard deviations). If inconsistent with additivity, Thompson and Howarth's large-sample error analysis approach would be invalid, providing biased estimates of measurement error from duplicate data.

However, by considering exactly what Thompson and Howarth's large-sample error analysis approach involves, one can demonstrate that the approach is valid and consistent with the concept of additivity. Specifically, Thompson and Howarth's large-sample error analysis involves a linear OLS regression of median group duplicate standard deviations (or their proxies) on average group duplicate means to describe measurement error as a function of concentration. The regression involves median group standard deviations, and these are merely the square roots of the median group variances. However, these median group variances are functionally related to (proportional to) the mean group variances by a multiplicative factor derived from the distributional form of a normal distribution (requiring the original assumption by Thompson and Howarth that the errors be normally distributed; Thompson & Howarth 1973, 1976, 1978; Howarth & Thompson 1976). This multiplicative factor, 2.1981 = 1.4826², is the square of the factor used by Thompson and Howarth to convert the median standard deviations into mean standard deviations (Thompson & Howarth 1973, 1976, 1978; Howarth & Thompson 1976). Thus, Thompson and Howarth's regression uses a median standard deviation derived from a median variance that is proportional to the mean variance. Consequently, although Thompson and Howarth's use of the median standard deviation (or its proxy) was originally intended to produce numerically stable results, and was ironically not employed in an effort to be non-parametric (because normality is assumed), additivity is also coincidentally achieved by this procedure because the median standard deviations are derived from the additive mean variances. As a result, OLS regression of the median group duplicate standard deviations on the average group duplicate means produces an unbiased estimate of the measurement–error relationship, provided the errors are normally distributed.

Stanley (2006b) presented an alternative Thompson–Howarth large-sample error analysis approach that does not involve an assumption of normally distributed errors. In this modified technique, the RMS group standard deviations:

σ̄ = √( Σᵢ₌₁ⁿ σᵢ² / n )

instead of the median group standard deviations (or their proxies), are regressed against the average group means. This avoids use of the multiplicative factor required to convert the median group standard deviations into RMS standard deviations.
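A minimal sketch of this modified procedure (function name ours; the group size of 11 follows the case studies discussed later, and discarding a short final group is our simplification):

```python
import math

def stanley_rms_regression(pairs, group_size=11):
    """Stanley's (2006b) modification: regress RMS group standard deviations on
    average group means. Each duplicate pair (x1, x2) has mean (x1 + x2)/2 and
    variance (x1 - x2)**2 / 2 (the n = 2 sample variance)."""
    pairs = sorted(pairs, key=lambda p: p[0] + p[1])
    xs, ys = [], []
    for i in range(0, len(pairs) - group_size + 1, group_size):
        group = pairs[i:i + group_size]
        xs.append(sum((a + b) / 2 for a, b in group) / group_size)
        ys.append(math.sqrt(sum((a - b) ** 2 / 2 for a, b in group) / group_size))
    # simple OLS of the RMS standard deviations on the group means
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx  # slope (relative error) and intercept (absolute error)
```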
Fig. 5. Thompson–Howarth error analysis results for the two synthetically derived data-sets presented in Figure 4 (top and bottom, respectively). Results from OLS regressions of the individual duplicate standard deviations on the duplicate means do not approximate the true, underlying error introduced to these data-sets and significantly underestimate the parameters of the lines.

Fig. 6. Thompson–Howarth error analysis results for the two synthetically derived data-sets presented in Figure 4 (top and bottom, respectively). Individual duplicate variances have been regressed against duplicate means using one- and two-parameter perfect square quadratic models, respectively.
The second quadratic regression model is associated with a Thompson–Howarth error described by both relative and absolute error terms. It has the form:

σ² = (mµ + b)² = m²µ² + 2mbµ + b²    (5)

(or σ = mµ + b in mean versus standard deviation space). When the square root of this two-parameter (m, b) quadratic regression model is calculated, the result is a straight line (σ = mµ + b) with a non-negative slope (m) and non-negative intercept (b) on a Thompson–Howarth scatterplot. Both of these first two quadratic regression models are constrained to be perfect squares, and so when the square roots of these quadratic equations are taken, these models result in straight lines on scatterplots of replicate mean versus replicate standard deviation.

The third quadratic regression model is not associated with any specific Thompson–Howarth error model. It has the form:

σ² = b₂²µ² + b₁µ + b₀    (6)

(or σ = √(b₂²µ² + b₁µ + b₀) in mean versus standard deviation space). This three-parameter (b₂, b₁, b₀) quadratic model is not constrained to be a perfect square, and so there is no guarantee that the square root of this quadratic model (σ = √(b₂²µ² + b₁µ + b₀)) will be a straight line. As a result, when plotted on a scatterplot of replicate mean versus replicate standard deviation, this model most likely occurs as a curved line (concave downward) with non-negative intercept (because to reside in the positive quadrant, b₀, b₁ and b₂ must be non-negative).

Note that Thompson & Howarth (1976) acknowledged the possibility of using a quadratic or higher polynomial model to describe measurement error as a function of concentration. Unfortunately, the model they referred to described error in terms of duplicate standard deviations, and so was inconsistent with additivity and would have produced biased results. In contrast, Thompson (1988) introduced a quadratic model that regressed variance against mean, acknowledging the advantage this model has in terms of additivity. Unfortunately, this model:

σ² = m²µ² + b²

(Equation 13 in Thompson 1988), when plotted in mean versus standard deviation space, describes a non-linear measurement–error relationship. Furthermore, this model is not general, because it is merely equivalent to the quadratic model of Equation 6, but without its second parameter.

Consequently, the three general quadratic regression models (Equations 4, 5 and 6), derived using non-linear OLS regression and calculated in Excel using the 'SOLVER' utility, are presented for the two synthetic data-sets in Figure 6 (Equation 4 for the first data-set, Equation 5 for the second). The corresponding square roots of the three regression models applied to the two synthetic data-sets are plotted on Thompson–Howarth scatterplots in Figure 7. In both cases, the regression models produce slopes and intercepts closely approximating the 5% and 5% plus 2 ppm error magnitudes introduced into these synthetic data-sets.
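The paper computes these fits by non-linear OLS using Excel's SOLVER. For the three-parameter model of Equation 6 the problem is in fact linear in the coefficients (c₂ = b₂², b₁, b₀), so ordinary normal equations suffice; a minimal sketch (names ours):

```python
def fit_general_quadratic(means, variances):
    """Linear OLS fit of var = c2*mean**2 + c1*mean + c0 (Equation 6 with
    c2 = b2**2) to individual duplicate variances; returns (c2, c1, c0)."""
    rows = [(u * u, u, 1.0) for u in means]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * v for r, v in zip(rows, variances)) for i in range(3)]
    # solve the 3x3 normal equations by Gaussian elimination with pivoting
    aug = [xtx[i][:] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, 3):
            f = aug[r][col] / aug[col][col]
            for c in range(col, 4):
                aug[r][c] -= f * aug[col][c]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (aug[r][3]
                   - sum(aug[r][c] * coef[c] for c in range(r + 1, 3))) / aug[r][r]
    return tuple(coef)
```

The perfect-square models of Equations 4 and 5 remain genuinely non-linear in (m, b) and require an iterative minimizer, as SOLVER provides.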
Table 1. Comparison of measurement–error models derived from several Thompson–Howarth error analysis-type methods.

Regression dependent variable (scatterplot ordinate)   Data-set no. 1 slope (5% error)   Data-set no. 2 slope (5% error)   Data-set no. 2 intercept (2 ppm error)
Group median standard deviation                        0.0507192                         0.0476848                         2.2166554
Group RMS standard deviation                           0.0493528                         0.0449496                         2.0998716
Individual standard deviation                          0.0401696                         0.0369281                         1.6933478
Individual variance                                    0.0504482                         0.0454358                         2.1852808
Fig. 9. Thompson–Howarth error analysis results from 3488 Cu assays of duplicate drillcore samples from an anonymous porphyry Cu–Mo–Au deposit. In (A), the OLS relative measurement error estimates derived using Thompson and Howarth's original approach (black circles; dotted line; y = 0.072517x), using Stanley's modified approach (grey triangles; dashed line; y = 0.089676x), and using a quadratic regression model for the individual duplicate statistics (open circles; solid line; y = 0.059864x) are presented. In (B), the distribution of relative errors is compared with what would be expected from normally distributed errors.

Fig. 10. Thompson–Howarth error analysis results from 5500 Pb assays of duplicate drillcore samples from an anonymous Broken-Hill-type Pb–Zn–Ag deposit. In (A), the OLS relative measurement error estimates derived using Thompson and Howarth's original approach (black circles; dotted line; y = 0.028964x), using Stanley's modified approach (grey triangles; dashed line; y = 0.032118x), and using a quadratic regression model for the individual duplicate statistics (open circles; solid line; y = 0.031055x) are presented. In (B), the distribution of relative errors is compared with what would be expected from normally distributed errors.
10%, 20%, 30%, 40% and 50% relative error levels (the thin line curves). On Figures 9B, 10B, 11B and 12B, the thick lines describe the corresponding frequency curves in the real duplicate data. These thick line curves are clearly fundamentally different from what would be expected from normal error (half-normal distributions), as all have much larger positive tails than expected. As a result, none of these data-sets appear to exhibit even near-normally distributed errors. Thus, the disparities in the regression results of Figures 9A, 10A, 11A and 12A are likely at least partially functions of the fact that these duplicate data do not have errors that are normally distributed (Stanley & Smee 2007; Stanley & Lawie 2007a).

However, a cursory examination of the distributions of the duplicate means in Figures 9 to 12 reveals that each of these data-sets exhibits very positively skewed concentration distributions. This causes the groups of 11 with lower concentrations to span small ranges in concentration, enabling the median and RMS standard deviations (or their proxies) to accurately estimate error at these low concentrations. However, it also causes the groups of 11 with higher concentrations to span large ranges in concentration. This prevents the median and RMS standard deviations (or their proxies) from accurately estimating error at these concentrations because the duplicate data used to make these estimates span a large range in concentration (Fig. 1). As a result, the quality with which the group statistics estimate error may vary across the range of concentrations observed if the duplicate concentration data exhibit something other than a uniform distribution; this further undermines the ability of the two grouped-data Thompson–Howarth error analysis techniques to produce accurate estimates of error (note that the two synthetic duplicate data-sets presented above have uniform data distributions, so the results from all three Thompson–Howarth error analysis approaches are highly comparable; Table 1).

Although weighted OLS regression could be used to address this data grouping problem (using weights proportional to the square of the range of mean concentrations in each group; Thompson 1988; Stanley 2003a, 2003b), a simpler approach is available, and is described above. Because OLS regression of the individual replicate variances against the individual replicate means does not involve grouping of the data, it does not suffer from the above positive skewness problem. It also is consistent with the concept of additivity, and thus provides unbiased estimates of measurement error. As a result, OLS regression of individual replicate variances against the individual replicate means represents the preferred approach for simply, accurately and reliably estimating measurement error.
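For the relative-error-only models plotted in Figures 9 to 12, this preferred approach even has a closed form: fitting σ² = (mµ)² to the individual duplicate variances is linear in c = m². A minimal sketch (function name ours; a single relative error term assumed):

```python
import math

def relative_error_from_duplicates(pairs):
    """OLS fit of individual duplicate variances on duplicate means under the
    one-parameter perfect-square model var = (m * mean)**2. Minimizing
    sum((c*u**2 - v)**2) over c = m**2 gives c = sum(u**2 * v) / sum(u**4)."""
    num = den = 0.0
    for x1, x2 in pairs:
        u = (x1 + x2) / 2           # duplicate mean
        v = (x1 - x2) ** 2 / 2      # duplicate variance (n = 2)
        num += u * u * v
        den += u ** 4
    return math.sqrt(num / den)     # m: one-sigma relative error, as a fraction
```

A return value of 0.06, for example, corresponds to the 6% relative error line y = 0.06x on a Thompson–Howarth plot of the individual statistics.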
The above arguments involving the assumption of normally distributed errors and the variations in accuracy of group statistics should give pause to geoscientists who, using the original implementation of the Thompson–Howarth error analysis method, believe that their estimate of the magnitude of measurement error is accurate, and that their geochemical data
Fig. 11. Thompson–Howarth error analysis results from 2156 Ni assays of duplicate drillcore samples from an anonymous disseminated sulphide Ni deposit. In (A), the OLS relative measurement error estimates derived using Thompson and Howarth's original approach (black circles; dotted line; y = 0.048989x), using Stanley's modified approach (grey triangles; dashed line; y = 0.063934x), and using a quadratic regression model for the individual duplicate statistics (open circles; solid line; y = 0.072909x) are presented. In (B), the distribution of relative errors is compared with what would be expected from normally distributed errors.

Fig. 12. Thompson–Howarth error analysis results from 928 Au assays of duplicate drillcore samples from an anonymous high-sulphidation epithermal Au–Ag deposit. In (A), the OLS relative measurement error estimates derived using Thompson and Howarth's original approach (black circles; dotted line; y = 0.099096x), using Stanley's modified approach (grey triangles; dashed line; y = 0.350179x), and using a quadratic regression model for the individual duplicate statistics (open circles; solid line; y = 0.360222x) are presented. In (B), the distribution of relative errors is compared with what would be expected from normally distributed errors.
APPENDIX

In contrast to the two-standard-deviation effective detection limit described in this paper, an alternative, three-standard-deviation effective detection limit has been proposed (Analytical Methods Committee 1987). This detection limit can be calculated by substituting '3b' for '2b' in the numerator and '1 − 3m' for '1 − 2m' in the denominator of Equation 2 (Fig. 3). This alternative definition results in a larger effective detection limit, and although originally recommended in 1978 by the International Union of Pure and Applied Chemistry (IUPAC), this detection limit definition has been 'slow to be adopted' amongst applied geochemists (R. Howarth, pers. comm.).

This may be because applied geochemists operate under different constraining parameters from analytical chemists. For example, until the recent advent of inductively coupled plasma mass spectrometry, many elements analysed in geochemical surveys exhibited concentrations close to the conventional two-standard-deviation effective detection limit. Use of a higher detection limit (such as the three-standard-deviation effective detection limit) would thus define a large proportion of samples as undetected, and could prevent the extraction of valuable information inherent in low concentration data. Furthermore, in geochemical exploration, geochemists seek element concentration patterns to interpret, and thus rigorous precision in individual samples is not required because the concentrations of many samples are interpreted collectively. As a result, lower effective detection limits, corresponding to 95% (not 99%) confidence levels, are likely to be acceptable in exploration. In mining applications, the issue of how to define the detection limit is essentially irrelevant, because only the precision of concentrations close to the cut-off grade is of prime importance. Cut-off grades are usually much greater than the detection limit, regardless of how the detection limit is defined, and so these grades typically exhibit sufficient precision.

In short, whereas three-standard-deviation effective detection limits may be favoured by IUPAC (analytical chemists), the choice of '3' is arbitrary, and there is no reason why '2' cannot be used. Because geochemistry does not have the same analytical precision requirements as chemistry, lower detection limits that allow low concentrations to provide at least some information (even though there may be much noise in these data) may be favoured. Perhaps this is why the IUPAC definition of the effective detection limit has not been whole-heartedly embraced by the applied geochemistry community.

REFERENCES

ANALYTICAL METHODS COMMITTEE 1987. Recommendations for the definition, estimation and use of the detection limit. The Analyst, 112, 199–204.
BETTANEY, L. & STANLEY, C.R. 2001. Geochemical data quality: the "fit-for-purpose" approach. Explore, Newsletter of the Association of Exploration Geochemists, 111, 21–22.
FISHER, N.I., LEWIS, T. & EMBLETON, B.J.J. 1987. Statistical Analysis of Spherical Data. Cambridge University Press, New York.
FLETCHER, W.K. 1981. Analytical Methods in Geochemical Prospecting. Handbook of Exploration Geochemistry, 1. Elsevier Scientific Publishing, Amsterdam.
FRANCOIS-BONGARCON, D. 1998. Error variance information from paired data: applications to sampling theory. Exploration and Mining Geology, 7 (1–2), 161–165.
GARRETT, R.G. & GRUNSKY, E.C. 2003. S and R functions for the display of Thompson–Howarth plots. Computers and Geosciences, 29 (2), 239–242.
HOWARTH, R.J. & THOMPSON, M. 1976. Duplicate analysis in practice – Part 2. Examination of proposed methods and examples of its use. The Analyst, 101, 699–709.
PITARD, F.F. 1993. Pierre Gy's Sampling Theory and Sampling Practice. 2nd edn. CRC Press, Boca Raton.
SHAW, D.M. 1961. Manipulative errors in geochemistry. Transactions of the Royal Society of Canada, 60, 41–55.
STANLEY, C.R. 2003a. THPLOT.M: a MATLAB function to implement generalized Thompson–Howarth error analysis using replicate data. Computers and Geosciences, 29 (2), 225–237.
STANLEY, C.R. 2003b. Corrigenda to "THPLOT.M: a MATLAB function to implement generalized Thompson–Howarth error analysis using replicate data" [Computers and Geosciences, 29 (2), 225–237]. Computers and Geosciences, 29 (8), 1069.
STANLEY, C.R. 2006a. Numerical transformation of geochemical data, II: stabilizing measurement error to facilitate data interpretation. Geochemistry: Exploration, Environment, Analysis, 6, 79–96.
STANLEY, C.R. 2006b. On the special application of Thompson–Howarth error analysis to geochemical variables exhibiting a nugget effect. Geochemistry: Exploration, Environment, Analysis, 6, 357–368.
STANLEY, C.R. & LAWIE, D. 2007a. Thompson–Howarth error analysis: 2. Adaptations to the small sample method for assessing measurement error in geochemical samples. Geochemistry: Exploration, Environment, Analysis (in preparation).
STANLEY, C.R. & LAWIE, D. 2007b. Average relative error in geochemical determinations: clarification, calculation and a plea for consistency. Exploration and Mining Geology (in press).
STANLEY, C.R. & SINCLAIR, A.J. 1986. Relative error analysis of replicate geochemical data: advantages and applications. In: Programs and Abstracts, GeoExpo 1986, 77–78.
STANLEY, C.R. & SMEE, B.W. 2007. Strategies for reducing sampling errors in exploration and resource definition drilling programs for gold deposits. Geochemistry: Exploration, Environment, Analysis (in press).
THOMPSON, M. 1973. DUPAN 3, a subroutine for the interpretation of duplicated data in geochemical analysis. Computers and Geosciences, 4, 333–340.
THOMPSON, M. 1982. Regression methods and the comparison of accuracy. The Analyst, 107, 1169–1180.
THOMPSON, M. 1988. Variation of precision with concentration in an analytical system. The Analyst, 113, 1579–1587.
THOMPSON, M. & HOWARTH, R.J. 1973. The rapid estimation and control of precision by duplicate determinations. The Analyst, 98, 153–160.
THOMPSON, M. & HOWARTH, R.J. 1976. Duplicate analysis in practice – Part 1. Theoretical approach and estimation of analytical reproducibility. The Analyst, 101, 690–698.
THOMPSON, M. & HOWARTH, R.J. 1978. A new approach to the estimation of analytical precision. Journal of Geochemical Exploration, 9, 23–30.
ZITTER, H. & GOD, C. 1971. Ermittlung, Auswertung und Ursachen von Fehlern bei Betriebsanalysen [Determination, evaluation and causes of errors in process analyses]. Fresenius' Journal of Analytical Chemistry, 255 (1), 1–9.