ERRORS IN CHEMICAL ANALYSES

Overview: This document discusses sources of error in chemical analyses, including systematic (determinate) errors and random (indeterminate) errors. Systematic errors can be constant or proportional and are detected using standard reference materials, independent analysis, or blank determinations. Random errors arise from many uncontrollable variables and result in a normal distribution of results around the mean. Statistical analysis allows random errors to be characterized using concepts such as population, sample, mean, standard deviation, and standard error.

REVIEW
• Systematic / determinate errors
• Random / indeterminate errors
• Gross errors (outliers)
• Accuracy and absolute error
• Precision and relative error
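As a quick illustration of these review terms, the minimal sketch below computes the absolute and relative error of one result against an accepted value; the numbers and variable names are invented for illustration.

    # Absolute and relative error of a single result (illustrative values)
    measured = 19.78      # experimental result
    true_value = 20.00    # accepted (true) value

    absolute_error = measured - true_value                   # same units as the measurement
    relative_error_pct = absolute_error / true_value * 100   # expressed in percent

    print(f"Absolute error: {absolute_error:+.2f}")
    print(f"Relative error: {relative_error_pct:+.2f}%")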
Types of Systematic Errors
• Instrumental – caused by nonideal instrument behavior, faulty calibrations, or use under inappropriate conditions
• Method – nonideal chemical or physical behavior of analytical systems
• Personal – carelessness, inattention, lack of skill, or personal limitations of the experimenter

Effect of Systematic Errors
• Constant – independent of the size of the sample being analyzed
• Proportional – decreases or increases in proportion to the size of the sample
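A brief numeric sketch of how these two behaviors differ as sample size changes (all values invented): a constant error matters most, in relative terms, for small samples, while a proportional error produces the same relative error at every sample size.

    # Constant vs. proportional systematic error (illustrative values)
    true_fraction = 0.10        # true analyte mass fraction
    constant_loss_mg = 0.5      # fixed loss, e.g. from analyte solubility, in mg
    proportional_bias = 0.02    # a 2% interference that scales with sample size

    for sample_mg in (50, 250, 1000):
        true_analyte_mg = true_fraction * sample_mg
        rel_err_constant = -constant_loss_mg / true_analyte_mg * 100   # shrinks as the sample grows
        rel_err_proportional = proportional_bias * 100                 # unchanged by sample size
        print(f"{sample_mg:5d} mg sample: constant {rel_err_constant:+.1f}%, "
              f"proportional {rel_err_proportional:+.1f}%")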
DETECTION OF SYSTEMATIC ERRORS

Standard reference materials (SRMs)
• Substances sold by the National Institute of Standards and Technology (NIST) and certified to contain specified concentrations of one or more analytes
• Materials that contain one or more analytes at known concentration levels
• The analyte concentrations may be established by synthesis or by one of the following processes:
  (1) analysis with a previously validated reference method,
  (2) analysis by two or more independent, reliable measurement methods, or
  (3) analysis by a network of cooperating laboratories that are technically competent and thoroughly knowledgeable about the material being tested.
Independent Analysis
If standard samples are not available, a second independent and reliable analytical method can be used in parallel with the method being evaluated. The independent method should differ as much as possible from the one under study. This practice minimizes the possibility that some common factor in the sample has the same effect on both methods. Again, a statistical test must be used to determine whether any difference is a result of random errors in the two methods or of bias in the method under study; one such test is sketched below.
Blank Determinations
A blank contains the reagents and solvents used in a determination, but no analyte. Often, many of the sample constituents are added to simulate the analyte environment, which is called the sample matrix (the collection of all the constituents in the sample). In a blank determination, all steps of the analysis are performed on the blank material. The results are then applied as a correction to the sample measurements. Blank determinations reveal errors due to interfering contaminants from the reagents and vessels employed in the analysis. Blanks are also used to correct titration data for the volume of reagent needed to cause an indicator to change color.
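A minimal sketch of applying a blank correction to titration data, assuming the blank consumes a small volume of titrant before the indicator changes; all values are invented.

    # Blank correction of titration volumes (illustrative values)
    blank_volume_mL = 0.06                      # titrant needed to change the indicator with no analyte
    sample_volumes_mL = [23.71, 23.68, 23.75]   # raw titration volumes for replicate samples

    # Subtract the blank from each titration before computing concentrations
    corrected = [round(v - blank_volume_mL, 2) for v in sample_volumes_mL]
    print(corrected)   # [23.65, 23.62, 23.69]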
RANDOM ERRORS
Random errors are caused by the many uncontrollable variables that accompany every measurement.
Consider the calibration of a 10-mL pipet. In this experiment a
small flask and stopper were weighed. Ten milliliters of water
were transferred to the flask with the pipet, and the flask was
stoppered. The flask, the stopper, and the water were then
weighed again. The temperature of the water was also measured
to determine its density. The mass of the water was then
calculated by taking the difference between the two masses. The
mass of water divided by its density is the volume delivered by
the pipet. The experiment was repeated 50 times
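The volume calculation for one replicate can be sketched as follows; the masses are invented, and the density should be taken from a table at the measured water temperature.

    # Volume delivered by the pipet for one replicate (illustrative numbers)
    mass_flask_g = 45.321             # flask + stopper
    mass_flask_water_g = 55.287       # flask + stopper + delivered water
    density_g_per_mL = 0.99780        # density of water at the measured temperature (assumed)

    mass_water_g = mass_flask_water_g - mass_flask_g
    volume_mL = mass_water_g / density_g_per_mL
    print(f"Delivered volume: {volume_mL:.3f} mL")   # about 9.988 mL here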
Note:
• 26% of the results occur in the volume range from 9.981 to 9.983 mL. This is the group containing the mean and median value of 9.982 mL.
• More than half the results are within ±0.004 mL of this mean.
• The results range from a low of 9.969 mL to a high of 9.994 mL, a 0.025-mL spread of data.
As the number of measurements increases, the histogram approaches the shape of the continuous curve.

[Figure 6-3: A histogram (A) showing the distribution of the 50 results in Table 6-3 and a Gaussian curve (B) for data having the same mean and standard deviation as the data in the histogram.]
• The spread in a set of replicate measurements is the difference between the highest and lowest result.
• A histogram is a bar graph showing how frequently the results fall within a series of adjacent value ranges.
• A Gaussian, or normal error, curve shows the symmetrical distribution of data around the mean of an infinite set of data.
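A small sketch of how the frequency distribution behind such a histogram can be built from replicate volumes; the data and the 0.003-mL bin width are invented for illustration.

    # Group replicate results into bins to build a histogram (illustrative data)
    volumes = [9.981, 9.973, 9.988, 9.982, 9.986, 9.979, 9.984, 9.982,
               9.990, 9.975, 9.983, 9.980, 9.977, 9.985, 9.982]

    bin_width = 0.003
    low = min(volumes)
    counts = {}
    for v in volumes:
        index = int((v - low) / bin_width)        # which bin this result falls in
        counts[index] = counts.get(index, 0) + 1

    for index in sorted(counts):
        start = low + index * bin_width
        print(f"{start:.3f}-{start + bin_width:.3f} mL: {'#' * counts[index]}")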
STATISTICAL METHODS FOR RANDOM ERRORS

Statistical analysis only reveals information that is already present in a data set; no new information is created by statistical treatments. Statistical methods do, however, allow us to categorize and characterize data in different ways and to make objective and intelligent decisions about data quality and interpretation.
• A population is the collection of all measurements of interest to the experimenter, while a sample is a subset of measurements selected from the population.
• A statistic is an estimate of a parameter that is made from a sample of data.
• A parameter refers to quantities such as μ and σ that define a population or distribution.
• The sample mean x̄ is the arithmetic average of a limited sample drawn from a population of data.
• The population mean μ, in contrast, is the true mean for the population.
• In the absence of systematic error, the population mean is also the true value for the measured quantity.
• When N is small, x̄ differs from μ because a small sample of data may not exactly represent its population.
• In most cases we do not know μ and must infer its value from x̄.
• The probable difference between x̄ and μ decreases rapidly as the number of measurements making up the sample increases; usually, by the time N reaches 20 to 30, this difference is negligible.
• Note that the sample mean x̄ is a statistic that estimates the population parameter μ.
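A quick simulation sketch of this convergence: samples of increasing size are drawn from a normal population with known μ, and the sample mean x̄ drifts toward μ as N grows (seed, μ, and σ are arbitrary choices).

    import random
    import statistics

    random.seed(1)
    mu, sigma = 9.982, 0.006   # assumed population mean and standard deviation

    for n in (3, 10, 30, 100, 1000):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        x_bar = statistics.mean(sample)
        print(f"N = {n:4d}: x_bar = {x_bar:.4f}, |x_bar - mu| = {abs(x_bar - mu):.4f}")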
A normal error curve has several general properties:
(a) the mean occurs at the central point of maximum frequency,
(b) there is a symmetrical distribution of positive and negative deviations about the maximum, and
(c) there is an exponential decrease in frequency as the magnitude of the deviations increases. Thus, small uncertainties are observed much more often than very large ones.
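For reference, the standard equation of the Gaussian (normal error) curve, in terms of the population mean μ and standard deviation σ, is

    y = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-(x-\mu)^2 / 2\sigma^2}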
The number of degrees of freedom indicates the number of independent results that enter into the computation of the standard deviation.

When μ is unknown, two quantities must be extracted from a set of replicate data: x̄ and s. One degree of freedom is used to establish x̄ because, with their signs retained, the sum of the individual deviations must be zero. Thus, when N − 1 deviations have been computed, the final one is known. Consequently, only N − 1 deviations provide an independent measure of the precision of the set. Failure to use N − 1 in calculating the standard deviation for small samples results in values of s that are on average smaller than the true standard deviation σ.
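The sketch below contrasts the two denominators on a small set of invented results; Python's statistics.stdev already divides by N − 1, while statistics.pstdev divides by N.

    import statistics

    data = [9.988, 9.973, 9.986, 9.980, 9.975]

    s = statistics.stdev(data)    # sample standard deviation, divides by N - 1
    p = statistics.pstdev(data)   # population form, divides by N; smaller on average for small samples

    print(f"stdev  (N - 1): {s:.4f}")
    print(f"pstdev (N)    : {p:.4f}")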
If a series of replicate results, each containing N measurements, is taken randomly from a population of results, the mean of each set will show less and less scatter as N increases. The standard deviation of each mean is known as the standard error of the mean and is given the symbol s_m, where s_m = s/√N.
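A one-line check of that relationship on invented data:

    import math
    import statistics

    data = [9.988, 9.973, 9.986, 9.980, 9.975, 9.982]
    s = statistics.stdev(data)            # sample standard deviation
    s_m = s / math.sqrt(len(data))        # standard error of the mean, s / sqrt(N)
    print(f"s = {s:.4f}, s_m = {s_m:.4f}")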
The pooled estimate of the standard deviation, which we call s_pooled, is a weighted average of the individual estimates.

To calculate s_pooled, deviations from the mean for each subset are squared; the squares of the deviations of all subsets are then summed and divided by the appropriate number of degrees of freedom. The pooled s is obtained by taking the square root of the resulting number. One degree of freedom is lost for each subset. Thus, the number of degrees of freedom for the pooled s is equal to the total number of measurements minus the number of subsets.
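A sketch of that recipe on three invented subsets of replicate results:

    import math
    import statistics

    subsets = [
        [9.990, 9.984, 9.986],
        [9.975, 9.980, 9.978, 9.982],
        [9.983, 9.988, 9.981],
    ]

    # Sum of squared deviations of each result from its own subset mean
    total_ss = sum(sum((x - statistics.mean(sub)) ** 2 for x in sub) for sub in subsets)

    # Degrees of freedom: total number of measurements minus the number of subsets
    dof = sum(len(sub) for sub in subsets) - len(subsets)

    s_pooled = math.sqrt(total_ss / dof)
    print(f"s_pooled = {s_pooled:.4f}")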
