
Department of Electronic Engineering

ELE2EMI
Electronic Measurements & Instrumentation

1 Units and Accuracy


In all experimentation, analysis and reporting, it is essential to perform the work reliably, use appro-
priate units of measurement, and record the values accurately.
How much credence can be given to a set of results also depends on the correct assessment of the
contributions of various sources of errors. These must be understood, and their deleterious effect on
precision must be included in the results, either by explicit error estimates, confidence intervals, or by
deciding on a sensible number of significant figures.
This lecture is based on material from chapters 1 and 2 of Carr.

1.1 Units
1.1.1 Metric Prefixes

Memorise these!
Metric Prefix   Name          Power of Ten
y               yocto         -24
z               zepto         -21
a               atto          -18
f               femto         -15
p               pico          -12
n               nano           -9
µ               micro          -6
m               milli          -3
c               centi          -2
d               deci           -1
(none)          (no prefix)     0
da              deca            1
h               hecto           2
k               kilo            3
M               mega            6
G               giga            9
T               tera           12
P               peta           15
E               exa            18
Z               zetta          21
Y               yotta          24
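As an aside (not from Carr), the table is easily captured in code; the following Python sketch, with hypothetical names, converts a prefixed value into base units ("u" stands in for the micro prefix µ):

PREFIXES = {
    "y": -24, "z": -21, "a": -18, "f": -15, "p": -12, "n": -9,
    "u": -6, "m": -3, "c": -2, "d": -1, "": 0, "da": 1, "h": 2,
    "k": 3, "M": 6, "G": 9, "T": 12, "P": 15, "E": 18, "Z": 21, "Y": 24,
}

def to_base_units(value, prefix):
    """Convert a prefixed value into base units, e.g. (4.7, "k") -> 4700."""
    return value * 10 ** PREFIXES[prefix]

print(to_base_units(4.7, "k"))   # 4700.0  (4.7 kOhm is 4700 Ohm)
print(to_base_units(22, "p"))    # 2.2e-11 (22 pF is 2.2e-11 F)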
1.1.2 Units of Measurement

The next table lists some of the units of measurement useful to Electronic Engineers. Memorise
these also!
Quantity               Unit                   Symbol   Value
Frequency              hertz                  Hz       s⁻¹
Velocity               metre/second                    m.s⁻¹
Acceleration           metre/second-squared            m.s⁻²
Force                  newton                 N        kg.m.s⁻²
Energy                 joule                  J        N.m
Power                  watt                   W        J.s⁻¹
Electric Current       ampere                 A
Electric Charge        coulomb                C        A.s
Voltage                volt                   V        J.C⁻¹
Electric Field         volt/metre                      V.m⁻¹
Resistance             ohm                    Ω        V.A⁻¹
Resistivity            ohm.metre                       Ω.m
Conductance            siemens                S        Ω⁻¹
Conductivity           siemens/metre                   S.m⁻¹
Capacitance            farad                  F        C.V⁻¹
Magnetic Flux Linkage  weber                  Wb       V.s
Magnetic Flux Density  tesla                  T        Wb.m⁻²
Inductance             henry                  H        Wb.A⁻¹
The unit siemens is named after Werner von Siemens who in 1847 founded a certain German company
to build telegraph lines utilising his first major invention, the pointer telegraph.

1.1.3 SI units

Although there is a proliferation of units, of which the table above lists only a few, they are all
abbreviations for products of powers of a few basic units. These fundamental units are prescribed in
the SI (Système International d'Unités, also called the International System of Units) and mandated by the
ISO (International Organization for Standardization). These SI units are:
Quantity Unit Symbol
Length metre m
Time second s
Mass kilogram kg
Current ampere A
Temperature kelvin K
Amount of matter mole mol
Luminous intensity candela cd
Notice that the official unit of mass in the SI system is not the gram, but the kilogram.
The mole is used principally for chemicals. One mole of a substance is the amount that contains as
many constituent particles as there are atoms in 12 gram of Carbon-12. This number of molecules is
called Avogadro's number, and is approximately 6 × 10²³. Clearly, a mole of a material has a mass
proportional to that of each particle. A mole of water has a mass of approximately 18 gram, and
occupies about 18 cubic centimetres in normal room conditions. By the way, a cubic metre of water
weighs approximately a tonne (1000 kg).
Luminosity is important to engineers who work with fibre optics and optoelectronics.
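As a rough check of the figures just quoted (a sketch, not part of the lecture text), using approximate values for Avogadro's number and the mass of a water molecule:

N_A = 6.022e23                  # Avogadro's number, particles per mole
m_molecule = 18.0 * 1.661e-27   # mass of one H2O molecule in kg (about 18 atomic mass units)
print(N_A * m_molecule)         # ~0.018 kg: a mole of water is about 18 gram

density = 1.0e-3                # water: about 1e-3 kg per cubic centimetre
print(density * 100**3)         # ~1000 kg in a cubic metre, i.e. about a tonne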

1.1.4 Physical Constants

The next table lists several physical constants of interest. Even if you cannot remember their values,
you must learn their physical meanings, and it is advisable to memorise their orders of magnitude.
For example, Boltzmann's constant is a factor of proportionality between the heat energy of a typical
particle of matter and the temperature of the material that the particles comprise. If you multiply
Boltzmann's constant by Avogadro's number, then you obtain approximately 8.31 J.K⁻¹.mol⁻¹, which is the
gas constant R; this concerns the energy in a mole of an ideal (non-interacting, indefinitely compressible) gas.
Constant                                      Value                       Symbol
Speed of light (celeritas)                    2.9979 × 10⁸ m/s            c
Boltzmann's constant                          1.38 × 10⁻²³ J/K            k
Electron charge                               1.602 × 10⁻¹⁹ C             e
Permittivity of free space                    8.85 × 10⁻¹² F/m            ε₀
Permeability of free space                    4π × 10⁻⁷ H/m               µ₀
Planck's constant                             6.626 × 10⁻³⁴ J.s           h
Newton's universal constant of gravitation    6.67 × 10⁻¹¹ m³.kg⁻¹.s⁻²    G
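The relation between Boltzmann's constant, Avogadro's number and the gas constant mentioned above can be checked directly; a minimal sketch using the values quoted in these notes:

k = 1.38e-23      # Boltzmann's constant, J/K
N_A = 6.022e23    # Avogadro's number, per mole
print(k * N_A)    # ~8.31, the gas constant R in J per kelvin per mole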

1.2 Measurement Categories


No doubt there are many ways to broadly classify measurement procedures. Carr slices them thus:

Direct: Measuring the desired quantity `face-to-face', as it were. For example, using a ruler to mea-
sure the width of a bench.
Indirect: Measuring a second quantity from which the first may be inferred. For example, measuring
the colour of the sun and inferring its surface temperature.
Null: Balancing the desired quantity by a controllable source. For example, adjusting a calibrated
resistor in a Wheatstone bridge until the current flowing between the two sides equals zero, then
calculating the value of the unknown resistance in another of the arms. (See lecture 2 for bridge
circuits; a short numerical sketch of the balance condition follows this list.)
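A short numerical sketch of the null-method balance condition referred to above; the arm labels and component values are illustrative only (the bridge circuit itself is covered in lecture 2):

# Wheatstone bridge at balance (no current through the detector):
# R_unknown / R_standard = R_a / R_b, so R_unknown = R_standard * R_a / R_b.
R_a, R_b = 1000.0, 1000.0    # fixed ratio arms, ohms
R_standard = 4700.0          # calibrated adjustable resistor at balance, ohms
R_unknown = R_standard * R_a / R_b
print(R_unknown)             # 4700.0 ohms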

1.3 Average, Arithmetic Mean, Geometric Mean, Median, Mode


The term average can bear different meanings. In science and engineering, several of these meanings
have been made precise.
The arithmetic mean of a set of quantities is their sum divided by the number of terms in the sum:
e.g., amean(a, b) = (a + b)/2 and amean(a, b, c) = (a + b + c)/3.

The geometric mean of n quantities is the n-th root of their product: gmean(a, b, c) = ∛(a.b.c).
In electronics, physics and mathematics, there are other kinds of mean, such as the theoretical mean
value of a continuous quantity: for example the root mean square (rms) value of a sinusoidal wave-
form, which is defined using an integral formula.
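For reference, a standard form of that integral definition (not quoted verbatim from Carr), written in LaTeX notation:

    V_{\mathrm{rms}} = \sqrt{ \frac{1}{T} \int_{0}^{T} v^{2}(t) \, dt }

For a sinusoid of peak value V_p this works out to V_p / sqrt(2), roughly 0.707 V_p.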
Statistics, a field especially important to measurement (and to manufacturing), utilises the median and
mode. The median of a set of quantities is the value with half of the set of values above it, and half of
them below it. Evidently, the median makes sense only for one-dimensional measurements!
The mode is the most frequent value. There can be more than one mode, especially when the set of
values is small or sparsely spread.
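These definitions can be checked with Python's standard statistics module; the sample values below are invented:

import statistics as st

data = [2.0, 3.0, 3.0, 5.0, 7.0]   # invented sample values

print(st.mean(data))               # arithmetic mean: 4.0
print(st.geometric_mean(data))     # geometric mean: (2*3*3*5*7)**(1/5), about 3.63
print(st.median(data))             # median: 3.0 (the middle value)
print(st.mode(data))               # mode: 3.0 (the most frequent value)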

1.4 Significant Figures
When quoting a value, we should not use either more or fewer figures than its precision justifies. For
example, a mercury thermometer may tell us the temperature to the nearest degree Celsius, but it's
doubtful whether it is manufactured precisely enough, or that we could read it well enough, to claim
that it's now 23.4 degrees!
Also, don't place too much credence in the numbers that your calculator displays. If you've measured
a quantity to two significant figures, for example a temperature of 23 °C, then plugged it into a formula
for the resistance of a temperature-sensitive conductor, then please don't trust the calculator's ten-digit
number! Only the first two digits have any chance of being accurate!
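A small sketch of trimming a calculated result back to the figures the measurement justifies; the helper function and the temperature-coefficient formula are illustrative assumptions, not from Carr:

from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures (hypothetical helper)."""
    return round(x, sig - 1 - floor(log10(abs(x))))

T = 23.0                         # temperature known to only two significant figures
R = 100.0 * (1 + 0.00393 * T)    # invented temperature-sensitive resistance formula
print(R)                         # ~109.039..., more digits than the measurement justifies
print(round_sig(R, 2))           # 110.0: only about two figures are trustworthy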

1.5 Error
1.5.1 Absolute and Relative Errors

Suppose that the precision is better than 1 degree, but worse than 0.1 degree. If we are confident (from
knowledge and observation, not from vanity or imagination) that the temperature is between 23.2 and
23.8 degrees, then we should note the mean value and the error, and record the temperature (in °C) as
the mean plus or minus the error: 23.5 ± 0.3. In more precise terminology, this type of plus-or-minus
error is called the absolute error.
The ratio of the absolute error to the mean quantity is called the relative error. When this is expressed
as a percentage, this ratio is understandably called the percentage error.
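A brief sketch of these definitions applied to the temperature example above:

low, high = 23.2, 23.8                # believed range of the temperature, degrees C
mean = (low + high) / 2               # 23.5
abs_err = (high - low) / 2            # 0.3, the absolute error
rel_err = abs_err / mean              # ~0.013, the relative error
print(f"{mean:.1f} +/- {abs_err:.1f} degrees C")   # 23.5 +/- 0.3
print(f"relative error {100 * rel_err:.1f} %")     # about 1.3 %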

1.5.2 Approximate Rules for Combining Errors

When adding or subtracting two quantities, their absolute errors add. However, when multiplying or
dividing, their relative errors add.
As a rule of thumb, in combining quantities, errors only get worse, not better.
These approximate rules for error calculations assume that the quantities are statistically independent,
so that a large series of measurements of all the quantities shows no correlation between them. In this
situation, plotting one quantity against another shows a pattern that's fairly uniform in all directions
out from the point described by the two mean values.
An exception to the rules for combining errors can occur when quantities are highly correlated: as an
obvious example, irrespective of the errors in our measurement, if T is the current temperature then
T −T is always zero exactly. However, independence is often a reasonable initial working hypothesis.
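A sketch of the two rules of thumb, using invented, independent quantities:

# Independent quantities with absolute errors:
V1, dV1 = 10.0, 0.5    # volts
V2, dV2 = 5.0, 0.2     # volts
I,  dI  = 2.0, 0.1     # amperes

# Adding or subtracting: absolute errors add.
print(V1 + V2, "+/-", dV1 + dV2)     # 15.0 +/- 0.7

# Multiplying or dividing: relative errors add.
rel = dV1 / V1 + dI / I              # 0.05 + 0.05 = 0.10
print(V1 * I, "+/-", V1 * I * rel)   # 20.0 +/- 2.0 (a 10 % relative error)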

1.6 “Goodness” of Measurements


Carr, sections 2-5 to 2-7, pages 31 to 39.
The correctness of measurements is not to be taken for granted. A single measurement may be
untrustworthy for a number of reasons, to do with the source of the measured quantity, the instrument,
the environment, and ourselves.
We can gain a better appreciation of the influences on our measurements by repeating them in iden-
tical, similar, or very different conditions, and at short and long intervals. Terms describing relevant
factors include the following.

Error: The difference between the reported value and the (usually unknown) true value of a quantity.
Validity: How well an instrument (or measurement technique) reflects what it is purported to mea-
sure. Depends on details of the instrument, and varies with the operating conditions.
Robustness: When the input to an instrument varies slightly, does its output stably reflect the changes,
or does it become unstable, or chaotic?
Reliability: Given very different values, or measurements taken at very different times, are the mea-
surements consistent?
Repeatability: Do repeated measurements, on a constant true value, give the same answer?
Accuracy: How close is the mean measurement of a series of trials to the true value?
Precision: How much do the measurements vary from trial to trial?
Resolution: How finely can we and/or the instrument separate one value from another that's close to
it?
Mistake: “Human error”!

Errors are to be expected; they are intrinsic to the physical processes of measurement. Carr,
section 2-7 lists four categories of measurement errors and some subcategories, as follows.

1. Theoretical errors: The explicit or implicit model on which we base our interpretation of our
measurements may be inapplicable or inaccurate.

(a) Range of Validity: A model is applicable only within a limited range of conditions. Be-
yond that, it will give inaccurate predictions.
(b) Approximation: Models have finite precision even within their range of validity. Don't
quote ten significant figures when only two are trustworthy.

2. Static errors: A very broad category!

(a) Reading errors: Due to misreading, or a difficulty in accurately reading, the display of
the instrument.
i. Parallax: Analog meters use a needle as a pointer to indicate the measured value.
Reading this at an oblique angle causes a misreading, known as a parallax reading
error.
ii. Interpolation: The needle often rests between two calibrated marks. Guessing its
position by interpolation is subject to an error that depends on the size of the scale,
and on the visual acuity and experience of the person reading the meter.
iii. Last-digit bobble: Digital readouts are often observed to oscillate between two neigh-
bouring values, for example a digital voltmeter (DVM) may alternately show 3.455
and 3.456 volts. This occurs when the actual value is about midway between the two
displayed values. Small variations in the system under test, or in the meter itself, are
sufficient to change the reading when it is delicately poised between the two values.
(b) Environmental errors: Measurements can be affected by changes in ambient factors.
i. Temperature.
ii. Pressure.
iii. Electromagnetic (EM) fields: Static electric or magnetic fields, dynamic (changing)
fields, and propagating fields (radiation) can interfere with measurements. A particu-
larly common example is the mains electricity supply, which is ideally a sinusoid; in
Australia this is specified to have a frequency of 50 Hz. In reality, mains power is not
a pure sinusoid, so it contributes interference at other frequencies also.
(c) Characteristic errors: Static errors intrinsic to the measuring instrument or process.
Physical limitations and manufacturing quality control are factors in several characteristic
errors. Incorrect calibration can also contribute!
i. Manufacturing Tolerances: Design and manufacturing processes are frequently in-
exact. For example, the calibrated marks on a ruler are not exactly 1.0000 millimetres apart.
Hopefully some will be slightly above and some slightly below, so that over a series
of measurements these errors will be random and so balance out, but they might not
— the errors in the manufacturing process of one or more batches of rulers might be
systematically biased.
ii. Zero Offset: a meter (for example) may read zero when the actual value is nonzero.
This is a common form of calibration error.
iii. Gain error: amplifiers are widely used in instruments such as CRO probes, and we
may trust that “times 10” means precisely what it says only when the amplification
has been carefully calibrated.
iv. Processing error: modern instruments contain complex processing devices such as
analog computers which can introduce errors into the process leading to the displayed
value of a measurement. Digital devices have finite precision (see quantization errors,
below) and are occasionally wrongly programmed: a small programming error often
produces large errors in the results.
v. Repeatability error: instruments change over time, which is why they must be regu-
larly calibrated, just as a car must be serviced. Instruments change, however slightly,
even between consecutive measurements. The act of measurement itself may affect
the instrument, for example spring scales lose some elasticity with every use.
vi. Nonlinearity: ideally, an instrument designed to be linear has an output which is
proportional to its input, but this is only approximately true, and then only within
a range of validity. Drive an amplifier to too high a gain and it will operate in its
nonlinear regions, producing a severely distorted output signal.
vii. Hysteresis: Some measurement systems remember some of their past history, and
produce different results if a different path is taken to the same final set of external
conditions.
viii. Resolution: devices can only resolve (that is, distinguish) values that are sufficiently
separated. For example, optical instruments cannot easily resolve objects less than
one wavelength apart.
ix. Quantization: When analog values are recorded on a digital system (analog to dig-
ital conversion), the values are rounded to the nearest available step.

3. Dynamic errors: Due to the measured quantity changing, or the measured object moving,
during measurement. Carr mentions two kinds of dynamic errors.

(a) Mechanical: Such as the inertia of the needle of an analog meter.


(b) Electronic: For example, a sample and hold circuit with a long time constant, used in an
attempt to record a high frequency sine wave.

4. Insertion errors: We wish to know the values that quantities have in a system when the mea-
suring instrument is absent, but of course we can only measure with the instrument present, and that
constitutes a different system! The values can differ between the two systems, because the
effect of the instrument is never zero, and may not be negligible!

(a) Classical insertion error: An example is the use of a voltmeter with a low resistance
compared to the component or subsystem across which it is connected. Another is an
ammeter with a large resistance compared to the current loop in which it is placed. (A
numerical sketch of this loading effect follows this list.)
(b) Quantum insertion error: The theory of quantum physics places restrictions on the pre-
cision with which certain quantities may simultaneously be measured (Heisenberg Uncer-
tainty Principle). In optics this can be a significant concern. On the other hand, we don't
expect the HUP to worry us much in the Electronic Measurements and Instrumentation
laboratory classes.
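Returning to the classical insertion error of item 4(a): a rough numerical sketch of voltmeter loading on a resistive divider, with invented component values:

Vs = 10.0      # source voltage, volts (all values invented)
R1 = 10e3      # upper divider resistor, ohms
R2 = 10e3      # lower divider resistor, ohms; we measure the voltage across it
Rm = 100e3     # voltmeter input resistance, ohms

V_true = Vs * R2 / (R1 + R2)                  # 5.0 V with no meter connected
R2_loaded = R2 * Rm / (R2 + Rm)               # R2 in parallel with the meter
V_meas = Vs * R2_loaded / (R1 + R2_loaded)    # ~4.76 V with the meter connected
print(V_true, V_meas)                         # the difference is the insertion error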
1.7 Dealing with Errors
Carr, sections 2-8 and 2-9, pages 39 to 41.
To minimise measurement errors, one may improve the procedure, or one may use statistical averag-
ing.
Procedural improvements may include using more accurate instruments, in particular meters that
cause less disturbance to the system being measured, for example a voltmeter with an impedance that
is much higher than that of the circuit under test.
Statistical averaging involves attempting the same measurement on different occasions, or by using
different instruments — for example by measuring the current in a loop using several ammeters in
series and reading them at the same time.
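A minimal sketch of statistical averaging over repeated readings; the readings are invented:

import statistics as st

readings = [10.2, 9.9, 10.1, 10.0, 9.8, 10.3]     # repeated current readings, mA

mean = st.mean(readings)
sem = st.stdev(readings) / len(readings) ** 0.5   # standard error of the mean
print(f"{mean:.2f} +/- {sem:.2f} mA")             # about 10.05 +/- 0.08 mA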

References
It is intended that the lecture and laboratory notes suffice when preparing for this subject's examina-
tion.
If you wish to investigate any of the topics further, any of a variety of related texts may be found
useful, but for the record, the lecturing material is largely based on material from these texts:

1. Elements of Electronic Instrumentation and Measurement, Third Edition, by Joseph J. Carr,
published by Prentice-Hall, 1996.
2. Principles of Electronic Instrumentation and Measurement, by Howard M. Berlin and Frank C.
Getz, Jr., published by Macmillan, 1988.
3. Modern Dictionary of Electronics, Sixth Edition, by Rudolf F. Graf, published by Howard W.
Sams & Co., Inc., in the early 1980's.

We will cite Carr's work the most often, but its text is filled with egregious typographical errors
despite having had three proof-readers.
Berlin and Getz's book is commendable but appears to be out of print.
As with all dictionaries, the brief definitions given in Graf's do not do justice to the terms. With-
out some prior acquaintance, it's easy to get the wrong impression. Also, more relevant to current
technology would be Graf's Seventh Edition, published in 1999.
