M&I - Unit 1 - Basics of Measurements and Instruments
Content :
1. Functional elements of an instrument
2. Static characteristics
3. Dynamic characteristics
4. Errors in measurements
5. Statistical evaluation of measurement data
6. Direct and indirect measurement methods
7. Classification of instruments
8. Standards and calibration
Introduction
MEASUREMENTS:
• The measurement of a given quantity is essentially an act or the result of
comparison between the quantity (whose magnitude is unknown) & a predefined
standard. Since two quantities are compared, the result is expressed in numerical
values.
• Measurement (metrology) is the science of determining the values of physical
variables.
MEASURING INSTRUMENT:
• Device for determining the value or magnitude of a quantity or variable
• Physical quantity: variable such as pressure, temperature,
mass, length, etc.
• Data: Information obtained from the
instrumentation/measurement system as a result of the
measurements made of the physical quantities
• Information: Data that has a calibrated numeric relationship to
the physical quantity.
• Parameter: Physical quantity within defined (numeric) limits.
• Expression for the precision of the nth measurement:
P = 1 - |Xn - X̄n| / X̄n
• Xn - value of the nth measurement.
• X̄n - average of the set of measured values.
Pbm: Find the precision of the 3rd measurement from the table given below.

Measurement number    Value of measurement
1                     49
2                     51
3                     52
4                     50
5                     49

Average of the set: X̄n = (49 + 51 + 52 + 50 + 49) / 5 = 50.2
Value of the 3rd measurement = 52, where n = 3
P = 1 - |Xn - X̄n| / X̄n = 1 - |52 - 50.2| / 50.2 = 0.9642 = 96.4%
Pbm: Find the precision of the 7th measurement when a voltmeter is used in
an application; the measured values are given in the table.
Answer: P = 0.7143 = 71%
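The precision calculation used in the two problems above can be checked with a short script. A minimal sketch in Python, using the readings from the first table; the function name is an illustrative assumption, not part of the original notes.

```python
# Minimal sketch: precision of the nth measurement, P = 1 - |Xn - X_avg| / X_avg
# (readings taken from the first worked example; function name is illustrative)

def precision(readings, n):
    """Precision of the nth reading (1-indexed) relative to the set average."""
    x_avg = sum(readings) / len(readings)   # average of the set of measured values
    x_n = readings[n - 1]                   # value of the nth measurement
    return 1 - abs(x_n - x_avg) / x_avg

values = [49, 51, 52, 50, 49]
print(round(precision(values, 3), 4))       # 0.9641, i.e. about 96.4 %
```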
4. Repeatability:
•Repeatability is the degree of closeness with which a given value may be
repeatedly measured.
•It is the closeness of output readings when the same input is applied
repetitively over a short period of time.
•The measurement is made on the same instrument, at the same location,
by the same observer and under the same measurement conditions.
• It may be specified in terms of units for a given period of time.
5. Reproducibility:
• Reproducibility relates to the closeness of output readings for the same input
when there are changes in the method of measurement, observer, measuring
instrument location, conditions of use and time of measurement.
•Perfect reproducibility means that the instrument has no drift. Drift means
that with a given input the measured values vary with time.
10.Stability
• The ability of an instrument to retain its performance throughout its
specified storage life and operating life is called stability.
11.Linearity
• Linearity is defined as the ability of an instrument to reproduce its
input linearly.
• Linearity is simply a measure of the maximum deviation of the
calibration points from the ideal straight line.
• Linearity = (maximum deviation of output from the idealized straight line) /
(actual reading)
• Non-linearity is defined as the maximum deviation of the output from the
straight line.
Linearity is expressed in many different ways:
• i) Independent linearity: It is the maximum deviation from the straight line
so placed as to minimize the maximum deviation.
• ii) Zero based linearity: It is the maximum deviation from the
straight line joining the origin and so placed as to minimize the
maximum deviation.
• iii) Terminal based linearity: It is the maximum deviation from the
straight line joining both the end points of the curve.
• Linearity of the output-input relation is one of the best characteristics of
a measurement system, because of the convenience of scale reading.
• Lack of linearity thus does not necessarily degrade sensor
performance. If the nonlinearity can be modelled and an
appropriate correction applied to the measurement before it is
used for monitoring and control, the effect of the non-linearity can
be eliminated.
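As a small numerical illustration of the terminal-based definition above, the sketch below draws a straight line through the two end points of a calibration curve and reports the maximum deviation of the output from that line. The calibration data are assumed purely for illustration.

```python
# Sketch: terminal-based non-linearity = maximum deviation of the output from
# the straight line joining the two end points of the calibration curve.
# The calibration points below are assumed for illustration only.

inputs  = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [0.00, 1.05, 2.15, 3.10, 4.00]   # slightly non-linear response

# Straight line joining the first and last calibration points.
slope = (outputs[-1] - outputs[0]) / (inputs[-1] - inputs[0])
line = [outputs[0] + slope * (x - inputs[0]) for x in inputs]

max_deviation = max(abs(y - y_line) for y, y_line in zip(outputs, line))
print(f"maximum deviation from the terminal straight line: {max_deviation:.2f}")
```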
12.Range or Span
• The region between the limits with in which an instrument is
designed to operate for measuring, indicating or recording a
physical quantity is called the range of the instruments.
• The Scale Range of an instrument is thus defined as the difference
between the largest and the smallest reading of the instrument.
• Span = Xmax - Xmin
• For example, for a thermometer calibrated between 100 °C and 400 °C, the
range is 100 °C to 400 °C, but the span is 400 - 100 = 300 °C.
13.Bias
• The constant error which exists over the full range of measurement of
an instrument is called bias.
• Such a bias can be completely eliminated by calibration.
• The zero error is an example of bias which can be removed by
calibration
14.Tolerance
• Tolerance is the maximum allowable error that is specified for a measurement.
• It specifies the maximum allowable deviation of a manufactured device
from a stated value.
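A minimal sketch of a tolerance check, assuming a nominal value and a symmetric percentage tolerance band; all numbers are illustrative, not from the notes.

```python
# Sketch: checking a manufactured component against a symmetric tolerance band.
# Nominal value, tolerance and measured value are assumed for illustration.

nominal   = 100.0    # stated value (e.g. resistance in ohms)
tolerance = 0.05     # maximum allowable deviation: +/- 5 %
measured  = 103.2

deviation = abs(measured - nominal) / nominal
print("within tolerance" if deviation <= tolerance else "out of tolerance")
```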
15.True value
• The true value of a variable quantity being measured may be defined as the
average of an infinite number of measured values, when the average deviation
due to the various contributing factors tends to zero.
• Such an ideal situation is impossible to realize in practice, and hence it is
not possible to determine the true value of a quantity by experimental means.
• The reason for this is that there are several contributing factors such as
lags, loading effects, wear and noise pick-up. Normally an experimenter can
never know whether the value being measured by experimental means is the true
value of the quantity or not.
16.Hysteresis
• Hysteresis is a phenomenon which depicts different
output effects while loading and unloading.
• Hysteresis takes place due to the fact that all the
energy put into the stressed parts when loading is not
recoverable while unloading.
• When the input of an instrument is varied from zero to
its full scale and then if the input is decreased from its
full scale value to zero, the output varies.
• The output at a particular input differs while increasing and while
decreasing because of internal friction or hysteretic damping.
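A small numerical illustration of hysteresis: the same input points are read once with increasing input (loading) and once with decreasing input (unloading), and the maximum difference between the two curves is reported. The readings are assumed for illustration.

```python
# Sketch: maximum hysteresis error from up-scale and down-scale readings taken
# at the same input points (all readings are assumed for illustration).

inputs      = [0, 25, 50, 75, 100]                 # % of full scale
output_up   = [0.0, 24.2, 49.0, 74.1, 100.0]       # loading (input increasing)
output_down = [0.0, 26.0, 51.2, 75.8, 100.0]       # unloading (input decreasing)

hysteresis = max(abs(up - down) for up, down in zip(output_up, output_down))
print(f"maximum hysteresis: {hysteresis:.1f} % of full scale")
```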
Pbm: A PMMC type voltmeter, having a full-scale reading of 250 V
and internal resistance of 400 kilo-ohms, is connected with the series
resistance of 100 kilo-ohms. Calculate the sensitivity of the voltmeter
(in Ohms/Volts).
Sensitivity (S) = (Rm + Rs) / Vfsd = (400,000 + 100,000) / 250 = 2000 Ω/V
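The same calculation as a short sketch; the values come from the problem statement above and the variable names are illustrative.

```python
# Sketch: sensitivity of a PMMC voltmeter, S = (Rm + Rs) / Vfsd
# (values from the problem above; variable names are illustrative)

R_m   = 400e3    # internal resistance of the meter, ohms
R_s   = 100e3    # series resistance, ohms
V_fsd = 250.0    # full-scale deflection voltage, volts

sensitivity = (R_m + R_s) / V_fsd
print(f"sensitivity = {sensitivity:.0f} ohms/volt")   # 2000 ohms/volt
```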
Random errors:
• Some errors still result, though the systematic and instrumental
errors are reduced or at least accounted for. The causes of such
errors are unknown and hence the errors are called random errors.
• Ways to minimize this error: the only way to reduce these errors is by
increasing the number of observations and using statistical methods to obtain
the best approximation of the reading.
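A minimal sketch of the statistical treatment mentioned above: repeated observations are averaged to obtain the best approximation of the reading, and their standard deviation indicates the spread due to random errors. The sample readings are assumed for illustration.

```python
# Sketch: reducing random error by averaging repeated observations.
# The readings below are assumed for illustration only.
import statistics

readings = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 9.99, 10.02]

mean  = statistics.mean(readings)    # best approximation of the true reading
stdev = statistics.stdev(readings)   # spread caused by random errors

print(f"mean = {mean:.3f}, standard deviation = {stdev:.3f}")
```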
Static Error
• Static error is defined as the difference between the best
measured value and the true value of the quantity. Then:
• Es = Am - At
• Where, Es = error, Am = measured value of quantity, and
At = true value of quantity.
• Es is also called the absolute static error of quantity A.
• The absolute value of error does not indicate precisely the
accuracy of measurement.
• For example, an error of 2 A is negligible when the current being measured
is of the order of 1000 A, while the same error is highly significant if the
current under measurement is 10 A.
• Thus another term relative static error is introduced.
• The relative static error is the ratio of the absolute static error to the
true value of the quantity under measurement. Thus the relative static error
Er is given by:
• Er = Es / At
• Percentage static error: %Er = Er × 100
• Static Correction
• It is the difference between the true value and the measured value
of the quantity, or
• δC = At - Am
Pbm1: A meter reads 115.50 V and the true value of the voltage is
115.44 V. Determine the static error, and the static correction for
this instrument.
Solution: The error is: Es = Am - At = 115.50 - 115.44 = +0.06 V
Static correction δC = At - Am = -0.06 V.
Pbm2: A thermometer reads 71.50 C and the static correction given is
+0.50 C. Determine the true value of the temperature.
Solution: True value of the temperature,
At = Am + δC = 71.5+ 0.5 = 72.00 C.
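Both worked problems can be verified with a short sketch of the relations Es = Am - At, Er = Es / At and δC = At - Am; the values are taken from Pbm1 and the variable names are illustrative.

```python
# Sketch: static error, relative static error and static correction
# (values from Pbm1 above; variable names are illustrative)

A_m = 115.50    # measured value, volts
A_t = 115.44    # true value, volts

E_s     = A_m - A_t      # absolute static error
E_r     = E_s / A_t      # relative static error
delta_C = A_t - A_m      # static correction

print(f"Es = {E_s:+.2f} V, %Er = {E_r * 100:.3f} %, dC = {delta_C:+.2f} V")
```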
Direct measurement
• The quantity to be measured is determined directly.
• Example – Measure distance by scale
• With direct measurements, measuring instruments such as Vernier
calipers, micrometers, and coordinate measuring machines are used to
measure the dimensions of the target directly.
• These measurements are also known as absolute measurements.
• Measurements can be performed over a wide range specified by the
scale of the measuring instrument, but there is also the chance that the
measurement will be wrong due to erroneous readings of the scale.
Indirect measurement
• The quantity to be measured is not measured directly. But other
related parameter is measured and inference is drawn from there.
• Example – Measure distance by optical method where we use
telescope to calculate distance.
• With indirect measurements, the dimensions are measured using
measuring instruments such as dial gauges that look at the difference
between targets and reference devices such as gauge blocks and ring
gauges.
• These are also known as comparative measurements due to the fact
that a comparison is performed using an object with standard
dimensions.
• The more predetermined the shape and dimensions of a reference device are,
the easier the measurement becomes.
• However, this method also has the disadvantage of the measurement
range being limited.
• To measure the length of a bar, the unit of length is taken as
meter in SI unit. A human being can make direct length
comparisons with a preciseness of about 0.25 mm.
• The direct method for measurement of length can be utilized
with a good degree of accuracy but when it comes to
measurement of mass, the problem becomes much more
intricate.
• It is just not possible for human beings to distinguish
between wide margins of mass.
• The indirect methods of measurement consist of a transducing element, which
converts the quantity to be measured into an analog form.
• The analog signal is then processed by some intermediate
means and is then fed to the end devices, which present the
results of the measurement.
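The chain just described (transducing element, intermediate processing, end device) can be pictured with a very small conceptual sketch. The linear transducer relation and the gain used here are purely assumed for illustration and do not come from the notes.

```python
# Conceptual sketch of an indirect measurement chain:
# transducer -> intermediate processing -> end device.
# The linear transducer law and gain below are assumed for illustration only.

def transducer(temperature_c):
    """Convert the measured quantity into an analog signal (volts)."""
    return 0.01 * temperature_c        # assumed 10 mV per degree C

def signal_conditioning(voltage):
    """Intermediate means: amplify the analog signal."""
    return 100.0 * voltage             # assumed gain of 100

def end_device(conditioned_voltage):
    """Present the result in terms of the original quantity."""
    return conditioned_voltage / 1.0   # 1 V corresponds to 1 degree C here

indicated = end_device(signal_conditioning(transducer(25.0)))
print(f"indicated temperature: {indicated:.1f} degC")
```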
Classification of Instruments
• An instrument is a device in which we can determine the magnitude or value
of the quantity to be measured. The measuring quantity can be voltage,
current, power and energy etc.
• Generally instruments are classified in to two categories
1. Absolute instrument
• An absolute instrument determines the magnitude of the quantity to be
measured in terms of the instrument parameters.
• Such instruments are rarely used, because each time the value of the
measured quantity varies, its magnitude has to be calculated analytically,
which is time consuming.
• These types of instruments are suitable for laboratory use.
• Example: Tangent galvanometer.
2. Secondary instrument
• This instrument determines the value of the quantity to be measured directly.
• Generally these instruments are calibrated by comparing them with another
standard secondary instrument.
• Examples of such instruments are the voltmeter, ammeter, wattmeter, etc.
• Practically, secondary instruments are suitable for measurement.
2.1 Indicating instrument
• This instrument uses a dial and pointer to determine the
value of measuring quantity.
• The pointer indication gives the magnitude of measuring
quantity.
2.2 Recording instrument
• This type of instrument records the magnitude of the quantity to be
measured continuously over a specified period of time.
2.3 Integrating instrument
• This type of instrument gives the total amount of the
quantity to be measured over a specified period of time.
2.4 Electromechanical indicating instrument
• For satisfactory operation of an electromechanical indicating instrument,
three forces are necessary.
• They are
• (a) Deflecting force
• (b) Controlling force
• (c)Damping force
2.4.1 Deflecting force
• When there is no input signal to the instrument, the pointer will
be at its zero position.
• To deflect the pointer from its zero position, a force is necessary
which is known as deflecting force.
• A system which produces the deflecting force is known as a
deflecting system. Generally a deflecting system converts an
electrical signal to a mechanical force.
2.4.1 Magnitude effect
• When a current passes through the coil (Fig.1.2), it produces an imaginary
bar magnet.
• When a soft-iron piece is brought near this coil it is magnetized.
Depending upon the current direction the poles are produced in
such a way that there will be a force of attraction between the
coil and the soft iron piece.
• This principle is used in moving iron attraction type instrument.
• If two soft iron pieces are placed near a current carrying coil, there will
be a force of repulsion between the two soft iron pieces.
• This principle is utilized in the moving iron repulsion type
instrument.
2.4.2 Force between a permanent magnet and a current-carrying coil
When a current-carrying coil is placed under the influence of the magnetic
field produced by a permanent magnet, a force is produced between them. This
principle is utilized in the moving coil type instrument.
(Figure: pointer deflection θ.)