
18ECC203T MEASUREMENT AND INSTRUMENTATION

Unit I: BASICS OF MEASUREMENTS AND INSTRUMENTS

Content:
1. Functional elements of an instrument
2. Static characteristics
3. Dynamic characteristics
4. Errors in measurements
5. Statistical evaluation of measurement data
6. Direct and indirect measurement methods
7. Classification of instruments
8. Standards and calibration
Introduction
MEASUREMENTS:
• The measurement of a given quantity is essentially an act or the result of
comparison between the quantity (whose magnitude is unknown) & a predefined
standard. Since two quantities are compared, the result is expressed in numerical
values.
• Measurement (Metrology) is the science of determining values of physical
variables.

BASIC REQUIREMENTS OF MEASUREMENT:


• i) The standard used for comparison purposes must be accurately defined &
should be commonly accepted
• ii) The apparatus used & the method adopted must be provable.

MEASURING INSTRUMENT:
• Device for determining the value or magnitude of a quantity or variable
• Physical quantity: variable such as pressure, temperature,
mass, length, etc.
• Data: Information obtained from the
instrumentation/measurement system as a result of the
measurements made of the physical quantities
• Information: Data that has a calibrated numeric relationship to
the physical quantity.
• Parameter: Physical quantity within defined (numeric) limits.

• Why we need measurement


• To improve the quality of the product
• To improve the efficiency of production
• To maintain the proper operation
• Measured Value: Any value or any reading calculated from measurement
system
or measuring instrument.
• True value: Any value calculated from the rated value is known as the True
Value or Actual Value.
e.g. Motor actual speed

• Error: Any deviation of the measured value from the true value.

Error = Measured Value - True Value
Methods of Measurement
Two methods
1. Direct Method
2. Indirect method

Direct method: The unknown quantity (measurand) is directly compared against a
standard. The result is expressed as a numerical value and a unit.
Direct methods are common for the measurement of physical quantities like
length, mass and time.

Two classifications: Deflection methods and Comparison methods

Deflection methods: These involve the deflection of a pointer on a scale due to
the quantity to be measured. Eg: Wattmeter, ammeter, voltmeter.

Comparison methods: These involve the comparison of the quantity under
measurement with a pre-defined standard quantity. Eg: Potentiometer.
Indirect method: In this method the comparison is done with a standard through
the use of a calibration system. These methods are used in those cases where the
desired parameter cannot be measured directly. Eg: Acceleration, power
Functional elements of an instrument
• Measurement systems contain four main functional elements. They are:
a. Primary sensing element
b. Data conditioning elements
   1. Variable conversion element
   2. Variable manipulation element
c. Data transmission element
d. Data presentation element
Primary sensing element:
• The quantity under measurement makes its first contact with the primary
sensing element of a measurement system.
• i.e., the measurand (the unknown quantity which is to be measured) is first
detected by the primary sensor, which gives the output in a different analogous
form.
• This output is then converted into an electrical signal by a transducer
(which converts energy from one form to another).
• The first stage of a measurement system is known as the ‘detector-transducer
stage’.
Figure: Generalized instrument system
Variable conversion element:
• The output of the primary sensing element may be an electrical signal of any
form; it may be a voltage, a frequency or some other electrical parameter.
• For the instrument to perform the desired function, it may be necessary
to convert this output to some other suitable form.
Variable manipulation element:
• The function of this element is to manipulate the signal presented to it
while preserving the original nature of the signal.
• A variable manipulation element need not follow the variable conversion
element. Non-linear processes like modulation, detection, sampling, filtering,
chopping etc., are performed on the signal to bring it to the desired form to
be accepted by the next stage of the measurement system.
• This process of conversion is called ‘signal conditioning’.
• The term signal conditioning includes many other functions in addition to
variable conversion & variable manipulation.
• In fact, the element that follows the primary sensing element in any
instrument or measurement system is called the ‘conditioning element’.
Data transmission element:
• When the elements of an instrument are physically separated, it becomes
necessary to transmit data from one to another.
• The element that performs this function is called a ‘data transmission
element’.
Data presentation element:
• The information about the quantity under measurement has to be conveyed to
the personnel handling the instrument or the system for monitoring, control,
or analysis purposes.
• This function is performed by the data presentation element. In case the
data is to be monitored, visual display devices are needed; these devices may
be analog or digital indicating instruments like ammeters, voltmeters etc.
• In case the data is to be recorded, recorders like magnetic tapes, high
speed camera & TV equipment, CRT, printers may be used.
• For control & analysis purposes, microprocessors or computers may be used.
The final stage in a measurement system is known as the ‘terminating stage’.
Static and Dynamic characteristics
• The performance characteristics of an instrument are
mainly divided into two categories:
i) Static characteristics
ii) Dynamic characteristics
• Static characteristics:
• The set of criteria defined for the instruments, which are
used to measure the quantities which are slowly varying
with time or mostly constant, i.e., do not vary with time, is
called ‘static characteristics’ .
1. Accuracy 2. Sensitivity 3. Precision 4. Reproducibility
5. Repeatability 6. Drift 7. Static error 8. Dead zone
9. Threshold 10. Stability 11. Linearity 12. Range or Span
13. Bias 14. Tolerance 15. True value 16. Hysteresis
1. Accuracy
• It is the degree of closeness with which an instrument reading approaches
the true value of the quantity being measured.
• The accuracy of a measurement indicates the nearness to the
actual/true value of the quantity.
• The accuracy can be expressed in following ways:
• a) Point accuracy:
– Such an accuracy is specified at only one particular point of scale. It does
not give any information about the accuracy at any other point on the
scale.
• b) Accuracy as percentage of scale span:
– When an instrument has a uniform scale, its accuracy may be expressed in
terms of the scale range.
• c) Accuracy as percentage of true value:
– The best way to conceive the idea of accuracy is to specify it in terms of
the true value of the quantity being measured.
2.Sensitivity
• The sensitivity denotes the smallest change in the measured
variable to which the instrument responds. It is defined as the
ratio of the changes in the output of an instrument to a
change in the value of the quantity to be measured.
• if the calibration curve is linear, as shown, the sensitivity of
the instrument is the slope of the calibration curve. If the
calibration curve is not linear as shown, then the sensitivity
varies with the input.
• The manufacturers often specify the ratio of the magnitude of the measured
quantity to the magnitude of the response.
• This ratio is called the inverse sensitivity or deflection factor.
• Inverse sensitivity or deflection factor is defined as the reciprocal of
sensitivity.
• Inverse sensitivity or deflection factor = 1 / sensitivity
i) A sensitive instrument can quickly detect a small change in measurement.
ii) Measuring instruments that have smaller scale parts are more sensitive.
iii) Sensitive instruments need not necessarily be accurate.
Problem:
A particular ammeter requires a change of 2A in its
coil to produce a change in deflection of the pointer
by 5mm. Determine the sensitivity and deflection
factor.
Solution: Input is current while the output is
deflection
Sensitivity = Change in output/ change in input
= 5mm/ 2A = 2.5mm/A
Deflection Factor = 1/ sensitivity = 1/2.5 = 0.4 A/mm
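The arithmetic of the worked example above can be sketched in a few lines of Python (a minimal illustration; the function name is mine, not from the text):

```python
# Sensitivity and deflection factor for the ammeter example above:
# a 2 A change in coil current moves the pointer by 5 mm.

def sensitivity(delta_output, delta_input):
    """Sensitivity = change in output / change in input."""
    return delta_output / delta_input

s = sensitivity(5.0, 2.0)        # output in mm, input in A
deflection_factor = 1.0 / s      # reciprocal of sensitivity

print(s)                  # 2.5 (mm/A)
print(deflection_factor)  # 0.4 (A/mm)
```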
3. Precision
• Precision is a measure of the consistency or repeatability of measurement,
given a fixed value of a variable.
• It denotes the closeness with which individual measurements are distributed
about the average of a number of measured values.
• For example, consider an instrument on which readings can be taken up to
1∕100th of a unit, but which has a zero adjustment error.
• When we take readings, the instrument is highly precise; however, because of
the zero adjustment error, the readings obtained are precise but not accurate.
• Thus, when a set of readings shows precision, the results agree among
themselves. However, it is not essential that the results are accurate.
• A precise instrument may not be accurate.
• Precision is composed of two characteristics
– Conformity
– Number of significant figures

• Conformity: The ability of an instrument to produce the same reading; it is
the degree of similarity between the individual measurements.
• It is also related to repeatability and reproducibility, and to the number
of significant figures.

• Expression:

P = 1 - |Xn - X̄| / X̄

where
Xn - value of the nth measurement
X̄ - average of the set of measured values
Pbm: Find the precision of the 3rd measurement from the table given.

Measurement number   Value of measurement
1                    49
2                    51
3                    52
4                    50
5                    49

X̄ = (49 + 51 + 52 + 50 + 49) / 5 = 50.2
Value of 3rd measurement = 52, where n = 3

P = 1 - |52 - 50.2| / 50.2 = 0.9641 = 96.4%
Pbm: Find the precision of the 7th measurement when a voltmeter is used in an
application; the measured values are given in the table.

Measurement number   Voltage measured (V)
1                    23
2                    27
3                    26
4                    28
5                    27
6                    28
7                    27
8                    24

X̄ = (23 + 27 + 26 + 28 + 27 + 28 + 27 + 24) / 8 = 26.25

P = 1 - |27 - 26.25| / 26.25 = 0.9714 = 97.1%
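Both precision problems above apply the same formula; a small sketch (the helper name is my own):

```python
def precision(readings, n):
    """P = 1 - |x_n - mean| / mean for the n-th reading (1-indexed)."""
    mean = sum(readings) / len(readings)
    return 1 - abs(readings[n - 1] - mean) / mean

# First table: mean = 50.2, 3rd reading = 52
print(round(precision([49, 51, 52, 50, 49], 3), 4))             # 0.9641
# Second table: mean = 26.25, 7th reading = 27
print(round(precision([23, 27, 26, 28, 27, 28, 27, 24], 7), 4)) # 0.9714
```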
4. Repeatability:
•Repeatability is the degree of closeness with which a given value may be
repeatedly measured .
•It is the closeness of output readings when the same input is applied
repetitively over a short period of time.
•The measurement is made on the same instrument, at the same location,
by the same observer and under the same measurement conditions.
• It may be specified in terms of units for a given period of time
5. Reproducibility:
• Reproducibility relates to the closeness of output readings for the same input
when there are changes in the method of measurement, observer, measuring
instrument location, conditions of use and time of measurement.

•Perfect reproducibility means that the instrument has no drift. Drift means
that with a given input the measured values vary with time.

• Reproducibility and repeatability are measures of the closeness with which a
given input may be measured over and over again. The two terms cause
confusion.
• Reproducibility is specified in terms of scale readings over a given period
of time.
• Repeatability, on the other hand, is defined as the variation of scale
reading and is random in nature.
6. Drift:
• Drift is a departure in the output of the instrument over the period of time.
• An instrument is said to have no drift if it produces same reading at different
times for the same variation in the measured variable.
• Drift is unrelated to the operating conditions or load.
• The following factors could contribute towards the drift in the instruments:
i) Wear and tear
ii) Mechanical vibrations
iii) Stresses developed in the parts of the instrument
iv) Temperature variations
v) Stray electric and magnetic fields
vi) Thermal emf
• Drift can occur in the flow meters due to wear of nozzle or venturi. It may
occur in the resistance thermometer due to metal contamination etc.
Drift may be classified into three categories:
a) Zero drift: Drift is called zero drift if the whole of instrument calibration shifts
over by the same amount. It may be due to shifting of pointer or permanent set.
b) Span drift: If the calibration from zero upwards changes proportionately it is
called span drift. It may be due to the change in spring gradient.
c) Zonal drift: When the drift occurs only over a portion of the span of the
instrument it is called zonal drift.
• Drift is an undesirable quality in industrial instruments because it is
rarely apparent and cannot be easily compensated for.
• Thus, it must be carefully guarded against by continuous inspection and
maintenance. Stray electric and magnetic fields can be prevented from
affecting the measurements by proper shielding.
• Effect of mechanical vibrations can be minimized by having
proper mountings.
• Temperature changes during the measurement process
should be preferably avoided or otherwise be properly
compensated for.
7.Static error
• It is the deviation from the true value of the
measured variable.
• It involves the comparison of an unknown
quantity with an accepted standard quantity.
• The degree to which an instrument approaches its expected value is expressed
in terms of the error of measurement.
8. Dead zone
• It is the largest change of input quantity for which there is no output.
• For e.g., the input that is applied to an instrument may not be sufficient
to overcome friction.
• The instrument will respond only when it overcomes the friction forces.
9. Threshold
•Threshold is the smallest measurable input, below
which no output change can be identified.
• While specifying threshold, manufacturers give the first detectable output
change.

10.Stability

The ability of an instrument to retain its performance throughout its
specified storage life and operating life is called as Stability.
11.Linearity
• Linearity is defined as the ability of an instrument to reproduce its
input linearly.
• Linearity is simply a measure of the maximum deviation of the
calibration points from the ideal straight line.
• Linearity = maximum deviation of output from the idealized straight line ∕
actual reading
• Non-linearity is defined as the maximum deviation of the output from the
straight line.
Linearity is expressed in many different ways:
• i) Independent linearity: It is the maximum deviation from the straight line
so placed as to minimize the maximum deviation.
• ii) Zero based linearity: It is the maximum deviation from the
straight line joining the origin and so placed as to minimize the
maximum deviation.
• iii) Terminal based linearity: It is the maximum deviation from the
straight line joining both the end points of the curve.
• Linearity of the output-input relation is one of the best characteristics of
a measurement system, because of the convenience of scale reading.
• Lack of linearity thus does not necessarily degrade sensor
performance. If the nonlinearity can be modelled and an
appropriate correction applied to the measurement before it is
used for monitoring and control, the effect of the non-linearity can
be eliminated.
12.Range or Span
• The region between the limits with in which an instrument is
designed to operate for measuring, indicating or recording a
physical quantity is called the range of the instruments.
• The Scale Range of an instrument is thus defined as the difference
between the largest and the smallest reading of the instrument.
• Span = Xmax - Xmin
• For example, for a thermometer calibrated between 100 °C and 400 °C, the
range is 100 °C to 400 °C, but the span is 400 - 100 = 300 °C.

13.Bias
• The constant error which exists over the full range of measurement of
an instrument is called bias.
• Such a bias can be completely eliminated by calibration.
• The zero error is an example of bias which can be removed by
calibration
14.Tolerance
• Tolerance is the maximum allowable error in a measurement, specified as a
definite value.
• It specifies the maximum allowable deviation of a manufactured device from a
stated value.

15.True value
• The true value of variable quantity being measured may be defined as
the average of an infinite number of measured values
• when the average deviation due to the various contributing factors tends to
zero. Such an ideal situation is impossible to realize in practice, and hence
it is not possible to determine the true value of a quantity by experimental
means.
• The reason for this is that there are several factors such as lags, loading
effects, wear or noise pick-up etc. Normally an experimenter would never know
whether the value being measured by experimental means is the true value of
the quantity or not.
16.Hysteresis
• Hysteresis is a phenomenon which depicts different
output effects while loading and unloading.
• Hysteresis takes place due to the fact that all the
energy put into the stressed parts when loading is not
recoverable while unloading.
• When the input of an instrument is varied from zero to
its full scale and then if the input is decreased from its
full scale value to zero, the output varies.
• The output at the particular input while increasing and
decreasing varies because of internal friction or
hysteric damping.
Pbm: A PMMC type voltmeter, having a full-scale reading of 250 V
and internal resistance of 400 kilo-ohms, is connected with the series
resistance of 100 kilo-ohms. Calculate the sensitivity of the voltmeter
(in Ohms/Volts).
Sensitivity (S) = (Rm + Rs) ⁄ Vfld

Rm = 400 kΩ, Rs = 100 kΩ, Vfld = 250 V
S = (400 + 100) kΩ ⁄ 250 V
S = (500 × 10³) ⁄ 250
S = 2000 ohm/volt
• Determine the value of current (in mA) required for the full-scale
deflection of a voltmeter when the sensitivity of the voltmeter is
125 Ohms/Volt.

The sensitivity of a voltmeter is given in ohms per volt. It is determined by
dividing the sum of the resistance of the meter (Rm) and the series resistance
(Rs) by the full-scale reading in volts. In equation form, sensitivity is
expressed as follows:
Sensitivity (S) = (Rm + Rs) ⁄ Vfld
Since ohm/volt = 1/(volt/ohm) = 1/ampere, the full-scale deflection current is
the reciprocal of the sensitivity:
Ifsd = 1/S = 1/125 = 0.008 A = 8 mA
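Both voltmeter problems above can be checked with a short sketch (the function name is assumed, not from the text):

```python
def voltmeter_sensitivity(r_meter, r_series, v_full_scale):
    """S = (Rm + Rs) / Vfsd, in ohms per volt."""
    return (r_meter + r_series) / v_full_scale

# Pbm 1: Rm = 400 kΩ, Rs = 100 kΩ, full-scale 250 V
print(voltmeter_sensitivity(400e3, 100e3, 250))  # 2000.0 ohm/volt

# Pbm 2: S = 125 Ω/V, so full-scale current Ifsd = 1/S
print((1 / 125) * 1e3)  # full-scale current in mA
```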
Dynamic characteristics
• The set of criteria defined for instruments which measure quantities that
change rapidly with time is called ‘dynamic characteristics’.
• The various dynamic characteristics are:
i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error
• Speed of response:
It is defined as the rapidity with which a measurement system
responds to changes in the measured quantity.
• Measuring lag:
It is the retardation or delay in the response of a measurement system to
changes in the measured quantity. The measuring lags are of two types:

• a) Retardation type:
In this case the response of the measurement system begins
immediately after the change in measured quantity has
occurred.
• b)Time delay lag:
In this case the response of the measurement system begins
after a dead time after the application of the input.
• Fidelity:
It is defined as the degree to which a measurement system
indicates changes in the measurand quantity without dynamic error.
• Dynamic error:
It is the difference between the true value of the quantity
changing with time & the value indicated by the measurement system
if no static error is assumed. It is also called measurement error.
Errors in measurements

• An error may be defined as the difference between the


measured value and the actual value.
• For example, if two operators use the same device or instrument for a
measurement, it is not necessary that they get identical results.
• There may be a difference between both measurements. The difference that
occurs between the two measurements is referred to as an ERROR.
Types of Errors in Measurement System
1. Gross Errors
2. Blunders
3. Measurement Errors
Gross errors are caused by mistake in using instruments
or meters, calculating measurement and recording data
results.

The best example of these errors is a person or operator reading a pressure
gauge of 1.01 N/m² as 1.10 N/m².
1. Gross Errors:
• The gross errors mainly occur due to carelessness or lack of experience of a
human being.
• These errors also occur due to incorrect adjustments of instruments.
• These errors cannot be treated mathematically.
• These errors are also called ‘personal errors’.
• Ways to minimize gross errors:
• The complete elimination of gross errors is not possible but one can
minimize them by the following ways:
• Taking great care while taking the reading, recording the reading &
calculating the result.
• Without depending on only one reading, at least three or more readings
must be taken preferably by different persons.
2. Blunders
• Blunders are a final source of errors; they are caused by faulty recording,
writing down a wrong value while recording a measurement, misreading a scale,
or forgetting a digit while reading a scale.
• These blunders should stick out like sore thumbs if one person checks the
work of another. They should not be included in the analysis of data.
3. Measurement error
• The measurement error is the deviation of a measurement from the true value.
• Measurement error consists of a random error and a systematic error.
• The best example of measurement error is: if electronic scales are loaded
with a 1 kg standard weight and the reading is 1002 grams, then
• the measurement error is (1002 grams - 1000 grams) = 2 grams.
• Measurement errors are classified into two types: systematic errors and
random errors.
Systematic errors:
• A constant uniform deviation of the operation of an instrument is known as a
systematic error.
• Systematic errors are mainly due to the shortcomings of the instrument & the
characteristics of the material used in the instrument, such as defective or
worn parts, ageing effects, environmental effects, etc.
• Types of systematic errors:
• There are four types of systematic errors:
i) Instrumental errors ii) Environmental errors
iii) Observational errors iv) Theoretical errors
Instrumental errors: Instrumental errors occur due to wrong
construction of the measuring instruments. These errors may occur
due to hysteresis or friction. These types of errors include loading
effect and misuse of the instruments.
• These errors can be mainly due to the following three reasons:
• a) Short comings of instruments:
• These are because of the mechanical structure of the
instruments.
• For example: friction in the bearings of various moving parts; irregular
spring tensions; reduction in tension due to improper handling; hysteresis;
gear backlash; stretching of the spring; variations in the air gap, etc.
• Ways to minimize this error: Selecting and planning the proper
procedure for the measurement instruments and recognizing the
effect of errors with the proper correction factors and calibrating.
b) Misuse of instruments:
• A good instrument, if used in an abnormal way, gives misleading results.
Poor initial adjustment, improper zero setting, using leads of high resistance
etc., are examples of misusing a good instrument.
• Such things do not cause permanent damage to the instruments but definitely
cause serious errors.
c) Loading effects
• Loading effects due to an improper way of using the instrument cause serious
errors.
• The best example of such a loading-effect error is connecting a well
calibrated voltmeter across two points of a high resistance circuit.
• The same voltmeter connected in a low resistance circuit gives an accurate
reading.
• Ways to minimize this error:
• The errors due to the loading effect can be avoided by using the instrument
intelligently and correctly.
Environmental errors:
• These errors are due to the conditions external to the measuring
instrument.
• The various factors resulting in these environmental errors are temperature
changes, pressure changes, thermal emf, ageing of equipment and frequency
sensitivity of the instrument.
• Ways to minimize this error:
• The various methods which can be used to reduce these errors are:
i) Using the proper correction factors and using the information
supplied by the manufacturer of the instrument
ii) Using the arrangement which will keep the surrounding
conditions Constant
iii) Reducing the effect of dust ,humidity on the components by
hermetically sealing the components in the instruments
iv) The effects of external fields can be minimized by using the
magnetic or electro static shields or screens
v) Using the equipment which is immune to such environmental
effects.
Observational errors:
• These are the errors introduced by the observer.
• There are many sources of observational errors, such as parallax error while
reading a meter, wrong scale selection, etc.
• Ways to minimize this error
• To eliminate such errors one should use the instruments with mirrors,
knife edged pointers, etc.,
• The systematic errors can be subdivided as static and dynamic errors.
• The static errors are caused by the limitations of the measuring device
while the dynamic errors are caused by the instrument not
responding fast enough to follow the changes in the variable to be
measured.
Theoretical errors
• Theoretical errors are caused by simplification of the model system.
• For example, if a theory assumes that the temperature of the surroundings
will not change the readings when it actually does, then this factor becomes a
source of error in measurement.

Random errors:
• Some errors still result, though the systematic and instrumental
errors are reduced or at least accounted for. The causes of such
errors are unknown and hence the errors are called random errors.
• Ways to minimize this error The only way to reduce these errors is by
increasing the number of observations and using the statistical
methods to obtain the best approximation of the reading.
Static Error
• Static error is defined as the difference between the best
measured value and the true value of the quantity. Then:
• Es = Am - At
• Where, Es = error, Am = measured value of quantity, and
At = true value of quantity.
• Es is also called the absolute static error of quantity A.
• The absolute value of error does not indicate precisely the
accuracy of measurement.
• For example, an error of 2 A is negligible when the current being-
measured is of the order of 1000 A while the same error highly
significant if the current under measurement is 10 A.
• Thus another term, relative static error, is introduced.
• The relative static error is the ratio of the absolute static error to the
true value of the quantity under measurement. Thus the relative static error
Er is given by:
• Er = Es / At = (Am - At) / At
• Percentage static error % Er = Er x 100
• Static Correction
• It is the difference between the true value and the measured value of the
quantity, or
• δC = At - Am

Pbm1: A meter reads 115.50 V and the true value of the voltage is
115.44 V. Determine the static error, and the static correction for
this instrument.
Solution: The error is: Es = Am - At = 115.50 - 115.44 = +0.06 V
Static correction δC = At - Am = -0.06 V.
Pbm2: A thermometer reads 71.50 C and the static correction given is
+0.50 C. Determine the true value of the temperature.
Solution: True value of the temperature,
At = Am + δC = 71.5+ 0.5 = 72.00 C.

Pbm3: A thermometer is calibrated for the range of 100 C to 150 C. The
accuracy is specified within ±0.25 percent. What is the maximum static error?
Solution: Span of thermometer = 150 - 100 = 50 C
• Maximum static error = ±0.25% of span = ±(0.25/100) × 50 = ±0.125 C
Pbm4: An analogue indicating instrument with a scale range of 0 - 2.50 V shows
a voltage of 1.46 V. The voltage has a true value of 1.50 V. What are the
values of absolute error and correction? Express the error as a fraction of
the true value and of the full scale deflection.
Solution: Absolute error = Am - At = 1.46 - 1.50 = -0.04 V
Absolute correction δC = At - Am = +0.04 V
Relative error = -0.04 / 1.50 = -0.0267 = -2.67% of the true value
Relative error expressed as a percentage of full scale deflection
= -0.04 / 2.50 = -1.6%
Pbm5: A pressure indicator showed a reading as
22 bar on a scale range of 0-25 bar. If the true
value was 21.4 bar, determine: i) Static error ii)
Static correction iii)Relative static error
Solution:
i) Static error = 22 - 21.4 = + 0.6 bar
ii) Static correction = - (+0.6) = - 0.6 bar
iii) Relative error = 0.6 / 21.4 = 0.028 or
2.8 %
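The definitions used in Pbm 1-5 can be collected into one sketch (the function names are mine, not from the text):

```python
def static_error(measured, true):
    """Es = Am - At."""
    return measured - true

def static_correction(measured, true):
    """dC = At - Am = -Es."""
    return true - measured

def relative_error(measured, true):
    """Er = Es / At."""
    return static_error(measured, true) / true

# Pbm 1: meter reads 115.50 V, true value 115.44 V
print(round(static_error(115.50, 115.44), 2))       # 0.06
print(round(static_correction(115.50, 115.44), 2))  # -0.06

# Pbm 5: gauge reads 22 bar, true value 21.4 bar
print(round(relative_error(22, 21.4), 3))           # 0.028
```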
Pbm6: A pressure gauge which has a linear calibration curve
has a radius of scale line as 120 mm and pressure of 0 to 50
Pascal is displayed over an arc of 300o. Determine the
sensitivity of the gauge as a ratio of scale length to pressure.
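Pbm 6 is left unsolved in the text; assuming "sensitivity" here means scale (arc) length divided by the pressure span, a sketch:

```python
import math

radius_mm = 120.0     # radius of the scale line
arc_degrees = 300.0   # pressure span displayed over this arc
span_pascal = 50.0    # 0 to 50 Pa

# Arc length = r * theta, with theta in radians
scale_length_mm = radius_mm * math.radians(arc_degrees)
gauge_sensitivity = scale_length_mm / span_pascal

print(round(scale_length_mm, 1))    # 628.3 mm
print(round(gauge_sensitivity, 2))  # 12.57 mm/Pa
```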
Statistical evaluation of measurement data
• Out of the various possible errors, the random errors cannot be
determined in the ordinary process of measurements.
• Such errors are treated mathematically
• The mathematical analysis of the various measurements is called
statistical analysis of the data’.
• For such statistical analysis, the same reading is taken a number of times,
generally using different observers, different instruments & different ways of
measurement.
• The statistical analysis helps to determine analytically the uncertainty of
the final test results.
Arithmetic mean & median:
• When a number of readings of the same measurement are taken, the most likely
value from the set of measured values is the arithmetic mean of the readings
taken.
Arithmetic mean (X̄):
• This mean is very close to the true value if the number of readings is very
large.
• The arithmetic mean value can be mathematically obtained as:
  X̄ = (x1 + x2 + ... + xn) / n = (Σ xi) / n
Deviation from the mean: the amount by which a given reading deviates from the
mean:
  d1 = x1 - X̄, d2 = x2 - X̄, ..., dn = xn - X̄
• A deviation can be a - or + value, but the algebraic sum of all deviations
is always zero.
• Average deviation: It is the sum of the absolute values (without sign) of
the deviations divided by the number of readings. This is also called mean
deviation:
  D = (|d1| + |d2| + ... + |dn|) / n = (Σ |di|) / n
• The deviation tells us about the departure of a given reading from the
arithmetic mean of the data set.
• Average deviation gives the precision of instruments; a low average
deviation shows that the instrument can be used for high-precision work.
Standard deviation: The SD of an infinite number of data points is the square
root of the sum of all the individual deviations squared, divided by the
number of readings:
  σ = sqrt((d1² + d2² + ... + dn²) / n) = sqrt((Σ di²) / n)
• It is also called the root mean square deviation.
• A reduction in this quantity improves the measurement.
• SD of a finite number of readings:
  s = sqrt((Σ di²) / (n - 1))
Variance or mean square deviation:
  V = σ²
• Standard deviation of the mean: σmean = σ / sqrt(n)
• Standard deviation of the standard deviation: σσ = σ / sqrt(2n)
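The statistics above, applied to the five readings from the earlier precision example, can be sketched as follows (a minimal illustration; the finite-sample n-1 form is used for the standard deviation):

```python
import math

def stats(readings):
    n = len(readings)
    mean = sum(readings) / n
    deviations = [x - mean for x in readings]            # algebraic sum ~ 0
    avg_dev = sum(abs(d) for d in deviations) / n        # average deviation
    variance = sum(d * d for d in deviations) / (n - 1)  # finite-sample form
    sd = math.sqrt(variance)                             # standard deviation
    sd_of_mean = sd / math.sqrt(n)                       # SD of the mean
    return mean, avg_dev, sd, sd_of_mean

mean, avg_dev, sd, sd_of_mean = stats([49, 51, 52, 50, 49])
print(round(mean, 1))     # 50.2
print(round(avg_dev, 2))  # 1.04
print(round(sd, 3))       # 1.304
```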
Direct and Indirect measurement methods
Broadly the measurement can be categorized in to two categories
1. Direct measurement 2. Indirect measurement

Direct measurement
• The quantity to be measured is determined directly.
• Example – Measure distance by scale
• With direct measurements, measuring instruments such as Vernier
calipers, micrometers, and coordinate measuring machines are used to
measure the dimensions of the target directly.
• These measurements are also known as absolute measurements.
• Measurements can be performed over a wide range specified by the
scale of the measuring instrument, but there is also the chance that the
measurement will be wrong due to erroneous readings of the scale.
Indirect measurement
• The quantity to be measured is not measured directly; instead, another
related parameter is measured and the desired value is inferred from it.
• Example – measuring distance by an optical method, where a
telescope is used to calculate the distance.
• With indirect measurements, the dimensions are measured using
measuring instruments such as dial gauges that look at the difference
between targets and reference devices such as gauge blocks and ring
gauges.
• These are also known as comparative measurements due to the fact
that a comparison is performed using an object with standard
dimensions.
• The more precisely the shape and dimensions of the reference
device are known, the easier the measurement becomes.
• However, this method also has the disadvantage of the measurement
range being limited.
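The comparative idea can be sketched as follows: a dial gauge is first zeroed on a gauge block of known height and then placed on the part, so the part's size is the reference dimension plus the indicated deviation. All values below are hypothetical:

```python
# Comparative (indirect) measurement sketch. The dial gauge is zeroed on a
# gauge block of known height; the part's size is then the reference
# dimension plus the dial-gauge deviation. Values are hypothetical.

REFERENCE_MM = 25.000  # height of the gauge block the dial gauge was zeroed on

def part_size(dial_deviation_mm):
    """Size of the target = reference dimension + measured deviation."""
    return REFERENCE_MM + dial_deviation_mm

print(part_size(+0.012))  # part is 25.012 mm
print(part_size(-0.005))  # part is 24.995 mm
```

This also makes the limitation visible: the usable range is only the gauge-block value plus or minus the dial gauge's small travel.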
• To measure the length of a bar, the unit of length is taken as
meter in SI unit. A human being can make direct length
comparisons with a preciseness of about 0.25 mm.
• The direct method for measurement of length can be utilized
with a good degree of accuracy but when it comes to
measurement of mass, the problem becomes much more
intricate.
• It is just not possible for human beings to distinguish
between wide margins of mass.
• The indirect method of measurement uses a transducing
element, which converts the quantity to be measured into an
analog signal.
• The analog signal is then processed by some intermediate
means and is fed to the end devices, which present the
results of the measurement.
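As an illustrative sketch (not from the source), consider a resistance temperature detector as the transducing element: the measured resistance is the analog signal, and the temperature is inferred by inverting the transducer relation. The linear Pt100 model and its constants are assumptions:

```python
# Indirect measurement sketch: infer temperature from the resistance of a
# platinum RTD using the linear approximation R = R0 * (1 + alpha * T).
# R0 and ALPHA are typical Pt100 values; treat them as assumptions here.

R0 = 100.0       # resistance in ohms at 0 °C (Pt100)
ALPHA = 0.00385  # temperature coefficient per °C (typical, assumed)

def temperature_from_resistance(r_measured):
    """Invert the transducer relation to recover the measurand."""
    return (r_measured / R0 - 1.0) / ALPHA

print(temperature_from_resistance(138.5))  # ≈ 100 °C
```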
Classification of Instruments
• An instrument is a device with which we can determine the magnitude or
value of the quantity to be measured. The measured quantity can be voltage,
current, power, energy, etc.
• Generally, instruments are classified into two categories.
1. Absolute instrument
• An absolute instrument determines the magnitude of the quantity to be
measured in terms of its own instrument constants and deflection.
• This type of instrument is rarely used, because each time the value of
the measured quantity varies, its magnitude has to be calculated
analytically, which is time consuming.
• These types of instruments are suitable for laboratory use.
• Example: tangent galvanometer.
2. Secondary instrument
• This instrument determines the value of the quantity
to be measured directly.
• Generally, these instruments are calibrated by comparison
with another standard secondary instrument.
• Examples of such instruments are the voltmeter,
ammeter and wattmeter.
• Practically, secondary instruments are the ones most
widely used for measurement.
2.1 Indicating instrument
• This instrument uses a dial and pointer to determine the
value of measuring quantity.
• The pointer indication gives the magnitude of measuring
quantity.
2.2 Recording instrument
• This type of instrument records the magnitude of the
quantity to be measured continuously over a specified
period of time.
2.3 Integrating instrument
• This type of instrument gives the total amount of the
quantity to be measured over a specified period of time.
2.4 Electromechanical indicating instrument
• For the satisfactory operation of an electromechanical
indicating instrument, three forces are necessary.
• They are
• (a) Deflecting force
• (b) Controlling force
• (c)Damping force
2.4.1 Deflecting force
• When there is no input signal to the instrument, the pointer will
be at its zero position.
• To deflect the pointer from its zero position, a force is necessary
which is known as deflecting force.
• A system which produces the deflecting force is known as a
deflecting system. Generally a deflecting system converts an
electrical signal to a mechanical force.
2.4.1.1 Magnitude effect
• When a current passes through a coil (Fig. 1.2), the coil
behaves like an imaginary bar magnet.
• When a soft-iron piece is brought near this coil it is magnetized.
Depending upon the current direction the poles are produced in
such a way that there will be a force of attraction between the
coil and the soft iron piece.
• This principle is used in moving iron attraction type instrument.
• If two soft-iron pieces are placed near a current-carrying coil,
there will be a force of repulsion between the two pieces.
• This principle is utilized in the moving iron repulsion type
instrument.
2.4.2 Force between a permanent magnet and a current-carrying
coil
When a current-carrying coil is placed in the magnetic field
produced by a permanent magnet, a force is produced between
them. This principle is utilized in the moving-coil type
instrument.
2.4.3 Force between two current-carrying coils
When two current-carrying coils are placed close to each other, there will
be a force of repulsion between them. If one coil is movable and the other
is fixed, the movable coil will move away from the fixed one. This
principle is utilized in the electrodynamometer type instrument.
2.5 Controlling force
• To make the measurement indicated by the pointer definite
(constant) a force is necessary which will be acting in the opposite
direction to the deflecting force.
This force is known as controlling force.
• A system which produces this force is known as a controlling
system.
• When the external signal to be measured by the instrument is
removed, the pointer should return to the zero position.
• This is possible due to the controlling force, and the pointer will
indicate a steady value when the deflecting torque is equal to the
controlling torque:

    Td = Tc
2.5.1 Spring control
• Two springs are attached at either end of the spindle (Fig. 1.5). The
spindle is placed in jeweled bearings, so that the frictional force
between the pivot and spindle will be minimum.
• The two springs are wound in opposite directions to compensate for
temperature error. The springs are made of phosphor bronze.
• When a current is supplied, the pointer deflects due to rotation of the
spindle.
• While the spindle rotates, the springs attached to the spindle oppose
the movement of the pointer.
• The controlling torque produced by the springs is directly proportional
to the pointer deflection θ:

    Tc ∝ θ

• The deflecting torque produced, Td, is proportional to the current I.
When Tc = Td, the pointer comes to a steady position.
• Therefore θ ∝ I, and the instrument has a uniform scale.
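The steady-state balance of deflecting and controlling torques can be sketched numerically; the torque constants below are hypothetical and chosen only to show the linear relationship:

```python
# Spring-control sketch: deflecting torque Td = KD * I, controlling torque
# Tc = KC * theta. The pointer settles where Tc = Td, so
# theta = (KD / KC) * I, i.e. a scale linear in current.
# KD and KC are hypothetical constants chosen for illustration.

KD = 2.0e-3   # deflecting-torque constant, N·m per ampere (assumed)
KC = 4.0e-5   # spring constant, N·m per radian (assumed)

def steady_deflection(current):
    """Pointer angle in radians once the controlling torque equals Td."""
    return (KD / KC) * current

# Doubling the current doubles the deflection (uniform scale):
print(steady_deflection(0.01), steady_deflection(0.02))
```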
Standards and Calibration
• Calibration is the process of making an adjustment or marking a scale so
that the readings of an instrument agree with the accepted & the certified
standard
• Calibration guarantees that the device or instrument is
operating with the required accuracy under the stipulated
environmental conditions.
• The calibration procedure involves the steps like visual inspection for
various defects, installation according to the specifications, zero
adjustment etc.
• The calibration is the procedure for determining the correct values of
measurand by comparison with standard ones.
• The standard of device with which comparison is made is called a
standard instrument.
• The instrument which is unknown & is to be calibrated is called test
instrument. Thus in calibration, test instrument is compared with standard
instrument.
• Types of calibration methodologies:
• There are two methodologies for obtaining the comparison between
test instrument & standard instrument. These methodologies are
i) Direct comparisons
ii) Indirect comparisons
Direct comparisons:
• In a direct comparison, a source or generator applies a known input to
the meter under test.
• The ratio of what the meter indicates to the known generator value
gives the meter's error.
• In such case the meter is the test instrument while the generator is the
standard instrument.
• The deviation of meter from the standard value is compared with the
allowable performance limit.
• With the help of direct comparison a generator or source also can be
calibrated.
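The direct-comparison check can be sketched as follows: a known input is applied, the deviation of the test meter's reading is expressed as a percent error, and that error is compared against an allowable performance limit. All numbers are hypothetical:

```python
# Direct-comparison calibration sketch: a source applies a known input to the
# meter under test; the deviation of the meter's reading from the known value
# is compared with an allowable performance limit. Values are hypothetical.

ALLOWED_LIMIT_PCT = 1.0  # allowable error in percent (assumed specification)

def percent_error(reading, true_value):
    """Error of the test meter relative to the standard source, in percent."""
    return (reading - true_value) / true_value * 100.0

reading, applied = 10.12, 10.00   # meter shows 10.12 V for a 10.00 V input
err = percent_error(reading, applied)
within_limit = abs(err) <= ALLOWED_LIMIT_PCT
print(err, within_limit)          # about 1.2 % error, outside the 1 % limit
```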
Indirect comparisons:
• In the indirect comparison, the test instrument is compared
with a standard instrument of the same type,
• i.e., if the test instrument is a meter, the standard instrument
is also a meter; if the test instrument is a generator, the
standard instrument is also a generator, and so on.
• If the test instrument is a meter, then the same input is
applied to the test meter as well as the standard meter.
• In the case of generator calibration, the outputs of the test
generator as well as the standard are set to the same nominal
levels.
• Then a transfer meter is used, which measures the outputs of
both the standard and the test generator.
Standard
• All instruments are calibrated at the time of manufacture
against measurement standards.
• A standard of measurement is a physical representation of a unit of
measurement.
• A standard means known accurate measure of physical quantity.
• The different types of standards of measurement are classified as
i) International standards
ii) Primary standards
iii) Secondary standards
iv) Working standards
• To improve the accuracy of absolute measurements, the
international units were replaced by the absolute units in 1948.
Absolute units are more accurate than the international units.
International standards
• International standards are defined by international agreement.
These standards are maintained at the International Bureau of
Weights and Measures and are periodically evaluated and checked
by absolute measurements in terms of the fundamental units of
physics.
• These international standards are not available to ordinary users
for calibration purposes.
Primary standards
• These are highly accurate absolute standards, which can be used as
ultimate reference standards. These primary standards are maintained
at national standard laboratories in different countries.
• These standards representing fundamental units as well as some
electrical and mechanical derived units are calibrated independently by
absolute measurements at each of the national laboratories.
• These are not available for use, outside the national laboratories. The
main function of the primary standards is the calibration and
verification of secondary standards.
Secondary standards
• As mentioned above, the primary standards are not available for
use outside the national laboratories.
• The various industries need some reference standards. So, to
protect highly accurate primary standards the secondary
standards are maintained, which are designed and constructed
from the absolute standards.
• These are used by the measurement and calibration laboratories
in industries and are maintained by the particular industry to
which they belong. Each industry has its own standards.
Working standards
• These are the basic tools of a measurement laboratory and are
used to check and calibrate the instruments used in the
laboratory for accuracy and performance.