This document provides an overview of course material on mechanical measurement for B.Tech IV-semester mechanical engineering at Gandhi Engineering College. It discusses the basic elements of a measuring system (standard, work piece, instrument, person and environment), describes ten common methods of measurement (direct, indirect, absolute, comparative, transposition, contactless and others), distinguishes precision from accuracy, and covers the sources of error in measurement and how errors can be minimized.


COURSE MATERIAL
B.TECH IV- Semester
MECHANICAL ENGINEERING

BY

PALLAVI CHAUDHURY

ASSISTANT PROFESSOR

MECHANICAL
MEASUREMENT

GANDHI ENGINEERING COLLEGE, BBSR


DEPARTMENT OF MECHANICAL ENGINEERING

Measuring System Elements


A measuring system is made of five basic elements. These are:
1. Standard
2. Work piece
3. Instrument
4. Person
5. Environment.
The most basic element of measurement is a standard, without which no measurement is possible.
Once the standard is chosen, a measuring instrument incorporating this standard should be obtained.
This instrument is then used to measure the job parameters in terms of the units of the standard contained
in it. The measurement should be performed under a standard environment. Lastly, there must be
some person, or some mechanism if the measurement is automatic, to carry out the measurement.

Methods of Measurement

These are the methods of comparison used in the measurement process. In precision measurement, various
methods of measurement are adopted depending upon the accuracy required and the amount of permissible
error.
The methods of measurement can be classified as:
1. Direct method 6. Coincidence method
2. Indirect method 7. Deflection method
3. Absolute/Fundamental method 8. Complementary method
4. Comparative method 9. Contact method
5. Transposition method 10. Contactless method etc.
Direct method of measurement.

This is a simple method of measurement in which the value of the quantity to be measured is obtained
directly, without any calculation; for example, measurement using scales, vernier calipers,
micrometers, bevel protractors etc. This method is the most widely used in production. It is not very
accurate, because it depends on the limitations of human judgment.
Indirect method of measurement.
In the indirect method, the value of the quantity to be measured is obtained by measuring other quantities
that are functionally related to the required value, e.g., angle measurement by sine bar, or measurement
of screw pitch diameter by the three-wire method.
Absolute or Fundamental method.
It is based on the measurement of the base quantities used to define the quantity. For example,
measuring a quantity directly in accordance with the definition of that quantity, or measuring a quantity
indirectly by direct measurement of the quantities linked with the definition of the quantity to be
measured.
Comparative method.
In this method the value of the quantity to be measured is compared with known value of the same
quantity or other quantity practically related to it. So, in this method only the deviations from a master
gauge are determined, e.g., dial indicators, or other comparators.
Transposition method.
It is a method of measurement by direct comparison in which the value of the quantity measured is first
balanced by an initial known value A of the same quantity; the quantity measured is then put in place of
this known value and balanced again by another known value B. If the position of the element indicating
equilibrium is the same in both cases, the value of the quantity to be measured is √(A·B). For example,
determination of a mass by means of a balance and known weights, using the Gauss double-weighing method.
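The Gauss double-weighing result can be sketched as below. The function name and the numbers are illustrative assumptions, not from the text; the point is that the geometric mean √(A·B) cancels any inequality in the balance arms.

```python
import math

# Transposition (Gauss double-weighing) sketch.
# The unknown quantity first balances a known value A; after the swap it
# balances a known value B. The unknown is then the geometric mean.
def transposition_value(known_a, known_b):
    return math.sqrt(known_a * known_b)

# Example: slightly unequal balance arms make the same mass balance
# 100.2 g on one side and 99.8 g on the other.
m = transposition_value(100.2, 99.8)
print(m)  # very close to the true 100 g
```

With equal arms A = B and the formula reduces to the ordinary single weighing.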
Coincidence method.
It is a differential method of measurement, in which a very small difference between the value of
the quantity to be measured and the reference is determined by observing the coincidence
of certain lines or signals. For example, measurement by vernier caliper or micrometer.
Deflection method.
In this method the value of the quantity to be measured is directly indicated by a deflection of a
pointer on a calibrated scale.
Complementary method.
In this method the value of the quantity to be measured is combined with a known value of the
same quantity. The combination is adjusted so that the sum of the two values is equal to a
predetermined comparison value. For example, determination of the volume of a solid by liquid
displacement.
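The complementary method amounts to one subtraction, sketched below with invented numbers (the function name and values are illustrative, not from the text):

```python
# Complementary-method sketch: the unknown is combined with a known part
# until the total reaches a predetermined comparison value, so the unknown
# equals the comparison value minus the known part.
def complementary_value(comparison_total, known_part):
    return comparison_total - known_part

# A solid dropped into 350 ml of liquid raises the level to the 500 ml mark:
v_solid = complementary_value(500.0, 350.0)
print(v_solid)  # 150.0 ml
```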
Method of measurement by substitution.
It is a method of direct comparison in which the value of a quantity to be measured is replaced by a
known value of the same quantity, so selected that the effects produced in the indicating device by
these two values are the same.
Method of null measurement.
It is a method of differential measurement. In this method the difference between the value of the
quantity to be measured and the known value of the same quantity with which it is compared is
brought to zero.
Contact method.
In this method the sensor or measuring tip of the instrument actually touches the surface to be
measured, e.g., measurements by micrometer, vernier caliper, dial indicators etc. In such cases an
arrangement for constant contact pressure should be provided to prevent errors due to excessive contact
pressure.

Contactless method.
In the contactless method of measurement there is no direct contact with the surface to be measured,
e.g., measurement by optical instruments such as the toolmaker's microscope, projection comparator etc.

1 Precision and Accuracy


Precision
 The terms precision and accuracy are used in connection with the performance of an
instrument. Precision is the repeatability of the measuring process.

 It refers to a group of measurements of the same characteristic taken under identical
conditions, and indicates the extent to which identically performed measurements agree with
each other. If the instrument is not precise, it will give different (widely varying) results for
the same dimension when it is measured again and again. The set of observations will scatter
about the mean. The scatter of these measurements is designated σ, the standard deviation,
which is used as an index of precision: the less the scattering, the more precise the instrument.
Thus, the lower the value of σ, the more precise the instrument.
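The use of σ as an index of precision can be sketched as below; the readings are invented for illustration, not taken from the text.

```python
import statistics

# The sample standard deviation of repeated readings serves as an
# index of precision: smaller sigma means a more precise instrument.
readings_a = [25.01, 25.02, 25.01, 25.00, 25.02]  # tightly clustered
readings_b = [25.10, 24.85, 25.20, 24.90, 25.05]  # widely scattered

sigma_a = statistics.stdev(readings_a)
sigma_b = statistics.stdev(readings_b)

print(sigma_a < sigma_b)  # True: instrument A is the more precise one
```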

Accuracy
 Accuracy is the degree to which the measured value of the quality characteristic agrees with the
true value. The difference between the true value and the measured value is known as error of
measurement.

 It is practically difficult to measure the true value exactly; therefore, a set of observations
is made and its mean value is taken as the true value of the quantity measured.

Distinction between Precision and Accuracy


 Accuracy is very often confused with precision, though the two are quite different. The distinction
between precision and accuracy will become clear from the following example. Several
measurements are made on a component by three different types of instruments (A, B and C
respectively) and the results are plotted.

 In any set of measurements, the individual measurements are scattered about the mean, and
the precision signifies how well the various measurements performed by same instrument on
the same quality characteristic agree with each other.

Figure 1.5 Precision And Accuracy

 The difference between the mean of a set of readings on the same quality characteristic and the
true value is called the error. The smaller the error, the more accurate the instrument. Figure 1.5 shows
that instrument A is precise, since the results of a number of measurements are close to the
average value; however, there is a large difference (error) between the true value and the
average value, hence it is not accurate.

 The readings taken by instrument B are scattered widely about the average value, hence it
is not precise; but it is accurate, as there is only a small difference between the average value and
the true value. Fig. 1.5 (c) shows that instrument C is both accurate and precise.
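This distinction can be sketched numerically by classifying a set of readings by its scatter (precision) and by the offset of its mean from the true value (accuracy). All readings, the true value and both thresholds below are invented for illustration, in the spirit of instruments A, B and C of Figure 1.5:

```python
import statistics

TRUE_VALUE = 50.00  # assumed true value of the dimension, mm

def describe(readings, sigma_limit=0.05, bias_limit=0.05):
    sigma = statistics.stdev(readings)                   # scatter -> precision
    bias = abs(statistics.mean(readings) - TRUE_VALUE)   # mean error -> accuracy
    return (sigma <= sigma_limit, bias <= bias_limit)    # (precise?, accurate?)

a = [50.31, 50.30, 50.32, 50.29, 50.31]  # clustered but offset
b = [50.15, 49.85, 50.10, 49.95, 49.96]  # scattered about the true value
c = [50.01, 50.00, 50.02, 49.99, 50.00]  # clustered on the true value

print(describe(a))  # (True, False): precise, not accurate
print(describe(b))  # (False, True): accurate, not precise
print(describe(c))  # (True, True): precise and accurate
```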

2 Errors in Measurement
 It is never possible to measure the true value of a dimension; there is always some error. The
error in measurement is the difference between the measured value and the true value of the
measured dimension.

 Error in measurement = Measured value − True value. The error in measurement may be
expressed or evaluated either as an absolute error or as a relative error.

Absolute Error
 True absolute error. It is the algebraic difference between the result of measurement and the
conventional true value of the quantity measured.

 Apparent absolute error. If a series of measurements is made, the algebraic difference
between one of the results of measurement and the arithmetic mean is known as the apparent
absolute error.

Relative Error
 It is the quotient of the absolute error and the value of comparison used for calculation of that
absolute error. This value of comparison may be the true value, the conventional true value or
the arithmetic mean of a series of measurements.
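The three error definitions above can be sketched directly (all numbers below are illustrative assumptions, not from the text):

```python
import statistics

def absolute_error(measured, true_value):
    # True absolute error: algebraic difference from the conventional true value.
    return measured - true_value

def relative_error(measured, true_value):
    # Quotient of the absolute error and the value of comparison (here, true value).
    return absolute_error(measured, true_value) / true_value

measured = 25.07   # mm, a single observation
true_val = 25.00   # mm, conventional true value
print(absolute_error(measured, true_val))   # about 0.07 mm
print(relative_error(measured, true_val))   # about 0.0028

# Apparent absolute error: difference of one result from the arithmetic mean.
series = [25.05, 25.07, 25.06, 25.08, 25.04]
apparent = measured - statistics.mean(series)
print(apparent)   # about 0.01 mm
```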

The accuracy of measurement, and hence the error, depends upon many factors, such as:
- Calibration standard
- Work piece
- Instrument
- Person
- Environment etc. as already described.
No matter how modern the measuring instrument, how skillful the operator, or how accurate the
measurement process, there will always be some error. It is therefore attempted to minimize the
error. To minimize it, usually a number of observations are made and their average is taken as
the value of that measurement.
 If these observations are made under identical conditions, i.e., the same observer, same instrument
and similar working conditions except for time, then it is called a 'Single Sample Test'.

 If, however, repeated measurements of a given property are made using alternative test conditions,
such as a different observer and/or a different instrument, the procedure is called a
'Multi-Sample Test'. The multi-sample test avoids many controllable errors, e.g., personal error, instrument
zero error etc. The multi-sample test is costlier than the single sample test and hence the latter is in wide
use.

 In practice, a good number of observations are made under the single sample test and statistical
techniques are applied to get results that approximate those obtainable from a
multi-sample test.

Types of Error
During measurement, several types of error may arise. These are:
1. Static errors, which include:
- Reading errors
- Characteristic errors
- Environmental errors.
2. Instrument loading errors.
3. Dynamic errors.
Static errors
These errors result from the physical nature of the various components of the measuring system. There
are three basic sources of static errors. The static error divided by the measurement range (the difference
between the upper and lower limits of measurement) gives the measurement precision.
Reading errors
Reading errors apply exclusively to the read-out device; they have no direct relationship
with other types of errors within the measuring system.
Reading errors include parallax error and interpolation error.
Attempts have been made to reduce or eliminate reading errors by relatively simple techniques. For
example, the use of a mirror behind the read-out pointer or indicator virtually eliminates
parallax error.
Interpolation error.
It is the reading error resulting from the inexact evaluation of the position of the index with respect to
the two adjacent graduation marks between which the index is located. How accurately a scale can be
read depends upon the thickness of the graduation marks, the spacing of the scale divisions and the
thickness of the pointer used to give the reading. Interpolation error can be tackled by
using a magnifier over the scale in the vicinity of the pointer, or by using a digital read-out system.

Characteristic Errors
It is defined as the deviation of the output of the measuring system from the theoretical predicted
performance or from nominal performance specifications.
Linearity errors, repeatability, hysteresis and resolution errors are part of characteristic errors if the
theoretical output is a straight line. Calibration error is also included in characteristic error.
Loading Errors
Loading errors result from the change in the measurand itself when it is being measured (i.e., after the
measuring system or instrument is connected for measurement). Instrument loading error is the
difference between the value of the measurand before and after the measuring system is
connected or contacted for measurement. For example, soft or delicate components are deformed
during measurement by the contact pressure of the instrument, causing a loading error.
The effect of instrument loading errors is unavoidable. Therefore, the measuring system or instrument should
be selected so that its sensing element minimizes the instrument loading error in the particular
measurement involved.
Environmental Errors
These errors result from the effect of the surroundings, such as temperature, pressure, humidity etc., on the
measuring system.
External influences like magnetic or electric fields, nuclear radiation, vibrations or shocks etc. also lead to
environmental errors.
The environmental errors of each component of the measuring system make a separate contribution to the
static error. They can be reduced by controlling the atmosphere according to the specific requirements.
Dynamic Errors
Dynamic error is the error caused by time variations in the measurand. It results from the inability of
the system to respond faithfully to a time-varying measurement, and is caused by inertia, damping,
friction or other physical constraints in the sensing, readout or display system.
For statistical study and the study of the accumulation of errors, these errors can be broadly classified into
two categories:
1. Systematic or controllable errors, and
2. Random errors.

Systematic Errors
Systematic errors are regularly repetitive in nature; they are of constant and similar form. They
result from improper conditions or procedures that are consistent in action. Of the systematic
errors, only the personal error varies from individual to individual, depending on the personality of the
observer. The other systematic errors can be controlled in magnitude as well as in sense, and if properly
analyzed they can be determined and reduced. Hence these are also called controllable errors.
Systematic errors include:
1. Calibration Errors. These are caused by variation of the calibrated scale from its
nominal value. The actual length of standards such as slip gauges and engraved scales will vary
from the nominal value by a small amount, causing an error of constant magnitude in
measurement. Sometimes instrument inertia and hysteresis effects do not allow the
instrument to transmit the measurement accurately. A drop in voltage along the wires of an
electric meter may introduce an error (called signal transmission error) in measurement.

2. Ambient or Atmospheric Conditions (Environmental Errors). Variation of the atmospheric
conditions (i.e., temperature, pressure and moisture content) at the place of measurement from
the internationally agreed standard values (20 °C temperature and 760 mm Hg pressure) can
give rise to error in the measured size of the component. Instruments are calibrated at these
standard conditions; therefore, error may creep into the result if the atmospheric conditions
at the place of measurement are different. Of these, temperature is the most significant
factor causing error in measurement, due to expansion or contraction of the component being
measured or of the instrument used for measurement.

3. Stylus Pressure. Another common source of error is the pressure with which the work piece is
pressed while measuring. Though the pressure involved is generally small, it is sufficient
to cause appreciable deformation of both the stylus and the work piece.

In the ideal case, the stylus should simply touch the work piece. Besides the deformation effect,
the stylus pressure can also deflect the work piece.
Variations in the force applied by the anvils of a micrometer to the work being measured result in
differences in its readings; in this case the error is caused by distortion of both the micrometer frame
and the work piece.
4. Avoidable Errors. These errors may occur due to parallax, non-alignment of work piece
centres, or improper location of measuring instruments, such as placing a thermometer in
sunlight while measuring temperature. The error due to misalignment is caused when the centre line of
the work piece is not normal to the centre line of the measuring instrument.
Random Errors. Random errors are non-consistent; they occur randomly and are accidental
in nature. Such errors are inherent in the measuring system, and it is difficult to eliminate them.
Their specific causes, magnitudes and sources cannot be determined from knowledge of the
measuring system or the conditions of measurement.

The possible sources of such errors are:


1. Small variations in the position of setting standard and work piece.
2. Slight displacement of lever joints of measuring instruments.
3. Operator error in scale reading.
4. Fluctuations in the friction of measuring instrument etc.
Comparison between Systematic Errors and Random Errors

1. Systematic errors are repetitive in nature and are of constant and similar form. Random errors are
non-consistent; the sources giving rise to them are random.
2. Systematic errors result from improper conditions or procedures that are consistent in action.
Random errors are inherent in the measuring system or measuring instruments.
3. Except for personal errors, all systematic errors can be controlled in magnitude and sense. The
specific causes, magnitudes and sense of random errors cannot be determined from knowledge of the
measuring system or conditions.
4. If properly analyzed, systematic errors can be determined and reduced or eliminated. Random errors
cannot be eliminated, but the results obtained can be corrected.
5. Systematic errors include calibration errors, variation in contact pressure, variation in atmospheric
conditions, parallax errors, misalignment errors etc. Random errors include errors caused by variation
in the position of the setting standard and work piece, displacement of lever joints of instruments,
backlash, friction etc.

Errors Likely to Creep into Precision Measurements



The standard temperature for measurement is 20 °C, and all instruments are calibrated at this
temperature. If measurements are carried out at a temperature other than the standard
temperature, an error will be introduced due to expansion or contraction of the instrument or the part
being measured. However, if the instrument and the work piece are of the same material, the accuracy
of measurement will not be affected even if the standard temperature is not maintained, since both
will expand and contract by the same amount.
A difference between the temperature of the instrument and that of the work piece will also introduce an
error in the measurement, especially when the material of the work piece or instrument has a higher
coefficient of expansion. To avoid such errors, the instrument and the work piece to be measured
should be allowed to attain the same temperature before use and should be handled as little as
possible. For example, after wringing together several slip gauges to form a stack for checking a
gauge, they should be left with the gauge for an hour if possible, preferably on the table of the
comparator which is to be used for the comparison.
To attain accurate results, high grade reference gauges should be used only in rooms where the
temperature is maintained very close to the standard temperature.
Handling of gauges changes their temperature, so they should be allowed to stabilize before use.
There are two situations to be considered in connection with the effect of temperature; these are:
(a) Direct measurement. Consider a gauge block being measured directly by
interferometry. Here, the effect of using a non-standard temperature produces a proportional
error, E = l·α·(t − ts), where

- l = nominal length
- α = coefficient of expansion
- (t − ts) = deviation from standard temperature
- t = temperature during measurement
- ts = standard temperature
(b) Comparative measurement. If we consider two gauges whose expansion coefficients are
respectively α1 and α2, then the error due to non-standard temperature will be
E = l·(α1 − α2)·(t − ts).

As the expansion coefficients are small numbers, the error will be very small as long as both parts are at
the same temperature. Thus, in comparative measurement it is important that all components in the
measuring system are at the same temperature rather than necessarily at the standard temperature.
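The two formulas above can be sketched as below. The expansion coefficient and lengths are illustrative assumptions (11.5 × 10⁻⁶ per °C is a typical textbook value for steel), not values from the text.

```python
ALPHA_STEEL = 11.5e-6   # per deg C, assumed expansion coefficient for steel

def direct_error(l, alpha, t, ts=20.0):
    """Direct measurement: E = l * alpha * (t - ts)."""
    return l * alpha * (t - ts)

def comparative_error(l, alpha1, alpha2, t, ts=20.0):
    """Comparative measurement: E = l * (alpha1 - alpha2) * (t - ts)."""
    return l * (alpha1 - alpha2) * (t - ts)

# A 100 mm steel gauge measured at 25 C instead of the standard 20 C:
print(direct_error(100.0, ALPHA_STEEL, 25.0))  # about 0.00575 mm

# Comparing it against a part with the same expansion coefficient at 25 C:
print(comparative_error(100.0, ALPHA_STEEL, ALPHA_STEEL, 25.0))  # 0.0
```

This makes the closing point concrete: with matched materials at a common temperature, the comparative error vanishes even away from 20 °C, while the direct error does not.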
Other ambient conditions may affect the result of a measurement. For example, if a gauge block is being
measured by interferometry, then the relative humidity, atmospheric pressure and CO2 content of the air
affect the refractive index of the atmosphere. These conditions should all be recorded during the
measurement and the necessary corrections made.
The internationally accepted temperature for measurement is 20 °C, and all instruments are calibrated at
this temperature. To maintain such a controlled temperature, the laboratory should be air-conditioned.