Measurement and Metrology (Sem 4)
1 Introduction to Measurements
Measurement
Definition: According to the International Vocabulary of Basic and General Terms in Metrology
(ISO 1993), "Measurement is the set of operations having the object of determining a value of a
quantity." The act of measurement, commonly referred to as "to measure," involves experimentally
comparing the unknown value of a quantity with a suitable standard unit, established by convention.
Measurement encompasses the process of quantifying and assigning numerical values to observable
phenomena, properties, or attributes. It involves the comparison of an unknown quantity to a
standard unit of measurement, thereby expressing its magnitude in a consistent and reproducible
manner. Whether it be the length of a pencil, the temperature of a room, or the velocity of a moving
object, measurement provides the means to describe and characterize the physical world with
precision and accuracy.
Significance of Measurement
Measurement is more than just a tool for quantification; it is the basis of everyday
decision-making, scientific research, and technological advancement.
Measurements are fundamental to many disciplines, including physics, chemistry, biology,
engineering, and economics. The significance of measurement in various domains is outlined below.
Scientific Inquiry:
Measurements form the bedrock of scientific inquiry, enabling observation, experimentation, and
analysis across disciplines like physics, chemistry, biology, and astronomy. Accurate measurements
are essential for formulating hypotheses, testing theories, and validating empirical observations,
driving progress in understanding the universe's fundamental laws and principles.
Technological Innovation:
Measurement underpins technological innovation by supplying critical data for designing,
developing, and optimizing new technologies and products. Across sectors such as microelectronics,
telecommunications, aerospace, and healthcare, precise measurements ensure reliability,
performance, and safety. Without accurate measurements, technological advancement would
stagnate, hindering our ability to tackle complex challenges and enhance quality of life.
Global Trade and Commerce: Measurements play a vital role in global trade by providing a
common language for quantifying goods and services. Metrology establishes international standards,
ensuring consistency and fairness in trade transactions. Accurate measurements promote market
efficiency, transparency, and consumer protection against fraud.
Healthcare and Safety: In healthcare and safety-critical industries, measurements are crucial for
monitoring and maintaining human health, safety, and well-being. Metrology supports accurate
medical diagnostics, pharmaceutical formulations, and environmental monitoring. Precise
measurements enable effective diagnosis, treatment, and safety in medical practices and procedures.
Methods of Measurement
Different methods are used when precise measurements are required to identify physical variables.
These methods define the unit and magnitude of the quantity under examination.
The choice of method depends on acceptable error margins and desired accuracy levels, all aimed at
minimizing measurement uncertainty. Here are the conventional methods:
1. Direct Method: Compares either primary or secondary standards directly with the quantity
being measured, using tools like bevel protractors, micrometers, and vernier calipers.
2. Indirect Method: Measures related quantities to calculate the desired value using mathematical
equations. Examples include using a sine bar to determine angles or evaluating strain induced by
force in a bar.
3. Fundamental or Absolute Method: Measures fundamental quantities defining a specific
quantity, either directly or indirectly.
4. Comparative Method: Compares the quantity with known values, noting deviations from a
master gauge. Examples include dial indicators and comparators.
5. Transposition Method: Balances the measured quantity with known values to determine its
value, often used in determining mass with a balance and known weights.
6. Null Measurement Method: Minimizes the gap between the measured quantity and the
specified value until it reaches zero.
7. Coincidence Method: Detects minute variations between the evaluated quantity and a reference
using differential measurements.
8. Deflection Method: Directly displays the quantity by moving a pointer along a scale that has
been calibrated.
9. Substitution Method: Substitutes the quantity under measurement with a known value, ensuring
identical effects on the indicating device.
10. Complementary Method: Combines a known value with the quantity to be measured to meet a
predetermined comparison value.
11. Contact Method: Involves making contact with the surface being measured and the sensor of
the instrument, keeping the contact pressure constant. Dial indicators and micrometers are two
examples.
12. Contactless Method: Measures the surface without direct contact using tools like profile
projectors and optical equipment.
13. Composite Method: Compares the real shape of a component against its tolerance thresholds,
especially useful for interconnected components with combined tolerances. Implemented using
composite GO gauges, ensuring interchangeability.
Measurement involves three essential elements, as shown in Figure 1.1: the object or phenomenon
being measured (the measurand), the standard against which it is compared (the reference), and the
instrument or method used to make the comparison (the comparator).
Taken together, these elements form the basis for quantifying and characterizing a broad variety of
physical quantities, such as length, mass, time, temperature, and pressure.
Standards
Definition: A standard is defined as a benchmark or guideline established by an authority to
determine the measure of quantity, weight, extent, value, or quality. For example, the meter serves
as a standard unit of length measurement, established by an international governing body. The
existence of robust standards is indispensable for the functioning of modern civilization, particularly
in industries, commerce, and international trade.
Standards play an essential role in assuring the consistency, uniformity, and reproducibility of
measurements on a global scale. They enable the interchangeability of parts and manufacturing
processes, which underpin the entire industrial economy.
Material Standards
Linear measurements of the material commonly rely on two standards:
British/English system, represented by the yard, and Metric system, utilizing meters. The Metric
system is widely used by most nations due to its convenience and practicality.
The official definition of a yard or meter is the distance between two designated lines on a metal bar
that is meticulously maintained under particular support and temperature parameters. Legislation
passed by the Parliament governs the official usage of these lines, which serve as legally recognized
norms.
Yard
The imperial standard yard (Figure 1.2) comprises a 38-inch bronze bar with a 1-inch square cross-
section. It contains two 1/2-inch diameter, 1/2-inch-deep holes, each fitted with a 1/10-inch
diameter gold plug. To avoid bending and unintentional damage, these plugs are placed on the bar's
neutral axis. The gold plugs have a polished top surface that is engraved with longitudinal and
transverse lines. The yard is the distance between the central transverse lines on the plugs at a
specified temperature and support condition.
Meter
The International Bureau of Weights and Measures created the standard in 1875. The prototype meter is
made of a platinum-iridium alloy bar with a specific cross-section, as seen in Figure 1.3.
For accuracy, the meter's upper surface is polished and has two engraved lines. To reduce deflection, the
bar is maintained at 0°C and supported by two rollers that are 58.9 centimeters apart. The meter is
defined as the distance between the central portions of the two engraved lines on this 102-centimeter-long
platinum-iridium bar of web cross-section.
Wavelength Standard
To address the limitations of metallic standards like the meter and yard, there arose a need for a more
precise and consistent standard of length. Jacques Babinet, a French physicist, proposed utilizing the
wavelength of monochromatic light as a natural and unchanging unit of measurement. In 1907, the
International Angstrom (Å) unit was established, defined in terms of the red cadmium line in dry air at
15°C (one wavelength of red cadmium light equaling 6438.4696 Å). The Seventh General Conference
on Weights and Measures, held in 1927, approved a new definition of the standard unit of length, the
meter, based on the wavelength of red cadmium as an alternative to the International
Prototype Meter.
According to the new standard, a meter was defined as equivalent to 1650763.73 wavelengths of the
red-orange radiation of krypton-86 gas, with an accuracy of a few parts in 10⁹. This refinement allowed
for the meter and yard to be precisely defined in terms of the wavelength of krypton-86 radiation:
1 meter = 1650763.73 wavelengths
1 yard = 0.9144 meters = 0.9144 × 1650763.73 wavelengths = 1509458.3 wavelengths
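The conversion quoted above is simple arithmetic and can be checked directly; a minimal sketch in Python using the 1960 krypton-86 wavelength count:

```python
# Number of krypton-86 wavelengths in one metre (1960 definition, quoted above)
WAVELENGTHS_PER_METRE = 1_650_763.73

yard_in_metres = 0.9144                                   # exact, by definition
yard_in_wavelengths = yard_in_metres * WAVELENGTHS_PER_METRE

print(f"1 yard = {yard_in_wavelengths:.2f} wavelengths")  # approx. 1509458.35 wavelengths
```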
While the Krypton-86 standard effectively met the growing technological need for precise standards,
there was a belief that a definition rooted in the speed of light would offer both technical feasibility and
practical benefits. This perspective led to a significant shift in the definition of the meter, which was
agreed upon during the 17th General Conference on Weights and Measures on October 20, 1983. As per
this new definition, a meter is now defined as the distance travelled by light in a vacuum within
1/299792458 of a second. This definition can be practically realized using an
iodine-stabilized helium-neon laser.
Sub-Standards
The international standard yard and prototype meter are not suitable for general use.
Instead, there is a hierarchy of working standards for practical measurements.
These standards are categorized into four grades based on the required level of accuracy:
1. Primary Standards:
These standards offer precise definitions of units and are maintained under highly controlled
conditions. Examples include the international yard and meter. Primary standards are rarely used,
typically every 10 to 20 years, and only for comparison with secondary standards. They have no
direct application in engineering.
2. Secondary Standards:
These are designed to closely replicate primary standards in terms of design, material, and length.
Secondary standards are periodically compared with primary standards to record any deviations.
They are stored in various locations for safekeeping and occasionally compared with tertiary
standards as needed.
3. Tertiary Standards:
Tertiary standards are the primary reference points in laboratories and workshops. They are exact
replicas of secondary standards and are used for regular comparisons with working standards.
4. Working Standards: These standards are used more frequently in laboratories and workshops and
are typically made from lower-grade materials than primary, secondary, and tertiary standards to
reduce costs. Working standards are derived from fundamental standards and can be either line or
end standards, with line standards often being manufactured in an H-cross-sectional form.
Factors such as source-induced thermal expansion and thermal expansion brought on by manual
handling are also important. To achieve higher accuracy, it is necessary to analyze and address
sources of error within each of these elements.
Precision:
Precision measures consistency in measurements, reflecting agreement among multiple
measurements made under similar conditions. Repeatability, the ability of a measuring device to
reproduce consistent results, is central to precision. A lack of precision yields varying results for
repeated measurements of the same quantity, which is why internationally recognized standards and
procedures are important.
Accuracy:
Accuracy refers to the agreement between a measured dimension and its true magnitude. It
represents how closely the measured value aligns with the true value. Achieving the exact true value
is practically unattainable due to inherent uncertainties in the measuring process.
Deviations from the true value leave uncertainty about whether the measured quantity truly
represents its intended value. Figure 1.4 illustrates the relationship between accuracy and precision.
a. Precise but not accurate b. Accurate but not precise c. Accurate and precise
Figure 1.4 Accuracy & Precision
Readability:
Readability pertains to the ease with which readings from a measuring instrument can be interpreted.
It refers to the instrument's ability to present its indications clearly and understandably. Instruments
with finely spaced graduation lines generally enhance readability, although excessively fine lines
may hinder readability without the aid of magnification. Micrometers, for instance, may incorporate
a vernier scale to improve readability, and additional magnification devices can enhance it further.
Repeatability:
Repeatability signifies the ability of a measuring instrument to produce consistent results when
measurements are repeated under the same conditions. This includes consistency in measurements
carried out by the same observer, with the same instrument, and without changes in location or
measurement method. Repeatability is often quantified in terms of the dispersion of measurement
results.
Reproducibility:
Reproducibility refers to the consistency of variation patterns in measurements when individual
measurements of the same quantity are conducted by different observers, methods, and instruments,
or under different conditions, locations, and times. Similar to repeatability, reproducibility can also
be quantified in terms of the dispersion of measurement results.
Calibration:
Calibration is a critical process in ensuring the accuracy of a measuring instrument. It involves
aligning the instrument's scale with known standard signals, typically performed by manufacturers
before use. Calibration entails adjusting the instrument to produce zero output for zero input and to
display accurate output for known input values, particularly near the full scale. Regular calibration
checks are necessary to maintain accuracy, ideally performed under similar environmental
conditions as actual measurements.
Magnification:
Magnification involves amplifying the output signal of a measuring instrument to enhance
readability. The degree of magnification should be balanced with the desired measurement accuracy,
avoiding excessive magnification that may limit the instrument's measurement range. Generally,
higher magnification leads to a narrower range of measurement.
Range:
The range is a set of values over which a system or measuring instrument can function as intended
and provide acceptable measurements. It establishes the upper and lower bounds on how accurately
the device can identify and measure a physical quantity. A thermometer's range, for instance, might
be -10°C to 100°C, meaning that it can measure temperatures precisely and accurately within this
range. While a shorter range might make the instrument only useful for certain measurement jobs, a
greater range enables the instrument to accommodate a wider spectrum of measurement values.
Threshold:
The threshold is the minimum observable input value that triggers the instrument to respond or
measure anything. It marks the beginning of the observed occurrence and is the point at which the
instrument changes from a state of non-detection to detection. The threshold, for example, in a
motion sensor is the minimum amount of movement necessary to trigger the sensor and record a
measurement. Since they specify the lowest amount of input signal that the instrument can detect
and measure with any degree of accuracy, thresholds are essential for assessing the sensitivity and
accuracy of measurements.
Hysteresis:
Depending on the direction of the input change, a measuring device's outputs may exhibit different
values for the same input. This phenomenon is known as hysteresis.
Put otherwise, hysteresis results in a lag or delay in the instrument's reaction to subsequent changes
in the input variable. Measurements made under increasing and decreasing input circumstances may
differ due to this non-linear behavior. For example, even when the input pressure values are the
same, hysteresis in a pressure sensor may result in a minor variation in the sensor's output between
rising and falling pressure.
Measurement Errors
A certain amount of error is always present when measuring a dimension; hence it is difficult to
determine its true value. The measurement error denotes the difference between the actual value of
the dimension being measured and its measured value. The actual value minus the measured value is
the mathematical expression for the measurement error.
There are two primary ways to evaluate or express measurement errors: absolute error and relative
error. Absolute error is the difference between the measured value and the true (or conventional
true) value, expressed in the same units as the quantity being measured. Relative error, on the other
hand, is the ratio of the absolute error to the comparison value used for calculating that error. This
comparison value can be the true value, the conventional true value, or the arithmetic mean for a
series of measurements.
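A minimal sketch of the two ways of expressing error (the gauge-block values are hypothetical, and the sign convention follows the "actual minus measured" form used above):

```python
def measurement_errors(measured_value, true_value):
    """Return (absolute_error, relative_error) for a single measurement."""
    absolute_error = true_value - measured_value      # same units as the quantity
    relative_error = absolute_error / true_value      # dimensionless, often quoted in %
    return absolute_error, relative_error

# Hypothetical example: a 25.000 mm gauge block measured as 25.012 mm
abs_err, rel_err = measurement_errors(25.012, 25.000)
print(f"absolute error = {abs_err:+.3f} mm, relative error = {rel_err:+.4%}")
```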
Types of Errors
During the process of measurement, various types of errors may arise, which can be categorized as
follows:
Static Errors:
These errors stem from the physical characteristics of the components within the measuring system.
There are three primary sources of static errors, and the precision of measurement can be determined
by dividing the static error by the measurement range. Static errors include:
Reading Errors:
These errors occur solely in the read-out device and are independent of other errors within the
measuring system. Examples of reading errors include parallax errors and interpolation errors.
Techniques such as using a mirror behind the readout pointer can mitigate parallax error, while
interpolation error can be addressed by employing magnifiers or digital read-out systems.
Characteristic Errors:
These errors refer to the deviation of the output of the measuring system from its theoretically
predicted or nominal performance. Linearity errors, repeatability, hysteresis, resolution errors, and
calibration errors fall under characteristic errors if the theoretical output follows a linear trend.
Environmental Errors:
These errors stem from external factors such as temperature, pressure, humidity, magnetic or electric
fields, radiation, vibrations, or shocks. Each of these factors must be controlled or compensated for
to keep environmental errors within acceptable limits.
Loading Errors:
Instrument loading error is quantified as the difference between the values of the measurand before
and after the instrument is connected. For instance, delicate components may deform under the
pressure exerted by the instrument, resulting in loading errors. Minimizing instrument loading error
requires careful selection of sensing elements and measurement instruments.
Dynamic Errors:
Time variations in the measurand cause dynamic errors, which result from the system's
inability to respond accurately to time-varying measurements. These errors are caused by factors such
as inertia, damping, friction, or physical constraints within the sensing, readout, or display systems.
For statistical analysis and the examination of error accumulation, errors are typically categorized
into two main types: systematic errors and random errors.
Systematic Errors
Except for personal errors, which vary between individuals based on the observer's personality,
systematic errors can be controlled in both magnitude and direction. Through proper analysis,
systematic errors can be identified and minimized, earning them the moniker of "controllable
errors."
Random Errors
Random errors lack consistency and occur sporadically and accidentally. They are inherent to the
measuring system and are challenging to eliminate. The specific cause, magnitude, and source of
random errors cannot be determined solely from knowledge of the measuring system or
measurement conditions.
Examples of random errors include
(a). Small variations in the positioning of setting standards and workpieces.
(b). Slight displacements of lever joints in measuring instruments.
(c). Operator errors in scale reading.
(d). Fluctuations in the friction of measuring instruments.
Measuring Instruments
Linear measurement encompasses the assessment of various dimensions, including lengths,
diameters, heights, and thicknesses, both externally and internally. It serves as a fundamental aspect
of metrology, facilitating accurate and precise quantification in diverse fields such as manufacturing,
construction, engineering, and science.
Instruments designed for linear measurements can vary in their design and functionality based on the
specific requirements of the application. For example:
1. Micrometers: These precision instruments are commonly used for measuring small distances with
high accuracy, typically featuring a calibrated screw mechanism for fine adjustments and precise
readings.
2. Vernier Calipers: Offering both inside and outside measurement capabilities, vernier callipers
utilize a sliding jaw mechanism and a secondary scale (vernier scale) to achieve highly accurate
measurements.
3. Height Gauges: Used for measuring the vertical distance between two surfaces, height gauges
feature a vertical measuring spindle and a graduated scale for precise height measurements.
4. Dial Indicators: Employed for measuring linear displacements or deflections, dial indicators feature
a needle or pointer that moves across a calibrated dial to indicate dimensional changes.
5. Thickness Gauges: These instruments are specifically designed for measuring the thickness of
materials, such as sheet metal or paper, using various mechanisms such as spring-loaded probes or
digital sensors.
6. Non-Contact Measurement Devices: Utilizing technologies such as laser or ultrasound, non-
contact measurement devices enable precise measurements to be taken without physically touching
the object being measured, ideal for fragile or delicate materials.
Vernier Instruments
The vernier principle enhances measurement accuracy by leveraging the minute difference in size
between two scales or divisions. A vernier caliper comprises two steel rules that slide along each
other. The main scale, engraved on a solid L-shaped frame, has each centimeter divided into 20 parts,
so each small division represents 0.05 cm.
5. Retainer: Serves the purpose of securing the movable part, enabling seamless transfer of
measurements.
6. Main scale (inch): Offers measurements in fractions, predominantly in inches.
7. Main scale (metric): Provides measurements with precision up to one decimal place, typically in
centimeters.
8. Depth probe: Designed for measuring the depths of objects or holes.
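A small sketch of how a vernier reading is combined from the main scale and the coinciding vernier division (the least count and readings below are hypothetical, chosen only to illustrate the calculation):

```python
def vernier_reading(main_scale_mm, coinciding_vernier_division, least_count_mm):
    """Total reading = main-scale reading + coinciding vernier division x least count."""
    return main_scale_mm + coinciding_vernier_division * least_count_mm

# Hypothetical example: main scale shows 24.0 mm, the 7th vernier line coincides,
# and the instrument's least count is 0.02 mm.
print(vernier_reading(24.0, 7, 0.02))   # 24.14 mm
```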
Thread Measurement
Screw threads play a pivotal role in mechanical design across diverse applications, serving as vital
components for controlled translational motion and facilitating disengageable connections through
fasteners. The dimensional precision of screw threads is paramount, ensuring the reliable assembly of
threaded mating components, the interchangeability of corresponding threaded parts, and the consistent
correlation between rotational input and translational output. Furthermore, accurate thread dimensions
contribute to the mechanical robustness of threaded connections, reinforcing structural integrity and
enhancing overall performance.
Micrometers can be categorized into various types, including outside micrometers, inside
micrometers, screw thread micrometers, and depth gauge micrometers. Operating on the principle of a
screw and nut, micrometers utilize the rotation of a screw through a nut to advance by a specific
distance corresponding to the pitch of the screw thread. By dividing the circumference of the screw
into equal parts, the minimum length that can be measured can be determined. This accuracy can be
further enhanced by reducing the pitch of the screw thread or increasing the number of divisions on
the circumference of the screw.
The pitch of the spindle screw divided by the number of divisions on the thimble (circular scale) gives
the least count of a micrometer. This value, the smallest increment the instrument can resolve, indicates
the micrometer's sensitivity and precision. The outside diameter and length of small objects can be
measured with an accuracy of 0.01 mm using an outside micrometer, a precision tool.
Micrometers typically have a measuring span of 25 mm and are available in various measuring
ranges, such as 0 to 25 mm, 25 to 50 mm, 125 to 150 mm, and up to 575 to 600 mm.
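The least-count relation described above can be sketched as follows (the pitch, thimble divisions, and readings are hypothetical but match the 0.01 mm least count quoted above):

```python
def micrometer_least_count(pitch_mm, thimble_divisions):
    """Least count = pitch of the spindle screw / number of divisions on the thimble."""
    return pitch_mm / thimble_divisions

def micrometer_reading(sleeve_scale_mm, thimble_division, least_count_mm):
    """Total reading = sleeve (main) scale reading + thimble division x least count."""
    return sleeve_scale_mm + thimble_division * least_count_mm

lc = micrometer_least_count(pitch_mm=0.5, thimble_divisions=50)   # 0.01 mm
print(micrometer_reading(7.5, 28, lc))                            # 7.78 mm
```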
Angular Measurements
Definition of Angle:
An angle is the space between two intersecting lines that meet at a common point.
When a circle is divided into 360 equal parts, each part is known as a degree (°). Furthermore, each
degree is subdivided into 60 smaller parts called minutes (′), and each minute is further divided into 60
parts known as seconds (″). Additionally, the unit of measurement known as the radian is defined as the
angle formed by an arc of a circle with a length equal to the radius. For instance, if the length of arc AB
is equal to the radius OA, then the angle θ is said to be 1 radian.
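The degree-minute-second and radian relationships above can be expressed directly; a short sketch with an arbitrary example angle:

```python
import math

def dms_to_degrees(deg, minutes=0, seconds=0):
    """Convert degrees, minutes (1/60 degree) and seconds (1/60 minute) to decimal degrees."""
    return deg + minutes / 60 + seconds / 3600

def degrees_to_radians(decimal_degrees):
    """360 degrees correspond to 2*pi radians."""
    return decimal_degrees * math.pi / 180

angle = dms_to_degrees(30, 15, 45)        # 30 deg 15 min 45 sec
print(angle, degrees_to_radians(angle))   # 30.2625 deg, approx. 0.5282 rad
```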
Vernier Bevel Protractor
The adjustable blade can freely slide along a groove and can be clamped at any desired length for
convenience. Additionally, it can rotate around the centre of the main scale engraved on the instrument's
body and can be securely locked in place using a clamping knob. The main scale is graduated in
degrees, while the vernier scale features 12 divisions on each side of the centre zero. These divisions are
marked from 0 to 60 minutes of arc, with each division representing 1/12th of 60 minutes, which is
equivalent to 5 minutes.
Furthermore, these 12 divisions occupy the same arc space as 23 degrees on the main scale, so each
division of the vernier scale measures (1/12) × 23° = 1 11/12 degrees.
Measurement of acute and obtuse angles is facilitated by the use of a vernier scale. When the zero
marking on the vernier scale aligns with a graduation on the main scale, the reading represents an exact
measurement in degrees. However, if the zero marking aligns with a different graduation on the vernier
scale, the number of vernier graduations multiplied by 5 minutes must be added to the main scale
reading.
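Following that rule (main-scale degrees plus the number of coinciding vernier graduations multiplied by 5 minutes), a reading can be combined as in this sketch with hypothetical values:

```python
def bevel_protractor_reading(main_scale_deg, vernier_divisions):
    """Each vernier division of the protractor corresponds to 5 minutes of arc."""
    extra_minutes = vernier_divisions * 5
    return main_scale_deg + extra_minutes / 60      # result in decimal degrees

# Hypothetical: main scale shows 41 degrees and the 8th vernier line coincides.
print(bevel_protractor_reading(41, 8))              # 41 deg 40 min = 41.667 deg
```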
Sine Bars
Sine bars (figure 1.8), crafted from high-quality, corrosion-resistant steel, boast excellent hardness,
ground surface finish, and stability. These bars feature two cylinders of equal diameter attached at the
ends. They are available in various lengths, such as 100, 200, and 300 mm, and are primarily utilized for
precise angle setting, often in conjunction with slip gauges and surface plates. The operational principle
of sine bars relies on the principles of trigonometry.
In the diagram depicted above, the standard-length AB (L) serves as a reference, and by adjusting the
stack of slip gauges (H), any desired angle (θ) can be obtained using the formula
θ = sin⁻¹(H/L)
To measure unknown angles of a component, a dial indicator is moved along the work surface, and any
deviation is noted. The slip gauges are then adjusted to ensure that the dial reads zero as it traverses
from one end to the other.
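The sine-bar relation θ = sin⁻¹(H/L) can be applied in both directions, either to find the angle from a known slip-gauge stack or to find the stack needed for a desired angle; a small sketch with hypothetical values:

```python
import math

def sine_bar_angle_deg(slip_gauge_height_mm, bar_length_mm):
    """Angle set by a sine bar: theta = asin(H / L)."""
    return math.degrees(math.asin(slip_gauge_height_mm / bar_length_mm))

def slip_gauge_height_mm(angle_deg, bar_length_mm):
    """Slip-gauge stack needed to set a desired angle: H = L * sin(theta)."""
    return bar_length_mm * math.sin(math.radians(angle_deg))

print(sine_bar_angle_deg(25.882, 100))   # approx. 15 degrees on a 100 mm sine bar
print(slip_gauge_height_mm(30, 200))     # 100.0 mm stack for 30 degrees on a 200 mm bar
```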
Gauges
Limit Gauge
A limit gauge is not a measuring gauge; it is primarily used as an inspecting gauge.
These gauges are utilized in inspection processes based on attributes.
They provide information regarding whether the products fall within the prescribed limits or not.
Control charts, such as P and C charts, are generated based on the data obtained from limit gauges to
monitor the consistency of products.
Limit gauges are primarily employed for checking cylindrical holes of identical components in mass
production.
Plug Gauges
Plug gauges as shown in figure 1.9, are precision instruments used for measuring the dimensional
accuracy of holes in mechanical components. They come in various types, each designed for specific
applications and ease of use. The three common types are single-ended, double-ended, and progressive
plug gauges.
Plug gauges are indispensable tools for ensuring the dimensional accuracy of holes in mechanical
components. Whether utilizing single-ended, double-ended, or progressive plug gauges, inspectors can
rely on these precision instruments to maintain quality standards and uphold the integrity of
manufactured parts.
Ring Gauges
Ring gauges (figure 1.11) are essential tools used for measuring the diameter of shafts with a central
hole. These gauges feature accurately finished holes achieved through grinding and lapping processes
following hardening treatments. Additionally, the periphery of the ring is knurled to enhance grip during
handling.
Two distinct types of ring gauges are commonly employed for shaft inspection: GO ring gauges and
NOGO ring gauges. The GO ring gauge is crafted with a hole set to the upper limit size of the shaft,
while the NOGO ring gauge corresponds to the lower limit. During shaft inspection, the GO ring gauge
should smoothly pass through the shaft, whereas the NOGO ring gauge should not.
To facilitate easy identification of the NOGO ring gauges, a red mark or small groove is typically
etched into the periphery. This visual indicator aids in distinguishing between the two types of gauges
during the inspection process.
Snap Gauges
Snap gauges (figure 1.12), also known as gap gauges, serve as essential tools for inspecting external
dimensions in manufacturing processes.
These gauges come in various types, each tailored to specific measurement needs. There are five types
of snap gauges:
1. Double-Ended Snap Gauge
2. Progressive Snap Gauge
3. Adjustable Snap Gauge
4. Combined Limit Gauges
5. Position Gauge
Comparators
Comparators represent a type of linear measurement tool that offers rapid and convenient assessment of
numerous identical dimensions. Unlike some other measurement devices, comparators do not directly
display the actual dimensions of the workpiece; rather, they indicate only the deviation in size.
Essentially, when using a comparator, it provides information on how much the dimension deviates
from the specified dimension, rather than the exact measurement.
Various types of comparators are available; each designed to accommodate different conditions and
requirements. Regardless of type, all comparators incorporate a magnifying device to enhance the
visualization of the dimension's deviation from the standard size. The classification of comparators is
based on the principles utilized for achieving magnification.
The common types of comparators include 1. mechanical, 2. electrical, 3. optical, and 4. pneumatic
variants.
Mechanical Comparators
Mechanical comparators utilize mechanical mechanisms to amplify small deviations.
These devices employ levers, gear trains, or a combination of both to magnify the slight movement of
an indicator. They typically offer magnifications ranging from 300:1 to 5000:1, making them suitable
for inspecting small parts machined to precise tolerances.
The dial indicator, sometimes referred to as a dial gauge (figure 1.13), is a common type of mechanical
comparator. This instrument resembles a small clock, with a plunger protruding from the bottom. When
the plunger experiences even a slight upward movement, it triggers a corresponding motion of the dial
pointer, which is graduated into 100 divisions. A full revolution of the pointer corresponds to a 1 mm
travel of the plunger, meaning that each division represents a plunger travel of 0.01 mm.
The experimental setup typically includes a worktable, a dial indicator, and a vertical post.
The dial indicator is attached to the vertical post using an adjusting screw, allowing for vertical
adjustment. The vertical post is then affixed to the worktable, which features a finely finished top
surface. The dial gauge can be precisely adjusted vertically and secured in place using a locking screw.
Advantages:
1. Robust, compact, and user-friendly design.
2. Does not require external power sources such as electricity or air.
3. Simple mechanism resulting in cost-effectiveness.
4. Suitable for use in ordinary workshops and easily portable.
Disadvantages:
1. Accuracy relies heavily on the precision of the rack and pinion arrangement; any slackness in this
mechanism reduces accuracy.
2. Increased friction due to multiple moving parts can compromise accuracy.
3. Limited range of measurement due to the pointer's movement being confined to a fixed scale.
Electrical Comparator:
An electrical comparator (figure 1.14) is a precision measuring instrument used for comparing the dimensions of
mechanical components with high accuracy. It comprises three main components:
1. Transducer: The transducer consists of an iron armature positioned between two coils, which are
supported by a leaf spring at one end. The other end of the armature is in contact with a plunger.
These coils function as two arms of an AC Wheatstone bridge circuit.
2. Amplifier: The amplifier is responsible for magnifying the input signal frequency received from the
transducer. It amplifies the signal to a level suitable for further processing and display.
3. Display Device or Meter: The amplified input signal is displayed on a terminal instrument,
typically a meter. This meter provides a visual indication of the measured displacement.
To verify the accuracy of a specimen or workpiece, a standard specimen is initially placed under the
plunger. The resistance of the Wheatstone bridge is adjusted until the meter reads zero. Subsequently,
the standard specimen is removed, and the workpiece to be measured is introduced. Any height
variation in the workpiece causes the plunger to move, which is then amplified and displayed on the
meter. The least count of this electrical comparator is typically 0.001mm (one micron), allowing for
precise measurements with high resolution.
Electronic Comparator:
The electronic comparator operates on the principle of transducer induction or the application of
frequency modulation or radio oscillation.
Electronic comparators offer advanced functionality and precision, making them valuable tools for
accurate measurements in various industrial applications despite their drawbacks.
Some of the commonly used direct measurement instruments, along with their principles of
operation, construction, and advantages/disadvantages, are described below.
Working: A screw rotation moves the instrument across the surface, causing the lapped cylinder to
roll and induce movement in the stylus, which produces a trace on the smoked glass plate.
3. Profilometer
Description: An indicator and recorder for roughness measurement in microns.
Working: The stylus, mounted in a pickup, is displaced up and down by surface irregularities,
inducing movement in an induction coil. The resulting voltage is amplified and recorded.
4. Taylor-Hobson Talysurf
Principle: Utilizes a carrier-modulating principle to trace surface irregularities as shown in Figure
1.17.
Working: The movement of the stylus is converted into changes in electric current, which are then
demodulated to produce a numerical record. This record provides a direct numerical assessment of
surface features.
These direct measurement instruments offer valuable insights into the quality and characteristics of
workpieces, enabling manufacturers to maintain high standards of precision and accuracy in their
products. However, their effectiveness relies on skilled operators and careful calibration to ensure
reliable measurements.
Common configurations of measuring machines include: i) moving lever (cantilever arm) type,
ii) moving bridge type, iii) column type, iv) moving ram horizontal type, and v) gantry type.
Coordinate Measuring Machine (CMM):
A specific type of measuring machine known as the coordinate measuring machine (CMM) (figure
1.19) is utilized for contact inspection of parts.
In computer-integrated manufacturing setups, these machines are controlled through computer
numerical control (CNC).
General software is provided to facilitate the reverse engineering of complex-shaped objects.
Components are digitized using CNC and CMM, and then converted into computer models,
streamlining the process.
Automatic work part alignment on the table is a notable feature of these machines, enhancing efficiency.
Inspection time is typically reduced to 5 to 10 percent of that required by manual methods.
A taper probe tip is provided by the measuring head, seated in the first datum hole, and set to
zero.
Successive holes are measured, with the readout indicating the coordinates of the part print hole
relative to the datum hole.
Automatic recording and data processing units are integrated for complex geometric and
statistical analysis.
Special CMMs offer both linear and rotary axes for measuring features like cones, cylinders, and
hemispheres.
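Because the readout gives hole coordinates relative to the datum hole, quantities such as centre distances follow directly from the coordinates; a sketch with hypothetical coordinates:

```python
import math

def centre_distance(hole_a, hole_b):
    """Straight-line distance between two hole centres given (x, y) coordinates in mm."""
    dx = hole_b[0] - hole_a[0]
    dy = hole_b[1] - hole_a[1]
    return math.hypot(dx, dy)

datum_hole = (0.000, 0.000)        # probe zeroed in the first datum hole
second_hole = (40.012, 29.987)     # coordinates reported by the readout (hypothetical)
print(f"centre distance = {centre_distance(datum_hole, second_hole):.3f} mm")
```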
Advantages:
1. Increased Inspection Rate
2. Enhanced Accuracy
3. Error Minimization
4. Reduced Skill Requirements
5. Cost Savings
6. Time Efficiency
Disadvantages:
1. Alignment Issues
2. Probe Run out
3. Perpendicular Errors in Z-Axis
4. Non-Square Movements
5. Digital System Errors
Applications
Measurement, fundamental to metrology, is crucial for precision and consistency across industries:
1. Manufacturing: Ensures quality and process optimization.
2. Engineering and Construction: Guarantees proper fit and tolerance analysis.
3. Automotive: Verifies part quality and streamlines assembly.
4. Aerospace and Defence: Maintains precision and safety standards.
5. Medical and Pharmaceuticals: Ensures regulatory compliance and instrument reliability.
6. Research and Development: Supports experimentation and innovation.
Unit Summary
Introduction to Measurements serves as the cornerstone for grasping fundamental concepts and
techniques essential across various fields. This unit covers a wide range of topics, including the
definition and importance of measurement, measurement methods, standards, terms relevant to
measuring instruments, measurement errors, and an overview of various measuring instruments such
as thread gauges, angle measurement tools, gauges, comparators, surface finish assessment tools,
and coordinate measuring machines (CMM).
This unit offers a comprehensive introduction to measurements, laying the groundwork for
understanding advanced concepts and applications. Mastery of these principles is essential for
accurate and reliable measurement practices across industries and disciplines.
2 Introduction
Transducers and strain gauges are the essential building blocks of modern measurement systems.
Their ability to translate physical phenomena into quantifiable electrical signals underpins precision
across manufacturing, scientific research, and quality assurance. Transducers form the cornerstone
of measurement by transforming various physical quantities into interpretable electrical signals. A
transducer transforms energy between various forms. In measurement and control systems, it converts
physical, non-electrical quantities (e.g., force, light, sound) into measurable electrical signals.
They serve two key functions:
1. Sensing: Transducers detect changes in physical quantities, enabling their measurement.
2. Signal Generation: Transducers enable us to quantify and analyze various measurements by
converting physical properties into proportional electrical signals.
Transducers
Transducers play a pivotal role in metrology by facilitating the measurement of physical quantities
like temperature, pressure, or force. They achieve this by converting these physical parameters into
electrical signals, which can be conveniently measured, transmitted, and recorded. Since many
physical quantities often cannot be measured directly by electronic instruments, the conversion carried
out by transducers becomes indispensable. Transducers enable the seamless integration of precise
electronic circuits for measurement and analysis by translating physical properties into electrical
signals. Moreover, these electrical signals can undergo amplification and conditioning, thereby
mitigating the effects of noise and enhancing measurement accuracy. Furthermore, the electrical
nature of these signals allows for easy transmission over long distances, facilitating real-time remote
monitoring of physical quantities. This capability proves particularly beneficial in industrial
environments and applications related to environmental monitoring.
Classification of Transducers
Transducers are categorized using various criteria, including their application area, energy
conversion method, nature of output signal, electrical parameters, principle of operation, and typical
applications. Broadly, transducers can be classified based on the principle of transduction as follows:
Capacitive Transducers
Inductance Transducers
Resistive Transducers
Capacitive Transducers
Capacitive transducers are a type of sensor that excels at converting various physical quantities, such
as displacement and pressure, into electrical signals. Unlike a typical capacitor with a fixed plate
separation, these have one movable plate. This allows external forces, like pressure or movement, to
alter the spacing between the plates. The working principle relies on the fact that capacitance
changes with the distance between the plates and the material filling the gap, known as the dielectric
(which can be air, a specific material, gas, or liquid). As this distance or dielectric property changes
due to the applied force, the capacitance of the transducer changes as well. This variation in
capacitance is then directly measured as an electrical signal. One of the key strengths of capacitive
transducers is their ability to measure both static (unchanging) and dynamic (continuously varying)
quantities. Additionally, the movable plate can be directly connected to the object being measured,
enabling it to operate in both contacting and non-contacting modes, making it highly versatile for
various applications.
Upon detecting changes in capacitance, the transducers translate them into electrical signals for
subsequent analysis or processing. Renowned for their elevated sensitivity, broad frequency
response, and minimal power consumption, capacitive transducers find applications in diverse fields
such as pressure sensing, proximity detection, and humidity measurement.
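The parallel-plate relation that underlies these devices, C = ε₀·εr·A/d, shows how a change in the plate gap appears as a change in capacitance; a minimal sketch with illustrative dimensions (the specific area and gap values are only examples):

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPSILON_0 * relative_permittivity * area_m2 / gap_m

c_initial = parallel_plate_capacitance(area_m2=1e-4, gap_m=0.50e-3)    # 0.50 mm air gap
c_displaced = parallel_plate_capacitance(area_m2=1e-4, gap_m=0.45e-3)  # plate moved 0.05 mm closer
print(f"{c_initial * 1e12:.2f} pF -> {c_displaced * 1e12:.2f} pF")
```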
Inductance Transducers
An inductance transducer is a device designed to transform a physical parameter, such as displacement,
into an electrical signal by detecting variations in inductance. Inductance refers to a conductor's inherent
opposition to changes in the current passing through it. This property depends on the coil's geometry and
the characteristics of the material contained within the coil, including its permeability. Inductance
transducers function as either self-generating or passive types. Self-generating variants capitalize on the
principle of electrical generation, where the movement of a conductor within a magnetic field induces a
voltage. This motion can stem from alterations in the measured quantity. An inductance transducer, also
called an electromechanical transducer, serves as an electrical apparatus engineered to translate physical
motion into fluctuations in inductance.
Inductive transducers come in two primary types: simple inductance and two-coil mutual inductance.
The Linear Variable Differential Transformer (LVDT) is a notable example.
1. Simple Inductance
This type of inductive transducer uses a single coil as its primary element. When the measured
mechanical component moves, the strength of the magnetic field generated by the circuit changes.
As a result, the circuit's inductance and output are altered. This allows for easy adjustment of the
circuit's output based on the input value, making it simple to calculate the value of the measured
parameter.
When an inductive transducer operates on self-inductance, the inductance can be mathematically
related to the reluctance:
L = n² / R
Where,
n = number of turns of the coil
R = reluctance of the magnetic circuit
The reluctance of the magnetic circuit (R) is expressed as:
R = l / (μ₀ μr A)
Where,
μ₀ is the permeability of air, μr is the relative permeability, A is the cross-sectional area of the coil, and
l is the length of the flux path.
Therefore, the inductance of a coil is expressed in terms of the permeability of the material (μ) and the
geometric factor (K), since the inductance is a function of N, μ and K, i.e., L = f(N, μ, K). In the simple
inductance-type transducer, there are three primary constructional arrangements for the inductive coil:
Type I: The inductance coil is wound over a rectangular magnetic material.
Type II: The inductance coil is wound on a cylindrical magnetic material.
Type III: Two coils are employed in the setup.
Type I: The inductance coil is wound over a rectangular magnetic material.
An inductive transducer of this design employs a ferromagnetic core shaped like a rectangle around
which a single inductor coil with N turns is wound. This coil acts as the magnetomotive force (MMF)
source, driving the generated flux through the established magnetic circuit.
An armature element is positioned opposite to the wound inductive coil. Any movement in this
mechanical armature alters the permeability of the flux path, subsequently modifying the circuit's
inductance. This change in inductance corresponds to an output, which can be directly calibrated to
reflect the movement of the armature element.
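Combining L = n²/R with R = l/(μ₀·μr·A) gives L = μ₀·μr·n²·A/l, so a change in the effective flux-path (air-gap) length shows up directly as a change in inductance; a sketch with illustrative coil dimensions (turn count, area, and gap values are only examples):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def coil_inductance(turns, area_m2, flux_path_length_m, relative_permeability=1.0):
    """L = n^2 / R with reluctance R = l / (mu0 * mu_r * A)."""
    reluctance = flux_path_length_m / (MU_0 * relative_permeability * area_m2)
    return turns ** 2 / reluctance

# Hypothetical 500-turn coil; armature movement changes the effective air gap.
print(coil_inductance(500, area_m2=1e-4, flux_path_length_m=1.0e-3))   # gap = 1.0 mm
print(coil_inductance(500, area_m2=1e-4, flux_path_length_m=0.8e-3))   # gap = 0.8 mm
```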
The two-coil self-inductance transducer comprises dual distinct coils organized in a specific
configuration. The primary coil receives excitation from an external power source, while the secondary
coil captures the output. Notably, both the mechanical input and output are directly proportional in this
arrangement.
Two separate coils, A and B, are wound opposite to each other on a rectangular magnetic material in
this setup. The excitation coil is denoted as A, while the output coil is represented as B. An armature is
positioned opposite to both the input and output inductive coils. Any alteration in the armature's position
changes the air gap between the rectangular inductive base material and the armature element.
Consequently, the inductance of the output coil B changes in proportion to the mechanical displacement
of the armature.
Advantages of two-coil self-inductance transducers include their non-contact operation, durability, and
reliability. They are unaffected by environmental factors such as dust, dirt, or moisture, making them
suitable for harsh industrial environments. Additionally, they can detect metallic objects regardless of
their surface properties, shape, or colour.
Resistive Transducers
Resistive transducers are electronic components designed to convert physical quantities, such as
temperature, pressure, force, or displacement, into changes in electrical resistance. This variation in
resistance facilitates easy measurement and subsequent conversion back into the corresponding physical
quantity. These transducers find extensive use across various applications owing to their simplicity,
cost-effectiveness, and precision.
Strain gauges: Consisting of small wire-based resistors, strain gauges exhibit changes in resistance
under mechanical strain, making them suitable for measuring force, pressure, or weight.
Resistance temperature detectors (RTDs): Employing the principle that the resistance of a metal wire
rises with temperature, RTDs serve as temperature sensors, often applied in industrial settings to
monitor high temperatures.
Thermistor: These temperature sensors exhibit either a positive or negative temperature coefficient of
resistance, causing their resistance to increase or decrease with temperature rise. Thermistors are
commonly employed in low-cost temperature measurement scenarios.
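As an illustration of the resistance-temperature behaviour described above, the simplest linear RTD model R(T) = R₀(1 + αT) can be sketched; the values below assume a Pt100-style sensor (R₀ = 100 Ω, α ≈ 0.00385 per °C), used here only as an example:

```python
def rtd_resistance(temp_c, r0_ohm=100.0, alpha_per_c=0.00385):
    """Linear RTD approximation: R(T) = R0 * (1 + alpha * T), with T in degrees Celsius."""
    return r0_ohm * (1 + alpha_per_c * temp_c)

for t in (0, 50, 100):
    print(f"{t:>4} degC -> {rtd_resistance(t):.2f} ohm")   # 100.00, 119.25, 138.50 ohm
```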
Piezoelectric Transducer
The term "piezoelectric" comes from the Greek word "piezein," which means to press or squeeze. The
piezoelectric effect is a phenomenon where applying mechanical stress or force to a quartz crystal
generates electrical charges on its surface. This effect was first discovered by Pierre and Jacques Curie.
The amount of charge generated is directly proportional to the applied mechanical
stress, resulting in a higher voltage with increased stress levels.
Piezoelectric transducers, also known as piezoelectric sensors, are instruments designed to convert
various physical quantities into measurable electrical signals by harnessing the piezoelectric effect. A
transducer is a device that converts energy from one form to another, and piezoelectric material is a
specific type of transducer. When force or pressure is applied to this material, it induces a voltage that is
directly proportional to the applied stress. This voltage can be easily measured using standard voltage-
measuring equipment. The main advantage of piezoelectric transducers is the direct correlation between
the measured voltage and the applied stress. This inherent relationship makes it easier to determine
physical quantities such as mechanical stress or force based solely on voltage readings. As a result,
piezoelectric transducers provide a convenient and efficient way to directly measure various physical
phenomena, making them useful across a wide range of scientific and industrial applications.
Figure 2.6 Piezoelectric Effect
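A minimal sketch of the direct piezoelectric relationship, charge proportional to applied force (q = d·F), and the resulting open-circuit voltage across the sensor capacitance (the charge constant and capacitance below are only illustrative, roughly in the range of quartz sensors):

```python
def piezo_output_voltage(force_n, charge_constant_c_per_n=2.3e-12, capacitance_f=100e-12):
    """Generated charge q = d * F; open-circuit voltage V = q / C."""
    charge = charge_constant_c_per_n * force_n
    return charge / capacitance_f

for f in (10, 100, 1000):          # applied force in newtons
    print(f"{f:>5} N -> {piezo_output_voltage(f):.3f} V")
```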
Piezoelectric actuators and sensors operate in opposite ways. While sensors convert mechanical stress
into an electrical signal, actuators use electric voltage to generate mechanical deformation in the
material. By regulating the voltage applied, the actuator's movement can be precisely controlled,
allowing for accurate positioning and actuation. Piezoelectric transducers are made up of a quartz
crystal, in which silicon and oxygen atoms are arranged in a crystalline structure (SiO2). Although most
crystals have a symmetrical unit cell, piezoelectric quartz crystals do not. However, despite the lack of
symmetry, they maintain electrical neutrality. The arrangement of atoms inside the crystal may not be
symmetrical, but the positive and negative charges are balanced, resulting in a net neutral charge. When
mechanical stress is applied along a specific plane, quartz crystals generate an electrical polarity. This
stress can be in the form of compression or tension, and its magnitude and direction determine the
resulting deformation.
The piezoelectric effect is a fundamental phenomenon that occurs when certain materials generate an
electric charge when exposed to mechanical stress. An unstressed quartz crystal remains uncharged, but
subjecting it to compressive stress induces positive charges on one side and negative charges on the
opposite side. This polarity shift causes a dimensional alteration in the crystal, elongating it and making
it thinner. Applying tensile stress reverses this charge distribution, resulting in a contraction of the
crystal, making it shorter and thicker. Piezoelectric transducers operate on this principle. The effect is
reversible, meaning that applying an electric voltage induces a dimensional change along a specific
plane in the piezoelectric crystal. For example, placing a quartz crystal within an electric field causes
proportional deformation based on the field's strength. Reversing the electric field's direction leads to an
opposite deformation in the crystal.
Piezoelectric transducers serve as self-generating devices, obviating the need for an external electric
voltage source. They produce an electric voltage directly proportional to the applied stress or force,
making them highly sensitive and suitable for sensor applications.
Due to their exceptional frequency response, piezoelectric transducers are widely used in accelerometers
and find relevance across diverse fields. Applications of the piezoelectric effect extend to sound
production and detection, electronic frequency generation, and ignition systems for cigarette lighters.
Moreover, piezoelectric transducers are integral components in sonar technology and microphones,
facilitating the measurement of force, pressure, and displacement with remarkable precision and
reliability.
Strain Measurement
Strain measurement involves quantifying the deformation or alteration in the shape of an object when
exposed to external forces. It is a pivotal concept in engineering, material science, and specific areas of
physics due to its ability to evaluate the structural integrity of loaded objects, ensuring they function
within safe parameters, and to characterize the mechanical properties of materials, including elasticity
and strength. It also helps to identify potential issues and to monitor structures and machinery to prevent
major failures.
Strain Gauges
A strain gauge is a pivotal instrument for quantifying strain or deformation across diverse material
substrates and is crucial for monitoring mechanical stresses in engineering applications. Particularly
vital in solid mechanics, strain gauges ascertain the extent of deformation incurred by objects under
external forces. Typically fashioned from a thin wire or foil arranged in a grid or zigzag pattern, these
gauges exhibit alterations in electrical resistance commensurate with applied mechanical strain. This
resistance modification, directly proportional to the exerted strain, facilitates meticulous deformation
measurement.
Extensive applications of strain gauges encompass diverse fields and they play a vital role in:
1. Monitoring and analyzing the behaviour of structures under various loading conditions.
2. Determining material properties like elasticity and strength.
3. Assessing the structural integrity of components and structures.
4. Optimizing designs by providing valuable insights into material behaviour under stress.
When an external force is applied to an object, it induces deformation, altering its shape and potentially
causing variations in its length and cross-sectional area. These alterations affect the attached strain
gauge, resulting in discernible shifts in its electrical resistance. To precisely quantify this change, a
gauge indicator is affixed or soldered onto the surface of the object. As the object experiences
deformation in response to the applied force, the strain gauge undergoes corresponding shape changes,
thereby eliciting resistance alterations. This change in resistance directly indicates the object's response
to the applied force, offering valuable insights into its mechanical properties and structural integrity.
Through careful analysis of these resistance variations, engineers and researchers can glean critical
information about the object's behaviour under stress, aiding in designing, testing, and optimising
various mechanical systems and structures.
When the object to which it is bonded is loaded, the gauge undergoes deformation, inducing length and
cross-sectional area alterations. These physical changes
directly influence the electrical resistance of the gauge, facilitating measurements of the object's
properties. In the realm of strain gauge systems, the assessment of resistance variation commonly
employs a Wheatstone bridge circuit. This circuit, comprising four resistive arms, incorporates one arm
housing the strain gauge while the remaining three arms contain fixed resistors. Upon the application of
strain, the resistance of the gauge undergoes modification, instigating an imbalance within the
Wheatstone bridge. This imbalance, in turn, yields a minute electrical output signal proportionate to the
applied strain. The meticulous analysis of this signal enables the determination of strain magnitude,
thereby facilitating the comprehensive evaluation of mechanical properties such as stress, load, and
deformation in structural components. The ubiquitous utilization of strain gauges spans a multitude of
industries, including civil engineering, aerospace, automotive, and materials testing. Within these
sectors, strain gauges serve as indispensable instruments for unraveling the intricate behavior of
structures and materials under various loads, thereby informing critical decision- making processes and
fostering advancements in engineering and technology.
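For the quarter-bridge arrangement described above (one active gauge and three fixed resistors), the output for small resistance changes is approximately V_out ≈ (V_ex/4)·(ΔR/R) = (V_ex/4)·GF·ε; a sketch with hypothetical excitation voltage and gauge factor:

```python
def quarter_bridge_output(strain, gauge_factor=2.0, excitation_v=5.0):
    """Approximate output of a balanced Wheatstone quarter-bridge with one active gauge."""
    delta_r_over_r = gauge_factor * strain
    return excitation_v * delta_r_over_r / 4

# 500 microstrain on a foil gauge (GF ~ 2) with 5 V excitation
print(f"{quarter_bridge_output(500e-6) * 1e3:.3f} mV")   # approx. 1.25 mV
```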
When a force is applied to a metallic wire, it undergoes strain, causing an increase in its length. The
magnitude of strain experienced by the wire is directly related to the applied force. If the wire's initial
length is denoted by L₁ and the final length after the application of force is denoted by L₂, the strain (ε)
can be calculated using the formula:
ε = (L₂ - L₁) / L₁
When subjected to stretching, a wire experiences elongation along its length, concomitant with a
reduction in diameter, thereby undergoing a transformation in shape that influences its electrical
resistance. Precisely, the elongation of the conductor leads to a decrease in its electrical resistance. This
alteration in resistance is amenable to quantification and correlation with the magnitude of the applied
force. Strain gauges fulfill the essential role of quantifying force, displacement, and stress within
structural components and materials. The relationship between the input, represented by the applied
strain, and the output, symbolized by the resultant change in resistance, is encapsulated by the term
"gauge factor" or "gauge gradient." This parameter denotes the ratio of the change in resistance (AR) to
the applied strain (c). In essence, the gauge factor provides a quantitative measure of the sensitivity of
the strain gauge to mechanical deformation, thereby facilitating precise and accurate measurements of
force, displacement, and stress in diverse engineering applications.
For instance, consider a wire strain gauge comprising a uniform conductor with resistivity (ρ), length
(l), and cross-sectional area (A). The resistance (R) is contingent upon its geometry, given by:
R = ρl / A
The change in resistance is determined by the combined effects of changes in length, cross-sectional area, and
resistivity:
dR = (ρ/A) dl − (ρl/A²) dA + (l/A) dρ
Dividing through by R = ρl/A gives
dR/R = dl/l − dA/A + dρ/ρ
When the strain gauge is properly attached and bonded to an object's surface, it is considered to deform
in conjunction with the object. The strain experienced by the strain gauge wire in the longitudinal
direction is equivalent to the strain experienced by the surface in the same direction.
ε_l = dl / l
When a wire undergoes deformation, its Poisson's ratio (ν) influences its cross-sectional area. For a
cylindrical wire with an initial radius of r, the normal strain experienced in the radial direction is
affected accordingly. The normal strain in the radial direction (ε_r) can be calculated using the following
formula:
ε_r = dr / r = −ν ε_l = −ν (dl / l)
The rate of change of the cross-sectional area is two times of the radial strain when the strain is small.
dA / A = −2ν (dl / l)
The rate of change of resistance is therefore
dR/R = (1 + 2ν)(dl/l) + dρ/ρ
The resistance sensitivity to strain for a given material can be calibrated with the equation
S = (ΔR/R) / ε_l = 1 + 2ν + (dρ/ρ) / ε_l
Strain gauge vendors typically provide the sensitivity factor S, which can be used to calculate the
change in electric resistance and determine the average strain at the attachment point.
ε_avg = (ΔR/R) / S
Applications
1. Strain gauges play a vital role in structural monitoring, safeguarding structures like bridges and
dams by measuring strains and stresses to detect potential weaknesses.
2. In experimental studies, strain gauges analyze material behavior under various loads, offering
insights into material performance.
3. Aerospace relies on strain gauges to monitor aircraft structural integrity, detecting fatigue and stress
concentrations to ensure safety.
4. Automotive testing utilizes strain gauges to assess component performance and durability,
optimizing designs and enhancing safety.
5. Strain gauges monitor ground movements, aiding in assessing slope stability, predicting landslides,
and monitoring structural performance.
6. Critical in infrastructure, strain gauges monitor the health of structures like bridges and tunnels,
detecting degradation and ensuring safety.
Strain gauges deliver continuous data, enabling engineers to monitor the behaviour of structures in
real-time. This is especially crucial during load testing, construction phases, or seismic activity,
allowing for immediate identification of any concerning strain levels or deformations. Compared to
other techniques like extensometers, strain gauges offer a more economical way to measure strain.
Their affordability and reusability make them ideal for conducting multiple measurements at
different points within a structure.
Every strain gauge is designed to function within a specific range, ensuring the accuracy and
reliability of the data it provides. When high strains are expected, such as during dynamic load
testing or extreme events, accuracy may decrease beyond this limit. Strain gauges are delicate and
susceptible to damage during construction or accidental impacts. Therefore, ensuring their protection
is essential to obtaining reliable and consistent measurements.
2. Electrical:
These gauges typically encompass slender, rectangular-shaped foil strips adorned with intricate
wiring patterns that ultimately converge onto a pair of electrical cables. When subjected to strain,
the monitored material imparts subtle bending to the foil strip, prompting the labyrinthine wires to
either undergo separation (resulting in slight thinning) or converge (leading to slight thickening).
Consequently, as the cross-sectional dimensions of the metal wire fluctuate, its electrical resistance
undergoes commensurate variations in response to the applied stress. Under conditions where the
applied forces remain within a minimal range, the ensuing deformation remains elastic, eventually
allowing the strain gauge to revert to its initial configuration. This characteristic highlights the
gauge's resilience to mechanical loading, ensuring its longevity and reliability in diverse
measurement applications.
3. Piezoelectric:
Piezoelectric sensors are a type of strain gauge that generates electrical voltages when compressed
or stretched, making them highly sensitive and reliable. This is because they exhibit
piezoelectricity, which is the ability of a material to generate electricity when subjected to
mechanical stress. By measuring the voltage output of these sensors, we can easily calculate the
amount of strain that the material is experiencing. Due to their accuracy and reliability,
piezoelectric strain gauges are widely used in various applications.
The Gauge Factor (GF) is a crucial parameter used in strain measurement, particularly in electrical
strain gauges. It represents the ratio of the relative change in electrical resistance of the strain gauge
to the mechanical strain experienced by the gauge. Mathematically, it is expressed as:
G.F. = (ΔR/R_G) / ε
where, ΔR - change in resistance,
R_G - resistance of the undeformed gauge, and ε - mechanical strain.
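As a quick illustration of this relationship, the short sketch below computes the strain implied by a measured resistance change. The gauge resistance (350 Ω) and gauge factor (2.0) are assumed, typical catalogue-style values rather than figures from this text.

```python
def strain_from_resistance(delta_r, r_gauge, gauge_factor):
    """Return mechanical strain from the gauge-factor relation GF = (dR/R) / strain."""
    return (delta_r / r_gauge) / gauge_factor

# Hypothetical example: a 350-ohm gauge with GF = 2.0 reading a 0.35-ohm increase
strain = strain_from_resistance(delta_r=0.35, r_gauge=350.0, gauge_factor=2.0)
print(f"Strain = {strain:.6f} ({strain * 1e6:.0f} microstrain)")  # 0.000500 -> 500 microstrain
```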
Types of strain gauges based on the configuration
1. Quarter-bridge
2. Half-bridge
3. Full-bridge
1. Quarter bridge:
This setup features a single active strain gauge, making it the simplest configuration, albeit the least
sensitive. Typically, the rheostat arm (R2) is adjusted in the bridge circuit diagram to match the
strain gauge resistance when no force is applied. Both ratio arms (R₁ and R3) are set to equal values.
Consequently, without any force acting on the strain gauge, the bridge is symmetrically balanced,
resulting in zero voltage on the voltmeter, indicating zero force exerted on the strain gauge.
The strain gauge changes its electrical resistance when subjected to either compression or tension.
Specifically, when experiencing compression, the resistance decreases, whereas under tension, it
increases. This resistance alteration perturbs the bridge circuit's equilibrium, inducing an imbalance
that results in a voltage reading on the connected voltmeter. This configuration, in which a single
element within the bridge circuit exhibits a change in resistance proportional to the measured
variable (mechanical force), is commonly referred to as a quarter-bridge circuit. The strain gauge is
pivotal in this circuit arrangement, serving as the primary sensing element that converts mechanical
deformation into discernible electrical signals. This configuration can obtain precise measurements
of applied forces, facilitating accurate analysis and evaluation of structural integrity and
performance.
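To make the quarter-bridge behaviour concrete, the following sketch estimates the bridge output voltage for a small resistance change in one arm. It assumes an ideal Wheatstone bridge with equal-valued fixed arms and illustrative component values (5 V excitation, 120 Ω arms), none of which come from this text.

```python
def quarter_bridge_output(v_excitation, r_nominal, delta_r):
    """Output of an ideal Wheatstone bridge: one active arm (R + dR), three fixed arms R."""
    # Midpoint voltage of the divider containing the active gauge, minus the reference midpoint
    v_active = v_excitation * (r_nominal + delta_r) / (2 * r_nominal + delta_r)
    v_reference = v_excitation / 2.0
    return v_active - v_reference

# Illustrative values: 5 V excitation, 120-ohm arms, 0.12-ohm increase under tension
v_out = quarter_bridge_output(v_excitation=5.0, r_nominal=120.0, delta_r=0.12)
print(f"Bridge output = {v_out * 1000:.3f} mV")  # roughly 1.25 mV, i.e. about Vex/4 * (dR/R)
```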
3. Full bridge: The Full Bridge Strain Gauge configuration entails the utilization of all four resistors
within the Wheatstone bridge circuit as strain gauges. This configuration offers the highest sensitivity
compared to half-bridge or quarter-bridge setups. It finds widespread application in sectors necessitating
high sensitivity and precision, such as the aerospace, automotive, and civil engineering industries. In this
configuration, the two strain gauges on one arm of the bridge are connected in series, while those on the
opposing arm are connected in parallel.
[Figure: full-bridge circuit showing the stressed and unstressed strain gauge arms]
This arrangement effectively balances the resistance and temperature sensitivity of the circuit,
thereby enhancing measurement accuracy. Integration of a signal conditioning amplifier with the full
bridge strain gauge circuit is common practice to amplify the output voltage to a level suitable for
accurate measurement by a data acquisition system or other measurement instruments. Ensuring
compatibility between the amplifier's input impedance and the output impedance of the bridge is
imperative to mitigate signal loss or distortion.
Optical strain gauges deliver precise and reliable strain measurements with high levels of accuracy. Despite
their limited application in industrial contexts, optical sensors remain indispensable tools in research
and development settings where precision and accuracy are paramount.
3. Semiconductor strain:
Piezo-resistive strain gauges, also known as semiconductor gauges, are preferred for measuring
small strains over foil gauges. They rely on the piezo-resistive properties of materials like silicon or
germanium to detect changes in resistance under stress rather than directly measuring strain.
Typically constructed from a wafer with a resistance element diffused into a silicon substrate, these
gauges lack a backing and require careful bonding to the strained surface using a thin layer of epoxy.
Precise bonding is crucial. Semiconductor gauges are smaller and less expensive than metallic
foil sensors, and the same epoxy adhesives used for foil gauges are used for bonding them.
However, semiconductor strain gauges are more susceptible to temperature variations and
tend to drift more than metallic foil sensors. Additionally, their resistance-strain relationship is
nonlinear, although this limitation can be addressed through software compensation techniques.
In an unbonded strain gauge, relative movement of the supporting frame stretches the taut sensing wire; consequently, the wire experiences a corresponding alteration in length, resulting in a change in its
electrical resistance. This change in resistance is directly proportional to the applied strain,
facilitating precise measurement of mechanical deformation. Unbonded strain gauges prove
particularly beneficial in scenarios where direct bonding to the surface is impractical or where high
flexibility and dynamic response are essential, as observed in the aerospace and automotive
industries.
Based on the applications, the strain gauges are classified into four types
1. Electrical Resistance Strain Gauges
The most commonly used strain gauges typically consist of a finely crafted metallic grid firmly
bonded to a backing material. These strain gauges function by detecting alterations in the wire's
resistance in response to applied strain, a principle exploited in their measurement using Wheatstone
bridge circuits. Renowned for their exceptional attributes including high sensitivity, accuracy, and
stability, these strain gauges are extensively employed for monitoring minute strains in various
structural components such as bridges, dams, and buildings. Their reliability and precision make
them indispensable tools in ensuring the structural integrity and safety of critical infrastructure.
Preparation
The first step in the installation process involves cleaning the surface of the test specimen where the
gauge is to be bonded. To establish a clean, shiny metallic surface, it is critical to eliminate all traces
of grease, rust, paint, and any other contaminants. To achieve this, it is recommended to use abrasive
paper to uniformly and finely abrade an area larger than the bonding area. This will ensure the
bonding surface is smooth and free from impurities. Next, clean the region with an industrial tissue
or cloth soaked in chemical solvent until it is entirely free of contamination. Ensure that the solvent
used is suitable for the material being cleaned. This will help to remove any remaining dirt, dust, or
other residues that may interfere with the bonding process. After cleaning the surface, it is essential
to let it dry completely before installing. This can be done by using a clean, dry cloth or by air-
drying the surface for a few minutes.
Additionally, it is crucial to allow adequate curing time for the adhesive to be fully set before
subjecting the assembly to any strain or testing procedures. This procedure ensures optimal bonding
strength and reliability of the strain gauge installation, which is essential for accurate strain
measurement and analysis in various engineering applications.
The lead wires are then soldered to the gauge terminals, ensuring cautious application to prevent overheating the terminal and potential
detachment of the metal foil.
Moreover, verifying the integrity of the solder joint and the electrical connection is imperative to
guarantee accurate signal transmission during strain measurement. This meticulous soldering
process is essential for maintaining the reliability and performance of the strain gauge system in
various engineering applications.
Pre-coating
Before bonding the strain gauge, surface preparation is crucial to establish a barrier against any
potential moisture released from the concrete or mortar surface. This barrier aims to prevent
moisture absorption by the underside of the strain gauge. Initially, cut the gauge binder provided
with the strain gauge approximately 5 mm inward from the fold. Next, apply packing tape around
the perimeter of the binder, effectively masking an area roughly 10 mm larger than the binder on
each side. Subsequently, the adhesive must be applied thoroughly onto the mortar or concrete
surface. Ensure that the adhesive is applied to form a layer measuring 0.5 mm to 1 mm thick on the
installation surface. This meticulous surface preparation is essential to optimize the bonding strength
and reliability of the strain gauge installation, ensuring accurate strain measurement and analysis in
concrete or mortar structures.
Preparation
Preparing the surface for installation involves the straightforward removal of any dirt and oil using a
surface preparation agent to achieve a clean surface. Upon receiving the weldable strain gauge, it
comes equipped with a metal ribbon intended for trial welding. This ribbon includes a securing
sleeve and an MI cable. The trial welding process is initiated to adjust the welding power of the spot
welder. During this process, if cracks or holes appear in the ribbon, it indicates that the welding
power should be reduced. Conversely, if the ribbon remains unmarked, it suggests that the power
should be increased accordingly. This iterative adjustment ensures optimal welding conditions for
secure, reliable strain gauge attachment to the metallic surface.
Welding Process
Before initiating the welding process, it's essential to precisely align the strain gauge at the centre of
the installation area. Utilize a spot welder and metal ribbon to apply pressure on both sides of the
gauge. During the installation, it's critical to carefully plan the number and sequence of welding
points to ensure they do not form a crisscross pattern. This precaution is vital to prevent the
inclusion of any mechanical stresses in the steel substrate. Secure the MI cable with the metal ribbon
to alleviate any strain on the secured sleeve. Additionally, gently curving the cable between the
gauge and the connecting terminal can help avoid undue strain on the MI cable. It's worth noting that
various types of strain gauge installations exist, depending on the connection technique and the
properties of the installation surface. Selecting the appropriate installation method is crucial to
ensure the integrity and accuracy of the strain measurement system.
A key benefit of strain gauge rosettes is their ability to capture strain in multiple directions at once.
This is particularly valuable when the strain distribution across the material's surface is uneven. With
strategically placed gauges in the rosette configuration, engineers gain a comprehensive
understanding of how the material behaves under load by measuring strain variations along different
axes. The applications of strain gauge rosettes extend across various industries. In aerospace
engineering, they monitor the structural integrity of aircraft components under dynamic flight forces.
In automotive engineering, they assess the performance of vehicle chassis and suspension systems
during diverse driving conditions. Similarly, civil engineers use them to evaluate the behaviour of
structural elements in buildings, bridges, and other infrastructure projects. In addition to single-
element strain gauges, a combination of strain gauges called rosettes is available in many
combinations for specific stress analysis.
Two-element rosettes
Two-element rosettes are a type of strain gauge rosette consisting of two strain gauges positioned at
a 90-degree angle. They are typically used when the principal directions of strain (the highest and
lowest strains experienced by the material) are already known. By measuring the strain in each
gauge, the normal strains (strains in the direction of the gauge) in the x and y directions can be
determined.
Three-element rosettes
Three-element rosettes permit the determination of all three strain components: the normal strains along both
axes and the shear strain.
Any three gauges used together at one location on a stressed object are called a strain rosette.
Large angles between the gauges are used to increase the accuracy of a strain rosette. A common rosette of three gauges
separates the gauges by 45∘, i.e., θa = 0∘, θb = 45∘, θc = 90∘. The three equations can then be
simplified to
εa = (εx + εy)/2 + (εx − εy)/2
εb = (εx + εy)/2 + γxy/2
εc = (εx + εy)/2 − (εx − εy)/2
Similarly, if the angles between the gauges are 60∘, i.e., θa = 0∘, θb = 60∘, θc = 120∘, the unknown
strains εx, εy, and γxy will be
εx = εa
εy = (2εb + 2εc − εa) / 3
γxy = 2(εb − εc) / √3
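As a worked example of these rosette relations, the sketch below recovers εx, εy, and γxy from three gauge readings for both the 45° and 60° layouts. The numerical readings are made up purely for illustration.

```python
import math

def rectangular_rosette(ea, eb, ec):
    """0/45/90 degree rosette: returns (eps_x, eps_y, gamma_xy)."""
    eps_x = ea
    eps_y = ec
    gamma_xy = 2 * eb - (ea + ec)   # from eb = (ex + ey)/2 + gxy/2
    return eps_x, eps_y, gamma_xy

def delta_rosette(ea, eb, ec):
    """0/60/120 degree rosette: returns (eps_x, eps_y, gamma_xy)."""
    eps_x = ea
    eps_y = (2 * eb + 2 * ec - ea) / 3
    gamma_xy = 2 * (eb - ec) / math.sqrt(3)
    return eps_x, eps_y, gamma_xy

# Illustrative gauge readings (strain, i.e. dimensionless)
print(rectangular_rosette(400e-6, 300e-6, 100e-6))
print(delta_rosette(400e-6, 300e-6, 100e-6))
```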
1 Force Measurement
i) Spring Balance
ii) Proving Rings
iii) Load Cells
2 Torque Measurement
i) Prony Brake Dynamometer
ii) Eddy Current Dynamometer
iii) Hydraulic Dynamometer
3 Pressure Measurement
i) Mcleod Gauge
Force Measurement
Force is a fundamental concept in physics that describes the push or pull that can cause an object to
change its state of motion. Measuring force accurately is crucial in various scientific and engineering
disciplines. There are two main approaches to force measurement: direct and indirect.
Direct methods involve a head-to-head comparison between the unknown and known gravitational
forces acting on a standard mass. This leverages the principle that any object with mass experiences
an attractive force due to Earth's gravity, also known as weight. The weight (W) can be calculated
using the following equation: W = mg
W - Weight of the object (force due to gravity)
m - Mass of the object (standard mass)
g - Acceleration due to gravity (approximately 9.81 m/s²)
Indirect Method
Indirect methods involve converting the effect of the unknown force into a measurable quantity
using various transducers or sensors. These sensors translate the force into a secondary effect, such
as deformation or a change in electrical properties, that can be readily measured and correlated back
to the force using established principles.
1. Spring Balances: Spring balances operate according to Hooke's Law, which dictates that the
elongation of an elastic material is directly proportional to the applied force within the material's
elastic limit. These devices typically utilize a spring with a known spring constant (k). The
spring constant represents the force required to stretch the spring by a specified unit length. The
force applied can be calculated by measuring the displacement caused by an unknown force
acting on the spring and using the known spring constant. Spring balances are favoured for their
simplicity, portability, and capacity to measure a wide range of forces. However, it's important to
note that they may exhibit lower accuracy compared to direct measurement methods, especially
for highly precise applications.
2. Strain Gauges: Strain gauges are electrical resistance-based sensors that are securely attached to
a material. When an external force is applied to the material, it undergoes deformation, resulting
in a change in the electrical resistance of the strain gauge. This alteration in resistance can be
accurately measured and subsequently converted back to the force applied using the gauge's
calibration factor. Strain gauges are renowned for their high sensitivity, making them particularly
suitable for applications where intricate stress distributions must be measured precisely. By
detecting minute changes in resistance, strain gauges provide valuable insights into the
mechanical behaviour of materials under varying loads, facilitating the optimization of structural
designs and ensuring the integrity and safety of engineering systems.
3. Piezoelectric Sensors: These sensors utilize the piezoelectric effect, where certain materials
generate a measurable voltage proportional to the applied force. Piezoelectric sensors are well-suited
suited for dynamic force measurements due to their fast response times.
Spring Balance
The spring balance serves as an effective device for measuring force or tension. Comprising a
coiled spring enclosed within a metal or plastic shell, it features a hook or loop on one end for
attaching the object under measurement and a pointer or scale on the opposite end for reading the
applied force.
The core component of the spring balance, the coiled spring, is calibrated with a known spring
constant, dictating the extent of expansion or contraction in response to the applied force. These
springs are typically crafted from materials like steel with high tensile strength and ensure
precise and reliable measurements. The pointer or scale located at the opposite end allows for the
direct reading of the applied force. Graduated with force units such as pounds or Newtons, the
scale enables straightforward and accurate interpretation of the recorded force. Spring balances
offer versatility in force measurement, capable of handling forces ranging from small increments
to several kilograms or more. This broad range accommodates various applications, from
precision tasks to heavy-duty operations. The working principle of a spring balance is based on
Hooke's law, which states that the elongation or compression of a spring is directly proportional
to the force or load exerted on it. Consequently, the scale markings on the spring are equally
spaced to reflect this proportionality.
According to Hooke's law, if the load applied to the spring is doubled, the deformation of the spring
(elongation or compression) will also double. This direct relationship between
the load and the spring deformation forms the basis for the operation of spring balances, allowing for
the measurement of forces by observing the extent of spring displacement.
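A minimal sketch of this proportionality, using an illustrative spring constant that is not specified in the text:

```python
def spring_force(spring_constant_n_per_m, displacement_m):
    """Hooke's law: indicated force is proportional to spring deflection (within the elastic limit)."""
    return spring_constant_n_per_m * displacement_m

# Assumed example: a 500 N/m spring stretched by 20 mm indicates a 10 N load
print(spring_force(500.0, 0.020))  # 10.0 N; doubling the deflection doubles the indicated force
```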
Proving Rings
The proving ring stands as one of the foremost devices for force measurement. A displacement
transducer links the ring's top and bottom to gauge the displacement prompted by applied pressure.
Measuring the relative displacement yields the applied force magnitude. Various methods can measure
deflection, such as precise micrometers, linear variable differential transformers (LVDTs), or strain
gauges. Compared to alternative devices, proving rings exhibit heightened strain due to their
construction. Crafted from steel, proving rings find utility in static load measurement and calibration of
tensile testing machines. Their load range spans from 1.5 kN to 2 MN. A typical proving ring features a
circular ring with a rectangular cross-section, depicted in Fig 2.23, characterized by its thickness (t), radius (R),
and axial width (b). Capable of enduring tensile or compressive forces across its diameters, the ring's
ends are attached to structures for force measurement. Four strain gauges are affixed to the ring's walls:
two on the inner walls and two on the outer walls. Application of force triggers compressive strain (−ε)
in gauges 2 and 4, while gauges 1 and 3 undergo tension (+ε).
The four strain gauges are integrated into a bridge circuit, enabling the measurement of the unbalanced
voltage resulting from the applied force. This voltage, calibrated in terms of force, directly indicates the
force magnitude. The following expression determines the strain's magnitude:
ε = 1.08FR / (Ebt²)
The relationship between the applied force and the deflection caused by the applied force is described
by the following expression:
δy = (π/2 − 4/π) · Fd³ / (16EI)
where E is Young's modulus, I the moment of inertia, F the force, d the outside diameter of the ring, and δy the
deflection.
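Using the relations above (the deflection expression is reconstructed from the text), the sketch below evaluates the gauge strain and the diametral deflection of a hypothetical proving ring. Every dimension and load is an assumed illustration, not a value from this text.

```python
import math

def proving_ring_strain(force, radius, youngs_modulus, width, thickness):
    """Strain sensed by the gauges: eps = 1.08 * F * R / (E * b * t^2)."""
    return 1.08 * force * radius / (youngs_modulus * width * thickness**2)

def proving_ring_deflection(force, diameter, youngs_modulus, moment_of_inertia):
    """Deflection along the load line: dy = (pi/2 - 4/pi) * F * d^3 / (16 * E * I)."""
    return (math.pi / 2 - 4 / math.pi) * force * diameter**3 / (16 * youngs_modulus * moment_of_inertia)

# Assumed steel ring: F = 10 kN, R = 0.1 m, b = 0.025 m, t = 0.012 m, E = 200 GPa
E = 200e9
b, t, R = 0.025, 0.012, 0.100
I = b * t**3 / 12                       # moment of inertia of the rectangular cross-section
print(proving_ring_strain(10e3, R, E, b, t))          # dimensionless strain
print(proving_ring_deflection(10e3, 2 * R, E, I))     # metres
```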
Load Cells
Elastic members play a crucial role in force measurement systems by facilitating displacement
assessment. An elastic member transforms into a load cell when integrated with strain gauges to
measure force. In load cells, elastic members are primary transducers, while strain gauges are secondary
transducers. Load cells adopt an indirect method for force measurement, wherein force or weight is
converted into an electrical signal. These devices are extensively utilized across various industries for
tasks involving force measurement.
A load cell typically comprises four strain gauges, with two dedicated to measuring longitudinal strain
and the other two for transverse strain. These strain gauges are strategically positioned at 90° angles to
each other. In this configuration, two gauges experience tensile stresses while the remaining two endure
compressive stresses. Under no-load conditions, the resistance across all four gauges is uniform,
resulting in equal potentials across terminals B and D. Consequently, the Wheatstone bridge achieves
balance, yielding zero output voltage.
The strain gauges measure the induced strain when the specimen is stressed due to an applied force.
Gauges R1 and R4 gauge the longitudinal (compressive) strain, while gauges R2 and R3 assess the
transverse (tensile) strain. As a result of this strain, voltage discrepancies arise across terminals B and D,
causing the output voltage to fluctuate. This variation serves as an indicator of the applied force after
calibration. The following relation expresses the compressive longitudinal strain within the load cell:
ε₁ = F / (AE)
Strain gauges 1 and 4 undergo this particular strain, while strain gauges 2 and 3 experience a strain
described by the subsequent equation:
ε₂ = νF / (AE)
Here, ν is the Poisson's ratio.
This arrangement of mounting gauges effectively compensates for the effects of bending and
temperature variations. Symmetric mounting of the gauges ensures complete compensation, providing
accurate and reliable measurements across different operating conditions.
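The sketch below ties these relations together for a hypothetical column-type load cell, computing the longitudinal and transverse strains and an approximate full-bridge output. The geometry, material constants, excitation voltage, and gauge factor are all assumed for illustration only.

```python
def load_cell_strains(force, area, youngs_modulus, poissons_ratio):
    """Longitudinal strain F/(A*E) and transverse strain v*F/(A*E) seen by the gauges."""
    eps_long = force / (area * youngs_modulus)
    eps_trans = poissons_ratio * eps_long
    return eps_long, eps_trans

def full_bridge_output(v_excitation, gauge_factor, eps_long, eps_trans):
    """Small-strain approximation: Vo/Vex = (GF/4) * sum(+/- eps_i).
    With two longitudinal and two transverse (Poisson) gauges this becomes GF*(eps_long + eps_trans)/2."""
    return v_excitation * gauge_factor * (eps_long + eps_trans) / 2

# Assumed example: 50 kN on a 5 cm^2 steel column, E = 200 GPa, v = 0.3, 10 V excitation, GF = 2
e1, e2 = load_cell_strains(50e3, 5e-4, 200e9, 0.3)
print(e1, e2, full_bridge_output(10.0, 2.0, e1, e2))  # strains and output voltage in volts
```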
2.4 TORQUE MEASUREMENT
Torque (T) provides essential load information for analysing mechanical systems' stress or deflection.
Torque measurement is crucial in engineering applications, providing essential load information for
analyzing stress and deflection in mechanical systems. Torque (T) is calculated by multiplying the
applied force (F) by the known radius (r), expressed as T = Fr (in N m). Moreover, torque measurement
is vital for determining mechanical power, which denotes the power required to operate or develop a
machine. Mechanical power (P) is calculated using the formula P = 2πNT, where N represents the
angular speed in revolutions per second. Devices used for torque measurement, known as
dynamometers, find widespread application in various machinery, including internal combustion
engines, steam turbines, pumps, compressors, and other rotating equipment. The selection of a
dynamometer depends on the nature of the machine being tested. Absorption dynamometers are suitable
for machines that can absorb the produced power or torque. Conversely, driving dynamometers are used
for machines that function as power absorbers and are capable of driving the machine. Transmission
dynamometers, positioned within or between machines, sense torque at specific locations and are also
known as torque meters. Each type of dynamometer offers distinct advantages tailored to specific torque
measurement requirements.
Prony Brake Dynamometer
This mechanical device relies on dry friction to convert the engine's mechanical energy into heat. The
figure shows two wooden blocks mounted on opposite sides of the engine's flywheel. The flywheel is
connected to the shaft whose power is being measured. The Prony brake depicted above is composed of
several components, including a wooden block, frame, rope, brake shoes, and a flywheel. It functions on
the principle of converting power into heat through dry friction. The frictional resistance between the
brake shoes and the flywheel amplifies as the rope is tightened, thereby increasing the braking effect. To
further augment the frictional force, spring-loaded bolts are integrated to tighten the wooden block
against the flywheel. This arrangement enhances the braking performance of the Prony brake by
maximizing the contact between the brake components and the flywheel, effectively dissipating the
kinetic energy as heat through friction.
All the power absorbed by the Prony brake is converted into heat, necessitating cooling measures. The
formula to calculate brake power (Pb) is given by:
Brake Power (Pb) = 2πNT
where T = weight applied (W) × distance (l)
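A small sketch of this calculation with illustrative readings (not taken from the text): the torque is obtained from the dead weight and lever length, and the brake power from the shaft speed.

```python
import math

def brake_power_w(weight_n, lever_arm_m, speed_rpm):
    """Prony brake: T = W * l, Pb = 2*pi*N*T, with N in revolutions per second."""
    torque = weight_n * lever_arm_m
    n_rps = speed_rpm / 60.0
    return 2 * math.pi * n_rps * torque

# Assumed readings: 200 N dead weight on a 0.75 m arm at 1500 rpm
print(brake_power_w(200.0, 0.75, 1500.0))  # about 23.6 kW
```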
The Prony brake, while cost-effective, suffers from inherent instability, posing challenges in adjusting
or maintaining specific loads. Several limitations associated with the Prony brake dynamometer include:
1. Variation in Coefficients of Friction:
As the wooden blocks undergo wear over time, the coefficients of friction between the blocks and
the flywheel can fluctuate. This necessitates frequent tightening of the clamps to maintain stability,
especially during prolonged periods of measuring large powers.
2. Effect of Temperature Rise:
Elevated temperatures can lead to a decrease in friction coefficients, posing a risk of brake failure. It
is crucial to implement cooling measures to mitigate temperature rises. One common method
involves supplying water into the hollow channel of the flywheel to facilitate cooling and maintain
friction coefficients within safe limits.
Eddy Current Dynamometer
Unlike mechanical counterparts, the eddy current dynamometer minimizes losses by eliminating
physical contact between windings and excitation.
Its compact size and compatibility render it suitable for a myriad of applications. In certain
scenarios, such as testing the performance of internal combustion engines, the eddy current
dynamometer serves as a load. This section provides an overview of the functionality and
applications of an eddy current dynamometer.
Construction:
The eddy current dynamometer comprises an outer frame, known as the stator, which serves as the
stationary component of the device. The stator houses windings placed within stator slots.
Energizing these stator windings generates a magnetic field within the coils, termed the stator
magnetic field. In high-rated machines, three-phase windings are commonly employed in the stator
slots. The stator windings, typically composed of copper, are enveloped by a magnetic material like
cast iron or silicon steel for delicate applications. Positioned beneath the stator coils is the rotating
member, referred to as the rotor, mounted on a shaft to facilitate rotation. Rotor windings are housed
within rotor slots, with three-phase configurations utilized in heavy-duty machines.
The rotor must be coupled to the prime mover to receive mechanical input. A DC supply energizes
the stator windings, with rectifier units employed for larger machines. Cooling and insulation of the
stator windings in heavy machines are accomplished using oil to dissipate heat effectively. A current
meter integrated into the system measures the produced current and induced torque. A pointer,
linked via an arm to the stator, gauges the torque generated in the rotor. Leveraging this torque value
and the known speed, the power generated in the machine can be calculated.
Working
The functioning of an eddy current dynamometer hinges on Faraday's Law of electromagnetic
induction. As per this principle, when there is movement between conductors and a magnetic field, it
induces an electromotive force (emf) in the conductors. This emf, referred to as dynamically induced
emf, is utilized within the dynamometer by exciting the stator poles with a direct current (DC)
supply.
Upon the activation of the DC supply, the stator coils receive energy, establishing a magnetic field
within the stator. In a three-phase setup, this excitation creates a three-phase rotating magnetic field
within the stator coils. Meanwhile, as the prime mover rotates, the rotor coils interact with this
magnetic field. It's noteworthy that the stator magnetic field remains fixed in this arrangement, as the
DC excitation induces a static magnetic field. Consequently, an emf is induced as the rotor coils
intersect the static stator magnetic field. This induction arises from the static nature of the magnetic
field while the conductors undergo rotation, leading to a relative displacement between the magnetic
field and the conductors.
Hydraulic Dynamometer
The hydraulic dynamometer functions as an absorption-type dynamometer, relying on fluid friction for
its operation, thereby dissipating mechanical energy. This characteristic leads to its alternative
designation as a fluid friction dynamometer. Hydraulic dynamometers feature semicircular vanes
positioned within both the rotor and stator components. Water circulation induces a toroidal vortex
around these vanes, generating a torque reaction within the dynamometer casing. This reaction is
counteracted by the dynamometer and quantified using a load cell. Structurally, hydraulic
dynamometers closely resemble fluid flywheels designed to gauge the frictional force between impeller
vanes and a moving fluid.
The hydraulic dynamometer comprises a rotating disk connected to the driving shaft of the test machine.
The disk features semi-elliptical grooves through which water flows. A stationary casing, mounted on
antifriction bearings or trunnions, houses a braking arm and a balance system that allows the casing to
revolve freely within limits set by the braking arm.
Similarly, casing also contains semi-elliptical grooves or recesses. These two components are arranged
so that the rotating disk rotates within the casing. The schematic of the hydraulic dynamometer is
depicted in Fig 2.27. The semi-elliptical grooves on the disk align with corresponding semi-elliptical
recesses on the casing, forming a chamber through which liquid flows. As the driving shaft of the prime
mover rotates, the liquid follows a helical path in the chamber, creating vortices and eddy currents.
These currents cause the casing of the dynamometer to rotate in the direction of the shaft.
The braking action is adjusted and regulated by altering the distance between the casing and disk or by
modifying the water amount and pressure. Maximum power absorption occurs when the casing is full,
while minimum absorption is achieved with minimal liquid. The total power absorption of this device
varies as follows:
1. The cube of the rotational speed
2. The fifth power of the rotating disk diameter
The absorbing element incorporates a force-sensing component, such as a load cell, positioned at the
end of the arm with a radius "r". The formula determines the exerted torque:
Torque (T)=F × r
Pressure Measurement
Pressure is a foundational element in numerous facets of daily life, influencing phenomena ranging from
atmospheric pressure to blood pressure, gauge pressure, and vacuum conditions. A comprehensive
comprehension of pressure and its quantification proves indispensable across diverse domains. At its
core, pressure denotes the force exerted by a medium, typically a fluid, per unit area. In instrumentation,
pressure measurement often entails assessing differential pressure, commonly known as gauge pressure,
which signifies the force exerted per unit area by liquids, gases, or solids.
Expressed mathematically, pressure (P) is derived from the formula:
P = F/A
A and F signify area and force. Pressure can be quantified using various units such as atmospheres and
bars or by referencing the height of a liquid column. Standard atmospheric pressure, typically measured
at sea level, is conventionally standardized as 760 mmHg. It is worth noting that atmospheric pressure
diminishes with increasing altitude.
Measurement of pressure is significant for several reasons:
1. It is a descriptive quantity of a system.
2. It is a crucial process parameter.
3. Pressure difference is often used to measure fluid flow rate.
4. The range of pressure encountered in practice spans nearly 18 orders of magnitude, from the lowest to
the highest pressures.
Gauge pressure measurements solely focus on deviations from the atmospheric baseline, providing essential data for tasks such as fluid system
monitoring, tire pressure assessment, and hydraulic system operation.
3. Differential Pressure:
Differential pressure is a fundamental concept in fluid mechanics and engineering, denoting the
difference in pressure between two distinct points within a system. This measurement scale is
pivotal in assessing flow rates, detecting obstructions or blockages, and determining the efficiency
of various mechanical systems. Differential pressure sensors are commonly employed in
applications such as HVAC systems, filtration processes, and industrial automation, where precise
pressure differentials are crucial for optimal performance and safety.
McLeod Gauge
Developed by Herbert McLeod in 1874, the McLeod gauge stands as a cornerstone in vacuum
measurement, particularly within the pressure range of 10 to 10⁻⁴ Torr (1 Torr = 133.322 Pa).
Renowned as an absolute standard, this device, also referred to as a compression gauge, operates by
compressing the low-pressure gas whose pressure is under assessment. The essence of its operation lies
in compressing the gas within a capillary tube, subsequently measuring the resulting height of a mercury
column to determine the vacuum level.
Functioning in accordance with Boyle's law, the McLeod gauge underscores the principle that
compressing a known volume of low-pressure gas to a higher pressure facilitates the calculation of the
initial pressure by quantifying the resultant volume and pressure relationship. This foundational
technique has positioned the McLeod gauge as an indispensable tool in various scientific and industrial
applications requiring precise vacuum measurements.
The following fundamental relation represents Boyle's law:
P₁ = P₂V₂ / V₁
The McLeod gauge, a fundamental instrument in vacuum measurement, features a distinctive structural
design comprising a capillary tube A, sealed at its upper end, and two interconnected limbs B and C that
are integrated into the vacuum system. Limbs A and B are characterized by capillary tubes of identical
diameters, ensuring uniformity, while limb C possesses a wider diameter to mitigate capillary errors and
enhance accuracy. During operation, the movable reservoir is initially lowered, allowing the mercury
column to descend below the opening level O, establishing a connection between all capillaries and
limbs with the unknown pressure source. Subsequent elevation of the movable reservoir results in
mercury filling the bulb, causing an upward displacement of the mercury level within capillary tube A.
This action compresses the gas confined within the system, adhering to Boyle's law. Practically, the
mercury level in capillary tube B is adjusted to align with that of limb C, serving as the zero level
reference on the scale. The disparity in levels between the two mercury columns in limbs A and B
directly reflects the trapped pressure, facilitating straightforward readings from the scale. Through this
meticulously designed mechanism, the McLeod gauge provides precise and reliable measurements of
vacuum pressures essential for various scientific and industrial applications.
This experiment leverages Boyle's Law to ascertain the unknown pressure (P1) of a gas within a sealed
system. At constant temperature, the product of pressure and volume for an ideal gas remains constant.
The equation expresses this relationship:
P₁V₁ = P₂V₂
V₁ represents the volume of gas contained in capillary tube A above level O before compression.
P₁ signifies the unknown pressure of the gas within the system.
P₂ denotes the pressure of the gas confined in the compressed limb, typically limb B. V₂ stands for the
volume of the gas in the sealed limb after compression. The volume of the gas after compression (V₂)
can be calculated using the following equation:
V₂ = ah
h = P₂ − P₁
By substituting the expression for h into the equation for V2, we can establish a relationship between
the known variables (a, V₁, P₂) and the unknown pressure (P1). This will lead to the final equation(s)
used to solve for P1.
P₁V₁ = P₂ah
P₁V₁ = (h + P₁)ah
P₁V₁ = ah² + ahP₁
P₁(V₁ − ah) = ah²
Hence
P₁ = ah² / (V₁ − ah)
P₁ ≈ ah² / V₁ when ah ≪ V₁
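To illustrate the compression calculation, the sketch below evaluates both the exact and simplified expressions for an assumed capillary cross-section, bulb volume, and mercury-column reading; the numbers are hypothetical.

```python
def mcleod_pressure(a_mm2, h_mm, v1_mm3):
    """McLeod gauge: exact P1 = a*h^2/(V1 - a*h) and approximation a*h^2/V1.
    With a in mm^2, h in mm, and V1 in mm^3, the result is in mm Hg."""
    exact = a_mm2 * h_mm**2 / (v1_mm3 - a_mm2 * h_mm)
    approx = a_mm2 * h_mm**2 / v1_mm3
    return exact, approx

# Assumed gauge: capillary area 1 mm^2, trapped volume 100,000 mm^3, reading h = 20 mm
print(mcleod_pressure(1.0, 20.0, 100_000.0))  # both about 0.004 mm Hg
```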
McLeod gauges excel at measuring low pressures. This is achieved by designing them with a large bulb
volume (V₁) compared to the cross-sectional area (a) of the capillary tube. The ratio of V₁ to a is called
the compression ratio. However, there are trade-offs to consider:
1. Capillary Diameter: A minimal capillary diameter (a) can lead to mercury sticking to the walls,
limiting the achievable compression ratio.
2. Bulb Size: While a larger bulb (V1) allows for measuring lower pressures, it also increases the
weight of the mercury column, potentially limiting the compression ratio as well.
Despite their usefulness in calibrating other high-vacuum gauges, McLeod gauges have significant
limitations. The presence of condensable vapours in the gas being measured can introduce errors.
This is because Boyle's Law, which forms the basis of the gauge's operation, may not apply
accurately to these vapours.
Applications
Transducers and strain gauges are integral to force, torque, and pressure-measuring instruments across
industries. Capacitive transducers find extensive application in touch screens integrated into electronic
devices such as smartphones, tablets, and touch-sensitive displays. These transducers detect alterations
in capacitance induced by user touch, facilitating precise and responsive interaction with the device
interface. In proximity sensing applications, capacitive transducers play a vital role in detecting the
presence or absence of objects without physical contact. They are deployed in various devices,
including automatic faucets, motion-activated lighting systems, and proximity switches, which are
utilized in industrial automation setups. Utilizing capacitive transducers, humidity sensors accurately
measure relative humidity levels in diverse environments. Fluctuations in humidity induce variations in
capacitance, enabling precise determination of humidity levels crucial for applications such as weather
monitoring, HVAC systems, and industrial process control.
Inductance transducers are prominently employed in non-destructive testing methodologies like eddy
current testing. They gauge alterations in inductance triggered by the interaction between
electromagnetic fields and conductive materials, facilitating the detection of surface defects, cracks, or
material thickness variations in metal components. In position and displacement measurement
applications, inductance transducers are utilized, notably in linear and rotary encoders. They detect
changes in inductance stemming from the movement of a conductive target, providing precise and
dependable position feedback essential for machinery, robotics, and automotive systems. Inductance
transducers are integral components of metal detectors utilized across various sectors including security
screening, mining operations, and manufacturing quality control processes. They enable the detection of
metal objects by analyzing variations in inductance.
Resistive transducers, such as resistance temperature detectors (RTDs) and thermistors, are extensively
utilized for temperature sensing in industrial, automotive, and consumer electronics domains.
The load cells are essential in weighing systems from labs to factories, converting force into electrical
signals for accurate measurement. Materials testing machines utilize them to assess mechanical
properties precisely, aiding quality control and R&D. Additionally, force feedback systems in robotics
rely on them for precise environmental interaction. Torque-measuring instruments employ transducers
and strain gauges for rotational force measurement. In automotive engineering, dynamometers use them
for engine torque measurement, while industrial machinery benefits from them for monitoring and adjusting
rotational forces. Prony brake dynamometers measure engine torque output by applying resistance to
assess brake performance metrics. Eddy Current Dynamometer measures torque, speed, and power
output in high-speed electric motors. It is used to characterize material properties like strength and
stiffness. Hydraulic dynamometers simulate road loads to evaluate vehicle performance. They assess
torque, speed, and power output in hydraulic machinery.
McLeod gauges measure ultra-low pressures in scientific research and semiconductor manufacturing.
They monitor gas pressures in applications like gas chromatography and semiconductor processing.
Unit Summary
This unit comprehensively examines transducers and their crucial role in strain measurement, covering
a wide range of topics necessary for understanding and effectively applying these devices. The
exploration commences with an exhaustive analysis of the characteristics and classifications of
transducers, delving into the intricacies of various types, such as two-coil self-inductance and
piezoelectric transducers. Through detailed discussions and illustrative examples, learners gain
comprehensive insights into the principles, functionalities, and applications of transducer variants.
Following the exploration of transducers, the unit delves into strain measurement, offering an extensive
overview of strain gauges. Topics covered include the classification of strain gauges, mounting
techniques, and the configuration of two-element and three-element rosettes. The learners thoroughly
understand the principles and methodologies underlying strain measurement, empowering them to
effectively utilize strain gauges in various applications. The unit progresses to explore the applications
of transducers in measuring force, torque, and pressure. Engaging discussions encompass common
instruments such as spring balances, proving rings, load cells, Prony brakes, eddy current
dynamometers, hydraulic dynamometers, and McLeod gauges, helping learners gain valuable
insights into the principles, operation, and applications of these instruments in diverse engineering
contexts.
3. Speed Measurement
Speed, a fundamental aspect of motion, represents the pace at which an object shifts its position over
time. Notably, the assessment of rotational speed has gained prominence over linear speed
measurement. It is measured in a variety of ways. Common formats include linear speed, which is
commonly represented in meters per second (m/s), and angular speed, which is commonly expressed
in radians per second (rad/s) or, for rotating systems, rotations per minute (rpm).
Continuous linear speed measurement mostly depends on angular speed measurement. Determining
the linear speed of the reciprocating components in mechanical systems is made possible by having
a thorough understanding of rotational velocity. Rotational speed measurement is important in
engineering and related industries because of the angular and linear speed interdependence.
Tachometer
Angular measurements are facilitated by a tool known as a tachometer. The definitions attributed to
a tachometer encompass its pivotal role in measurement:
i. A device for measuring angular velocity, usually of a shaft, measures the number of revolutions in
a specified amount of time or shows the number of rotations per minute.
ii. A device that shows rotational speed constantly or gives a consistent average speed reading at
quickly repeated intervals of time.
Classification: Tachometers are broadly categorized into two main types: Mechanical and Electrical
variants. The selection of the appropriate tachometer hinges on several factors including cost
considerations, the necessity for portability, desired accuracy levels, the range of speeds to be
measured, and the dimensions of the rotating component.
1. Mechanical Tachometer
Mechanical Tachometers rely solely on mechanical components and movements to gauge speed.
These devices, often known as revolution counters or speed counters as shown in figure 3.1,
operate with a simple yet effective mechanism. They utilize a worm gear, serving as both the
connection to the shaft and the conduit for speed transmission.
When the shaft rotates, it drives the worm gear, which in turn moves a spur gear. This spur gear
is connected to a pointer on a meticulously calibrated dial. As the gears rotate, the pointer
indicates the number of revolutions the input shaft completes within a specific time frame.
It's important to note that this method requires a separate timer to precisely measure time
intervals. Consequently, the revolution counter provides an average rotating speed rather than
real-time updates. However, with proper design and manufacturing, these counters can offer
satisfactory speed measurements, typically accurate up to speeds of 2000-3000 revolutions per
minute (rpm).
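Since the revolution counter only yields an average speed over a separately timed interval, a trivial sketch of the calculation (with made-up readings) is:

```python
def average_rpm(revolutions, interval_seconds):
    """Average rotational speed from a revolution counter and a separate timer."""
    return revolutions * 60.0 / interval_seconds

# Assumed readings: 250 revolutions counted over a 6-second interval
print(average_rpm(250, 6.0))  # 2500 rpm
```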
The need to synchronize the starting of a watch and a counter gave rise to the invention of the
tachoscope as shown in Figure 3.2. This device combines a revolution counter with a timing
mechanism, allowing both to start together. As the contact point makes contact with the rotating
shaft, both parts move simultaneously. As long as the contact point is attached to the shaft, the
tachoscope will continue to function. The rotation speed can be ascertained by examining the
counter and timer readings. Even at 5000 revolutions per minute (rpm), the tachoscope can
measure speeds with accuracy.
A stopwatch can also start by pressing the starting button. The revolution counter stops
automatically after a set time, typically three or six seconds. The dial accurately displays the
rotational speed in revolutions per minute (rpm), and the device shows the average speed over a
short period. These speed-measuring tools are used for speeds ranging from 20,000 to 30,000
rpm, with an accuracy of about 1% of the full scale. By observing the pointer's position, the
speed of the shaft can be measured with ease.
These devices, known as centrifugal tachometers, can also gauge linear speed by adding specific
attachments to the spindle. To cover a wide range of speeds,
manufacturers often produce them with multiple range options. The device can smoothly switch
between these ranges by utilizing a gear train between the fly ball shaft and the spindle.
However, it's crucial to select the appropriate speed range carefully, as exceeding the device's
capacity can result in significant damage. It's also important to note that altering the range while
the instrument is in use is not advisable. Centrifugal tachometers are highly favored for their
accuracy, typically around ±1%, and are commonly used to monitor rotational speeds of up to
40,000 rpm. Centrifugal tachometers surpass revolution-counter-stopwatch mechanisms in this
regard, as the latter cannot provide real-time speed information.
Eddy current tachometers excel in measuring rotational speeds, accurately capturing speeds
up to 12,000 rpm with a precision of ±3%.
Displacement Measurement
In the world of measurements, a key tool for figuring out how far something moves in a straight
line is called a displacement transducer, or DT. Picture following an object as it moves along a
straight path; that is what we mean by linear displacement.
The main job of a displacement sensor, also called a displacement gauge, is to tell us how far
something moves compared to a fixed point. These sensors are used for measuring dimensions like
width, height, and thickness.
Displacement is a really important thing because it affects force, acceleration, torque, and speed. To
measure displacement, transducers are used, which come in different types like electrical, optical,
pneumatic, and mechanical. Sometimes, combined techniques are used together to get an electrical
output.
For instance, optical methods utilize photo-detectors to convert what they observe into electrical
signals, such as voltage or current. This is one reason why combining mechanical and optical
techniques is common. Displacement measurement can be done directly or indirectly, but the
indirect method is used widely, especially when seeking related factors like force or acceleration.
Various methods exist for displacement measurement, though electrical signals from these
transducers typically rely on displacement as a fundamental parameter. Some commonly utilized
methods include:
Linear Potentiometer Transducer
Linear Motion Variable Inductance Transducer
Proximity Inductance Transducer
Capacitive Transducer
Linear Variable Differential Transformer (LVDT)
Piezoelectric Transducer
Photo-Electric Transducers
Each method has its strengths and applications, contributing to the diverse toolkit of metrology and
measurement.
Linear Variable Differential Transformer (LVDT)
Its name, LVDT, highlights its unique function: it measures the variation or difference in output
across its secondary coil. Compared to other types of inductive transducers, the LVDT stands out for
its exceptional precision and reliability.
Construction of LVDT
The transformer consists of a primary winding (P) and two secondary windings (S1 and S2) wound
around a hollow cylindrical former containing a core, as illustrated in Figure 3.6.
Both secondary windings, S1 and S2, are positioned on either side of the primary winding and
contain an equal number of turns.
When an alternating current (AC) source is connected to the primary winding, it generates a flux in
the air gap, inducing voltages in the secondary windings.
A movable soft iron core is placed within the former, and the displacement to be measured is linked
to this core. Typically, the iron core possesses high permeability, aiding in reducing harmonics and
enhancing the LVDT's sensitivity.
To shield from electromagnetic and electrostatic interference, the LVDT is often housed within a
material like stainless steel.
The output of the LVDT is obtained by measuring the voltage difference between the two secondary
windings.
Working Principle
The primary of an LVDT is linked to an AC power source, resulting in the generation of alternating
currents and voltages in its secondary coils.
Two secondary coils, S1 and S2, produce voltages e1 and e2 respectively. The differential output,
e_out, is calculated as the difference between e1 and e2, expressing the LVDT's operational principle
(Figure 3.7).
Three distinct cases elucidate the functioning of the LVDT based on the position of its core:
CASE I: Null Position (No Displacement)
When the core is in its null position, equal flux links both secondary windings, inducing equal emf
in both coils. Consequently, e_out equals zero, signifying no displacement.
The relationship between output voltage and core displacement follows a linear curve, demonstrating
that the output voltage varies proportionally with the core's movement.
Noteworthy points regarding the magnitude and polarity of induced voltage in an LVDT:
The voltage change, whether positive or negative, correlates directly with the core's linear motion.
Monitoring the output voltage's increase or decrease enables the determination of the direction of
motion.
The output voltage of an LVDT maintains a linear relationship with core displacement.
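Because the output is linear in the core displacement, converting a measured differential voltage back to displacement only requires the calibrated sensitivity. The sketch below assumes a sensitivity of 40 V per mm (the typical figure quoted in the advantages list that follows) and made-up secondary readings.

```python
def lvdt_displacement_mm(e1_volts, e2_volts, sensitivity_v_per_mm):
    """Core displacement from the differential secondary voltage e_out = e1 - e2."""
    e_out = e1_volts - e2_volts
    return e_out / sensitivity_v_per_mm  # the sign indicates the direction of core motion

# Assumed secondary readings with a 40 V/mm sensitivity
print(lvdt_displacement_mm(2.30, 1.90, 40.0))  # +0.01 mm, core displaced toward S1
```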
Advantages
Extensive Measurement Range: LVDTs boast an impressive range for displacement measurement,
spanning from 1.25 mm to 250 mm, making them versatile for various applications.
Friction-Free Operation: Due to the core's movement within a hollow former, LVDTs experience
minimal frictional losses, ensuring accurate displacement measurement.
High Output and Sensitivity: LVDTs deliver a robust output without requiring additional
amplification, thanks to their high sensitivity, typically around 40 V/mm.
Minimal Hysteresis: LVDTs exhibit low hysteresis, resulting in excellent repeatability across
different operating conditions.
Efficient Power Usage: With power consumption around 1W, LVDTs are notably energy-efficient
compared to other transducers.
Seamless Electrical Signal Conversion: LVDTs effortlessly convert linear displacement into
electrical voltage, simplifying signal processing.
Disadvantages
Shielding Against Stray Magnetic Fields: LVDT (Linear Variable Differential Transformer) is
highly sensitive to stray magnetic fields, necessitating the implementation of a protective setup to
shield it from such interference.
Susceptibility to Vibrations and Temperature: The performance of LVDT can be significantly
influenced by vibrations and temperature variations.
Despite these challenges, LVDTs offer distinct advantages over other types of inductive transducers,
making them a preferred choice in many applications.
Application of LVDT
LVDT finds its utility in measuring displacements spanning from fractions of millimeters to several
centimeters.
It serves as a primary transducer, directly transforming displacement into an electrical signal.
In certain scenarios, LVDT assumes the role of a secondary transducer.
For instance, consider the Bourdon tube, which initially converts pressure into linear displacement.
Subsequently, the LVDT translates this displacement into an electrical signal.
Following calibration, this signal yields accurate readings of the fluid pressure.
Flow Measurement
In pressurized pipes, it's important to accurately measure the flow rate of fluids for various purposes
such as controlling industrial processes and monitoring the rate of flow within the pipes. One
commonly used method for this is through a type of instrument called a differential pressure flow
meter. These meters come in different forms like venturi, flow nozzle, and orifice meters.
Each of these meters works by measuring the pressure difference between the natural flow of the
fluid and the flow through a narrowed section in the pipe. By detecting this pressure difference, the
flow rate of the fluid can be calculated. Essentially, a flow meter is a tool that helps us understand
how much or how fast a fluid is moving through a pipe, whether the pipe is open or closed.
Flow-measuring devices are commonly classified into four main types.
Rotameters
Orifice meters, venturi meters, and flow nozzles are instruments used for measuring fluid
flow. They work by maintaining a constant obstruction area while allowing the pressure drop
to change according to the flow rate.
In simpler terms, these devices keep the blockage size constant while the pressure loss varies
based on how fast the fluid is flowing.
On the other hand, the rotameter(as shown in Figure 3.9) functions differently. It acts as a
variable area meter, where the obstruction area changes as the fluid flows through it.
However, for accurate measurement, rotameters require vertical pipelines.
The functioning of a rotameter relies on fundamental principles such as buoyancy, drag, and
gravity acceleration to measure fluid flow.
A typical rotameter consists of a tapered glass tube filled with liquid and a floating device.
When the setup is introduced into a pipeline and fluid starts flowing, two main changes
occur: the pressure drop (ΔP) across the float shifts, and the float moves.
According to the drag equation, ΔP changes as the square of the fluid flow rate.
To keep this pressure drop essentially constant despite changes in flow, the flow area is allowed to vary,
which is why the rotameter tube is tapered.
As the float moves upward, it eventually reaches a point of balance.
The scale on the glass, which measures the float's displacement, directly correlates with the
fluid flow rate, following the relation Q = K(At - Af), where At is the tube area at the float
position and Af is the cross-sectional area of the float.
Some Rotameters have flow rate values directly marked on the glass, enabling immediate
measurement.
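The relation Q = K(At - Af) can be evaluated directly from the float position. A minimal Python sketch, assuming an illustrative meter constant, float size, and tube taper (none taken from a real instrument):

import math

# Rotameter indication from the float height h, using Q = K * (At - Af).
K = 0.6                        # meter constant from calibration (assumed)
D_FLOAT = 0.020                # float diameter in m (assumed)

def flow_rate(h):
    d_tube = 0.020 + 0.10 * h                   # assumed taper: 20 mm bore widening with height
    area_tube = math.pi / 4.0 * d_tube ** 2     # tube area at the float position
    area_float = math.pi / 4.0 * D_FLOAT ** 2   # float cross-sectional area
    return K * (area_tube - area_float)         # flow is proportional to the annular area

print(flow_rate(0.05))         # indicated flow for a 50 mm float rise (arbitrary units)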
Applications:
1) Measurement of Corrosive Fluid Flow Rates: Useful for determining flow rates of corrosive
liquids, gases, or vapors.
2) Ideal for Low Flow Rates: Particularly effective in measuring low flow rates accurately.
Advantages:
1) Visual Flow Conditions: Flow conditions are easily observable, aiding in monitoring and
assessment.
2) Linear Flow Rate Functionality: Flow rate corresponds directly to the position of the float,
facilitating uniform flow scales.
3) Versatile Fluid Measurement: Capable of measuring flow rates of liquids, gases, and vapors with
precision.
4) Adjustable Capacity: Modification of the float, tapered tube, or both allows for customization of the
rotameter's capacity.
Limitations:
1) Vertical Installation Required: Installation must be vertical for accurate measurements.
2) Impractical for Moving Objects: Unsuitable for measuring flow in moving objects or environments.
3) Visibility Issues with Colored Fluids: Float may not be visible when opaque or colored fluids are
used.
4) Costly for High Pressure/Temperature Fluids: Expense increases for measurements involving high-
pressure or high-temperature fluids.
5) Incapability with Solid-Containing Fluids: Unsuitable for fluids with a high percentage of solids in
suspension, due to potential obstruction issues.
Turbine Meter
Gases with very low flow rates and liquids can be effectively measured using the turbine flow
meter principle.
The turbine flow meter (as shown in Figure 3.10) operates based on a simple principle: a turbine
wheel, or multi-bladed rotor, is positioned at a 90-degree angle to the flow of liquid or gas.
A shaft support portion ensures stability within the flow meter housing, while ball or sleeve
bearings support the rotor, allowing it to freely spin on its axis.
As the liquid or gas flows, it hits the turbine blades (rotor), exerting force that drives the rotor's
rotation.
The rotational speed of the rotor is directly proportional to the fluid velocity, hence providing a
measure of the volumetric flow rate.
Monitoring the speed of rotation is achieved through a magnetic pickup fitted on the outside of
the meter housing.
The magnetic pickup consists of a permanent magnet with coil windings, placed close to the
rotor within the fluid channel. Each passing rotor blade generates a voltage pulse, proportional to
the flow rate.
Digital techniques allow for manipulation, totalization, and difference of the electrical voltage
pulses, ensuring minimal error from pulse generation to final reading.
The K factor, representing the number of pulses generated per unit volume, along with the time constant (Tk),
pulse frequency (f), and volumetric flow rate (Q), are the essential parameters for calibration and
measurement; in the simplest form the flow rate follows from the pulse frequency as Q = f / K (see the sketch below).
Turbine flow meters offer exceptional precision and reproducibility, with accuracies ranging from ±
0.25 to ± 0.5%, and precision as fine as ± 0.02%.
Typically offering a rangeability (turndown ratio) of 10:1 to 20:1, turbine meters can exceed a 100:1 range
in military applications.
Available in various sizes, from 6.35 to 650 mm, with liquid flow ranges spanning from 0.1 to
50,000 gallons per minute.
Primarily utilized in military applications, turbine flow meters also find applications in petroleum
blending systems, aerospace, and airborne operations for energy-fuel and cryogenic flow
measurements.
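A minimal Python sketch of the K-factor relation noted above (Q = f / K); the K value is an assumed calibration figure, not a property of any specific meter:

# Turbine meter: with K pulses generated per unit volume, flow follows from pulse frequency.
K_FACTOR = 1200.0              # pulses per litre, from calibration (assumed)

def volumetric_flow(pulse_frequency_hz):
    """Flow rate in litres per second for a measured pulse frequency in Hz."""
    return pulse_frequency_hz / K_FACTOR

def totalised_volume(pulse_count):
    """Total volume in litres after counting pulse_count pulses."""
    return pulse_count / K_FACTOR

print(volumetric_flow(600.0))      # 0.5 L/s
print(totalised_volume(90000))     # 75.0 L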
Advantages
1. Precision: Turbine flow meters offer high accuracy in measuring flow rates.
2. Consistency: They provide excellent repeatability and can measure a wide range of flow rates
reliably.
3. Low pressure drop: These meters maintain a fairly low pressure drop, minimizing energy loss
in the system.
4. Easy installation and maintenance: Turbine flow meters are straightforward to install and
require minimal maintenance, reducing operational hassles.
5. Versatility: They exhibit good temperature and pressure tolerance, making them suitable for
various operating conditions.
6. Viscosity compensation: Turbine flow meters can be adjusted to account for changes in fluid
viscosity, ensuring accurate readings across different fluid types.
Disadvantages:
1) Costly investment: Turbine flow meters come with a higher initial cost, which might be prohibitive
for some applications.
2) Limited suitability for slurry: These meters are not ideal for measuring flow rates in slurry
applications due to potential accuracy issues.
3) Challenges with non-lubricating fluids: Turbine flow meters may encounter operational problems
when used with fluids that lack lubricating properties, potentially affecting accuracy and lifespan.
3.4 TEMPERATURE MEASUREMENT
Temperature stands out as one of the most frequently monitored and controlled variables in
industrial processes due to its significance.
Its importance is highlighted by its involvement in various chemical processes, heat transfer
mechanisms, and principles of thermodynamics.
One straightforward definition of temperature is "the level of heat or coldness of an object or its
surroundings, measured using a specific scale."
Regardless of the scale or scope of a system, temperature remains a crucial parameter to consider.
Achieving thermodynamic equilibrium between the system and the temperature- measuring device is
essential for accurate temperature measurement.
The physical properties of the sensor are influenced by temperature fluctuations, and these
alterations are utilized to determine the temperature accurately.
Resistance Thermometers
A resistance thermometer is an instrument used for measuring temperature changes; its reading is often displayed remotely, for example in a control room.
The resistance of metal conductors changes with temperature fluctuations. By observing these resistance
changes, it is possible to determine temperature changes. Instruments that utilize this principle are
known as resistance thermometers.
Construction
Figure 3.11 illustrates the structure of a resistance temperature detector (RTD), which is commonly
used for measuring temperature.
RTDs utilize materials such as copper, nickel, or platinum as their resistance elements.
Platinum wire is often wound around a ceramic bobbin to create the resistance element.
This resistance element is enclosed within a protective tube, typically made of carbon steel or
stainless steel.
Internal lead wires are used to connect the resistance element to external terminals.
The lead wires are covered with insulation to prevent short circuits, with fiberglass used for low and
medium temperatures and ceramic for high temperatures.
A protection tube shields the resistance element and internal lead wires from the surrounding
environment.
The protection tube is equipped with mounting attachments for installing the RTD at the
measurement point.
Rt = R0 (1 + αΔt)
Where:
Rt is the resistance at the measured temperature t (°C).
R0 is the resistance at the reference (room) temperature.
Δt is the temperature difference.
α is the temperature coefficient of resistance of the RTD material.
Rearranging, Δt = (Rt/R0 - 1) / α
By plugging in the values of Rt, R0, and α, we can easily calculate the temperature difference.
This allows us to accurately measure changes in temperature using the RTD.
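As a worked illustration of the relations above, a minimal Python sketch; the Pt100 values (R0 = 100 Ω, α ≈ 0.00385 per °C) are typical assumed figures, not taken from this text:

# Temperature rise from RTD resistance using Rt = R0 * (1 + alpha * dT),
# i.e. dT = (Rt / R0 - 1) / alpha.
ALPHA = 0.00385       # per degC, typical industrial platinum (assumed)
R0 = 100.0            # ohms at the reference temperature (a Pt100 element, assumed)

def temperature_rise(Rt, R0=R0, alpha=ALPHA):
    """Temperature rise above the reference temperature for a measured resistance Rt (ohms)."""
    return (Rt / R0 - 1.0) / alpha

print(temperature_rise(138.5))   # ~100 degC for a Pt100 reading 138.5 ohms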
Advantages:
Higher Accuracy: Provides more precise measurements.
Linear Output: Shows a smoother, more predictable response compared to thermocouples.
No Need for Temperature Compensation: Eliminates the requirement for additional adjustments
based on temperature changes.
Long-Term Stability: Maintains consistent performance over extended periods.
Disadvantages:
Costly: Generally, these instruments are expensive to procure.
Limited Sensitivity to Temperature Change: Even significant changes in input temperature produce
only small changes in resistance.
External Power Requirement: Requires an external power source for operation.
Low Sensitivity: Exhibits a reduced ability to detect subtle changes.
Optical Pyrometer
Working Principle
In optical pyrometry, the principle of temperature measurement through brightness comparison is
used. This method relies on observing changes in color as temperature increases, which serves as an
indicator of temperature.
An optical pyrometer compares the brightness of an image generated by a heat source with that of a
reference lamp set at a known temperature. By adjusting the current flowing through the lamp until
its brightness matches that of the image produced by the heat source, we effectively gauge the
temperature of the source.
This process hinges on the fact that the intensity of light emitted at any wavelength is contingent
upon the temperature of the object emitting it. Consequently, once calibrated, the current passing
through the lamp provides a reliable measure of the temperature of the heat source.
Construction
In one end of the instrument, as depicted in Figure 3.13, there's an eyepiece, and on the other end,
there's an objective lens. It's powered by a battery, and there's a rheostat and a millivoltmeter
connected to a reference temperature bulb to measure current. Between the objective lens and the
reference temperature lamp, there's an absorption screen. This screen helps widen the temperature
range that the instrument can measure. Additionally, there's a red filter between the eyepiece and the
lamp, which only allows a specific narrow range of light wavelengths, around 0.65 micrometers.
Operation
To measure the temperature of a source, its radiation is directed onto a filament of a reference
temperature lamp using an objective lens.
The eyepiece is adjusted until the filament of the reference temperature lamp is in clear focus and
appears superimposed on the image of the temperature source.
The observer adjusts the lamp current. If the filament appears dark against the image of the source,
it is cooler than the source; if it appears bright, it is hotter than the source; and if it cannot be
distinguished, it is at the same temperature as the source.
The observer adjusts the lamp current until the filament and the temperature source have the
same brightness, indicated by the filament disappearing in the superimposed image.
At this point, the current flowing through the lamp, indicated by the millivoltmeter connected to
it, becomes a measure of the temperature of the source, once calibrated.
Miscellaneous Measurements
Humidity Measurement
Humidity measurements trace back over 2000 years to ancient China, where the first attempts
were made.
The 15th century saw significant advancements, culminating in Leonardo da Vinci's gravimetric
hygrometer design.
By the late 17th century, dew-point meters emerged, utilizing ice cooling to condense water
vapor for measurement.
The late 18th century marked progress towards understanding relative humidity, with the
development of hygrometers employing hair.
In 1803, L.W. Gilbert established the concept of relative humidity as a ratio of present water
vapor to maximum water vapor at the same temperature.
Mechanical hygrometers, relying on hair stretching, and psychrometers were commonly used
before electronic innovations.
Finland's Prof. Vilho Väisälä pioneered the first electronic humidity sensor and radiosonde in
1934, followed by Dr. Dunnmore's resistive hygrometer in 1938.
Post-World War II, sensor technology surged, introducing advanced sensors and new
measurement methods like chilled mirror dew-point meters and optical hygrometers by the late
20th century.
Initiatives for a national humidity standard began in 1991, with the establishment of the
Technical Inspection Centre and later the Centre for Metrology and Accreditation.
The first primary standard for humidity debuted in 1993 after international comparisons.
Relative Humidity:
Relative humidity (RH) represents the amount of moisture in the air compared to the
maximum moisture it can hold at a given temperature. It's expressed as a percentage.
The formula for calculating relative humidity is:
RH = (Actual vapor pressure/Saturation vapor pressure) x 100%
The saturation vapor pressure can vary depending on whether it's with respect to water or
ice. So, the formula can be:
For water: RH = (Actual vapor pressure / Saturation vapor pressure with respect to water) x
100%
For ice: RH = (Actual vapor pressure / Saturation vapor pressure with respect to ice) x 100%
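A minimal Python sketch of the ratio defined above; the example vapour pressures are assumed round figures, roughly representative of air near 20 °C, not measured data:

def relative_humidity(e_actual, e_saturation):
    """RH in percent; both vapour pressures in the same units (e.g. hPa)."""
    return 100.0 * e_actual / e_saturation

# Assumed example: 14.0 hPa actual against about 23.4 hPa saturation over water near 20 degC
print(round(relative_humidity(14.0, 23.4), 1))   # ~59.8 %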
Hair Hygrometer
The hair hygrometer (Figure 3.14), a specific variant of absorption hygrometers, employs the
principle of mechanical moisture detection. This device utilizes the unique properties of human
or animal hair to gauge atmospheric moisture levels with precision.
Principle of Measurement
The hair hygrometer (as shown in Figure 3.15) capitalizes on the unique property of hair, which
expands or contracts in response to changes in relative humidity. This principle stems from the
fact that the dimensions of organic materials, including human hair, fluctuate with variations in
moisture content. As humidity levels shift, so does the moisture content within these materials,
consequently affecting their length.
When subjected to varying relative humidity levels ranging from 0 to 100%, the length of human hair,
after moisture removal, typically increases by 2 to 2.5%. It's important to note that different types of
human hair exhibit distinct responses, yet there remains a consistent correlation between hair length and
relative humidity.
The hair hygrograph, a type of hair hygrometer, incorporates a clock- driven drum mechanism to record
humidity levels on a chart accurately. Here's how it operates:
3. Cam Interaction:
Two specialized cams, intricately designed and jointed by a spring mechanism, play a pivotal role in
the hygrometer's precision.
The interaction between the main and sub cams determines the extent of movement exhibited by the
pen arm.
4. Proportional Measurement:
By carefully calibrating the cams, the hair hygrometer ensures that the pen arm's movement
accurately reflects changes in humidity. This calibration is essential, particularly because hair length
increases logarithmically with rising humidity, necessitating a proportional recording mechanism.
5. Recording Chart:
The hygrometer is equipped with a recording chart featuring a humidity scale divided into 100 equal
segments, each representing 1%. This design enables direct and precise reading of humidity levels
based on the chart's markings.
Applications
Hair hygrometers are employed within the temperature spectrum of 0°C to 75°C.
They are effective within a relative humidity range of 30% to 95%.
Limitations
Slow response time is a characteristic drawback of these hygrometers.
Continuous usage may lead to calibration drift in hair hygrometers.
Density Measurement
Density is a crucial aspect of measurement and instrumentation, serving two key purposes:
i. Determining the mass and volume of products.
ii. Assessing the quality of the product, particularly in industrial applications where density
measurement indicates product value.
Density is defined as the mass of a substance per unit volume under specific conditions, but it
varies with pressure and temperature, especially noticeable in gases.
Modern density measurement often relies on sampling techniques, employing two primary
approaches:
i. Static density measurement.
ii. Dynamic (on-line) density measurement, each utilizing various methods based on distinct physical
principles.
Selection of the most suitable method depends on the application and process characteristics.
Static methods are typically cost-effective and accurate, while dynamic methods offer
automation and advanced signal processing.
Despite advancements, there's no universal density measurement technique. Different methods
are used based on the product and material, often normalizing density under reference
conditions.
Specific gravity (SG) is a vital indicator, calculated by dividing the density of a substance by that
of a standard substance under identical conditions. For liquids and gases, specific gravities under
reference conditions are expressed as ratios to the density of water and air, respectively.
Hydrometers
Hydrometers (as shown in Figure 3.18) are widely utilized tools for measuring the density of liquids
and are governed by national and international standards like ISO 387.
These devices operate on the buoyancy principle, where the volume of a fixed mass is converted into
a linear distance using a sealed bulb-shaped glass tube with a measurement scale.
The bulb contains lead shot and pitch for ballast, with the mass varying depending on the density
range of the liquid being measured.
To measure density, the hydrometer is simply immersed in the liquid, and the density reading is
obtained from the scale, typically calibrated in units such as kg/m³.
Manufacturers often provide alternative scales, including specific gravity, API gravity, Brix, Brine,
etc., catering to various industries and applications.
Hydrometers can be calibrated for different ranges of surface tensions and temperatures, with
temperature corrections available for standard temperatures like 15°C, 20°C, or 25°C.
ISO 387 standardizes hydrometers for a density range of 600 kg/m³ to 2000 kg/m³, ensuring
consistency and accuracy in measurements.
While hydrometers offer advantages such as ease of use and versatility, they also have limitations
and drawbacks that should be considered in their application.
Advantages:
Cost-effective and user-friendly
Provides good resolution within a small range
Traceable to both national and international standards
Disadvantages:
Limited span necessitates multiple meters to cover a significant range
Fragility due to glass construction; metal and plastic versions sacrifice accuracy
Requires an offline sample of the fluid, which may not accurately represent process conditions
Pressure hydrometers for low vapor pressure hydrocarbons require precise pressure determination
Achieving high precision can be challenging, requiring corrections for surface tension and
temperature
Additional corrections may be needed for opaque fluids.
Sight Glass Level Gauges
Advantages:
Simplicity: These gauges offer a straightforward solution for liquid-level measurement.
Cost-effectiveness: They are relatively inexpensive compared to more complex methods.
Disadvantages:
Manual Operation: Not suitable for automated control systems, requiring manual monitoring.
Maintenance Needs: Regular cleaning is necessary for optimal performance.
Fragility: These gauges can be easily damaged, requiring careful handling.
Applications
While these gauges may not be ideal for industrial automation due to their manual operation, they
find utility in various settings. Common applications include tanks for storing lubricating oils or
water. They provide a simple means of obtaining level information, streamlining the process of
visually inspecting or dipping a tank. However, their use is typically limited to operator inspection.
In conclusion, while sight glasses and similar level gauges offer simplicity and affordability, they
require manual oversight and maintenance. Understanding their principles and limitations is crucial
for selecting the appropriate method for liquid- level measurement in different applications.
Biomedical Measurement
Biomedical measurement refers to the process of quantitatively assessing various physiological
parameters and phenomena within the human body using specialized instruments and techniques.
It plays a crucial role in clinical diagnosis, patient monitoring, medical research, and the
development of therapeutic interventions.
The field has witnessed significant progress driven by advances in technology, leading to
the development of highly accurate, reliable, and sophisticated measurement devices.
Electrocardiogram (ECG): ECG is used to measure the electrical activity of the heart. It provides
valuable information about heart rate, rhythm, and abnormalities such as arrhythmias.
Arterial Blood Pressure: Monitoring blood pressure helps assess cardiovascular health and detect
conditions such as hypertension or hypotension.
Respiratory Airflows: Measurement of respiratory parameters, including airflow rate and volume,
aids in diagnosing respiratory disorders such as asthma or chronic obstructive pulmonary disease
(COPD).
Sphygmomanometer
Definition: A sphygmomanometer, also known as a blood pressure meter or gauge, is a device utilized
for measuring blood pressure.
The term "sphygmomanometer" originates from the Greek words "sphygmos" (meaning "heartbeat" or
"pulse") and "manometer" (referring to a device for measuring pressure or tension).
Samuel Siegfried Karl Ritter von Basch introduced the sphygmomanometer in 1881, while Scipione
Riva-Rocci refined it into a more compact form in 1896.
Functionality
The primary function of a sphygmomanometer is to determine an individual's blood pressure, which
is a crucial physiological parameter.
It operates by temporarily obstructing the flow of blood through an artery, typically the brachial
artery in the arm, using an inflatable cuff.
Pressure within the cuff is gradually released while a stethoscope is used to detect the return of
blood flow, indicated by the characteristic sounds known as Korotkoff sounds.
Components
A typical sphygmomanometer consists of three main components: an inflatable cuff, a pressure
gauge or manometer, and a mechanism for inflation and deflation.
The cuff is wrapped around the upper arm and inflated to a pressure exceeding the systolic blood
pressure to occlude arterial blood flow temporarily.
The pressure gauge displays the pressure within the cuff, typically in millimeters of mercury
(mmHg), allowing the healthcare provider to accurately read the blood pressure.
Types
Sphygmomanometers come in various types, including mercury, aneroid, and digital models.
Working Mechanism
Figure 3.17 showcases a transmission mechanism commonly employed in various measuring
instruments. In this setup, a sturdy rod denoted as R is firmly affixed to a toothed sector, labeled as
S, positioned at point T. This toothed sector meshes with the pointer pinion, identified as P,
establishing a linkage for transmitting motion. It's crucial to note that the precision of this
mechanism is vital for accurate measurement outcomes.
The contact interface between the mechanism and the measurement element is provided by the
diaphragm capsules, represented by C. These capsules play a pivotal role in translating physical
phenomena, such as pressure or displacement, into measurable signals. Ensuring
consistent and reliable contact between the mechanism and the diaphragm capsules is essential for
maintaining measurement accuracy and repeatability.
This transmission mechanism design is widely utilized across various biomedical measurement
instruments, where precise and reliable measurement of physiological parameters is paramount.
Displacement Measurement
Manufacturing: Linear Variable Differential Transformers (LVDT) are extensively used for quality
control in machining processes, ensuring precise positioning and dimensional accuracy.
Robotics: LVDTs find application in robotic arms for accurate positioning and control, enhancing
automation efficiency in industries such as automotive assembly.
Flow Measurement
Chemical Industry: Rotameters and turbine meters are employed for measuring flow rates of liquids
and gases in chemical processing plants, facilitating precise control of ingredient proportions and
process efficiency.
Water Management: Turbine meters are utilized in water treatment plants and distribution networks
for monitoring water flow, aiding in conservation efforts and leak detection.
Temperature Measurement
Food Industry: Resistance thermometers are utilized in food processing to monitor and control
temperature during cooking, preserving food quality and safety.
Energy Sector: Optical pyrometers are used in power plants for measuring high temperatures in boilers
and furnaces, ensuring operational safety and efficiency.
Miscellaneous Measurements
Climate Control: Humidity measurement with hair hygrometers is crucial in HVAC systems for
maintaining optimal indoor air quality and comfort.
Beverage Industry: Hydrometers are utilized in breweries and distilleries for measuring the density of
liquids during fermentation and distillation processes, ensuring product consistency and quality.
Chemical Processing: Sight glass float gauges are employed in tanks and vessels for level measurement,
enabling precise monitoring and control of chemical processes.
Biomedical Measurement
Healthcare: Sphygmomanometers are indispensable devices in healthcare facilities for measuring blood
pressure, aiding in the diagnosis and management of cardiovascular diseases.
Applied mechanical measurements find extensive application across various industries and sectors,
contributing to enhanced efficiency, safety, and quality in processes ranging from manufacturing to
healthcare. By employing precise measurement techniques and instruments, industries can achieve
higher levels of productivity, reliability, and regulatory compliance.
Unit Summary
This unit explores techniques and instruments employed for quantifying key parameters in mechanical
systems, facilitating accurate analysis and control. The unit encompasses diverse aspects such as speed
measurement, displacement measurement, flow measurement, temperature measurement, and several
miscellaneous measurements crucial for engineering and scientific endeavors.
1. Speed Measurement:
Classification of Tachometers: A comprehensive overview of different types of tachometers used for
measuring rotational speed in mechanical systems.
Revolution Counters: Examination of devices utilized for counting revolutions per unit time, aiding
in assessing the performance of rotating machinery.
Eddy Current Tachometers: Insight into the principle and application of eddy current- based
tachometers for high-precision speed measurement.
2. Displacement Measurement:
Linear Variable Differential Transformers (LVDT): In-depth discussion on LVDTs, which are widely
employed for measuring linear displacement with high accuracy and reliability.
a) Revolution counters
b) Optical Pyrometer
c) Linear Variable Differential Transformers (LVDT)
d) Turbine meter
3. Which instrument is commonly used for flow measurement in industrial applications?
a) Eddy current tachometers
b) Optical Pyrometer
c) Rotameters
d) Resistance thermometers
4. Which type of thermometer measures temperature by sensing changes in electrical resistance?
a) Optical Pyrometer
b) Resistance thermometers
c) LVDT
d) Hydrometer
5. What is the purpose of a hair hygrometer?
a) Density measurement
b) Humidity measurement
c) Temperature measurement
d) Liquid level measurement
6. Which instrument is used to measure the density of a liquid?
a) Hydrometer
b) Rotameter
c) LVDT
d) Optical Pyrometer
7. What type of measurement does a float gauge assist with?
4 Introduction to Measurements
.
acceptable shaft can fit into the largest acceptable hole while maintaining the desired fit and
clearance.
Limits
Limits refer to the permissible range of dimensions assigned to a specific component, defining the
lower and upper thresholds within which the component's dimensions must fall to meet desired
specifications. To illustrate this concept, let's consider a cylindrical shaft with a design specification
requiring a diameter of 50 mm, with a tolerance of ±0.2 mm. Calculating the limits involves:
Lower Limit (LL): This is obtained by subtracting the tolerance from the desired dimension. In this
case, LL = 50 mm-0.2 mm = 49.8 mm.
Upper Limit (UL): Determined by adding the tolerance to the desired dimension. Here, UL = 50 mm
+ 0.2 mm = 50.2 mm.
In summary, the limits for the shaft's diameter in this example are 49.8 mm (LL) and 50.2 mm (UL).
These limits are crucial for ensuring that the actual diameter of the shaft remains within the
predefined range during manufacturing. Deviating below 49.8 mm or exceeding 50.2 mm would
render the shaft out of tolerance and fail to meet specified requirements.
Fits
"Fits" refer to the relationship between two components when joined during assembly. dictating the
degree of tightness or looseness and influencing the presence of clearance or interference. Engineers
select fit types based on factors such as assembly function, required precision, ease of assembly, and
environmental conditions.
Types of Fits
Clearance Fit: The maximum dimension of the hole exceeds the minimum dimension of the shaft in
a clearance fit, resulting in a gap or clearance upon assembly, as illustrated in Figure 4.1.
Example: A bolt inserted into a nut demonstrates a classic clearance fit, allowing easy insertion and
removal.
Interference Fit: In an interference fit, the maximum dimension of the hole is smaller than the
minimum dimension of the shaft, leading to a tight connection upon assembly.
Example: Press-fitting a bearing into a housing showcases an interference fit, where force or
temperature manipulation is required for assembly.
Transition Fit: Transition fits provide a balance between clearance and interference, offering slight
clearance for assembly ease while providing some interference for stability.
Example: Assembling a piston into a cylinder represents a transition fit, allowing for easy assembly
while ensuring proper sealing and stability.
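These three categories can be checked numerically from the limit dimensions of the mating parts. A minimal Python sketch; the 40 mm limit values in the examples are hypothetical, chosen only to show one case of each type:

def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    """Classify a fit from the limit dimensions (all in the same units)."""
    if hole_min > shaft_max:        # even the smallest hole clears the largest shaft
        return "clearance fit"
    if hole_max < shaft_min:        # even the largest hole is smaller than the smallest shaft
        return "interference fit"
    return "transition fit"         # ranges overlap: may give clearance or interference

print(classify_fit(40.000, 40.025, 39.950, 39.975))   # clearance fit
print(classify_fit(40.000, 40.025, 40.035, 40.050))   # interference fit
print(classify_fit(40.000, 40.025, 40.010, 40.030))   # transition fit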
2. Shaft basis system: In this system, the different clearances and interferences are obtained by
associating various holes with a single shaft whose upper deviation is zero. The clearances and
interferences are determined by associating various holes with a single shaft. Here, the shaft acts as
the reference feature, and its upper deviation is set to zero. The clearance or interference is then
calculated based on the relationship between the shaft and the hole. This system is beneficial when
the priority is to ensure a specific fit for a range of holes with a single shaft, offering flexibility and
consistency in assembly processes.
Selection of Fits
Various factors, including manufacturing processes, tooling capabilities, and functional
requirements, influence the selection of fits in engineering applications. The hole basis system is
frequently preferred due to practical considerations associated with hole production tools. Producing
holes of odd sizes with fixed-size tools such as drills and reamers is challenging, making the hole basis
system more useful and widely utilized. Table 4.1
provides a comprehensive overview of commonly used types of fits, categorized based on shaft sizes
and their resulting fits.
Tolerance
Tolerance denotes the allowable degree of variation in the dimensions of a component from its specified
or nominal dimension. This critical specification ensures that even when absolute precision is lacking in
the manufacturing process, the component remains functional and seamlessly integrates into the
designated assembly.
Let's consider a cylindrical shaft with an intended diameter of 50 mm and a tolerance of ±0.1 mm. In
this case, the nominal dimension is 50 mm, and the specified tolerance is ±0.1 mm, indicating that the
actual diameter of the shaft may fluctuate within a range of 50 mm ± 0.1 mm.
Lower Limit (LL): Calculated by subtracting the tolerance from the nominal dimension, the lower limit
is LL = 50 mm - 0.1 mm = 49.9 mm.
Upper Limit (UL): Conversely, the upper limit is computed by adding the tolerance to the nominal
dimension. In this case, UL = 50 mm +0.1 mm = 50.1 mm.
Therefore, for this specific example, the tolerance range for the shaft's diameter spans from 49.9 mm to
50.1 mm. Tolerance is pivotal in ensuring that the dimensions of the manufactured component reside
within this predefined range. If the actual diameter of the shaft measures below 49.9 mm or exceeds
50.1 mm, it would be considered out of tolerance.
Types of Tolerance
Various tolerance types are employed in engineering and manufacturing to precisely define the
acceptable degree of variation in a component's dimensions. These tolerance categories offer precise
insights into intended functionality and manufacturing requisites. Unilateral Tolerance specifies
allowable variation solely on one side of the nominal dimension, which proves invaluable when a part's
functionality relies on a specific direction of variation. Bilateral tolerance delineates allowable variation
on both sides of the nominal dimension, which is applicable when no preference for variation direction
exists. Limit Tolerance establishes allowable variation by specifying lower limit (LL) and upper limit
(UL) values for a given dimension. It is typically employed when adherence to a prescribed range is
paramount. Geometric tolerance dictates acceptable variation in geometric aspects such as form,
orientation, location, and profile, commonly denoted using specific symbols to control geometric
properties essential for functionality and assembly.
Standard Tolerances
Standard tolerances are crucial parameters defined by the Bureau of Indian Standards (BIS) to ensure
uniformity and precision in engineering and manufacturing processes. BIS outlines 18 standard grades
of tolerances, each designated with specific classifications from IT01 to IT16. These designations
provide engineers and manufacturers with standardized guidelines for determining acceptable levels of
dimensional variation in components and products.
The classification system begins with IT01, which represents the most precise tolerance grade, and
progresses sequentially to IT16, indicating a broader tolerance range. Each designation signifies a
predetermined level of permissible deviation from the nominal dimension, allowing for consistent
quality control and reliable performance across diverse applications.
Standard tolerance unit, i = 0.45 ∛D + 0.001D
Where i is the standard tolerance unit in µm and D is the diameter in mm.
Grade IT5 IT6 IT7 IT8 IT9 IT10 IT11 IT12 IT13 IT14 IT15 IT16
Value 7i 10i 16i 25i 40i 64i 100i 160i 250i 400i 640i 1000i
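As a worked illustration of the formula and grade multipliers above, a minimal Python sketch; the 50 mm example diameter is arbitrary, and for simplicity it is used directly rather than the geometric mean of the diameter step that the standard prescribes:

IT_MULTIPLIER = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64,
                 11: 100, 12: 160, 13: 250, 14: 400, 15: 640, 16: 1000}

def standard_tolerance_um(D_mm, grade):
    """Standard tolerance in micrometres for diameter D_mm and IT grade (IT5-IT16)."""
    i = 0.45 * D_mm ** (1.0 / 3.0) + 0.001 * D_mm   # standard tolerance unit, micrometres
    return IT_MULTIPLIER[grade] * i

print(round(standard_tolerance_um(50.0, 7), 1))   # ~27.3 micrometres for IT7 at 50 mm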
Selective Assembly
Selective assembly refers to a strategic concept in manufacturing where subcomponents are carefully
chosen and assembled to achieve a final assembly that meets the highest tolerance specifications. This
approach involves meticulously selecting and matching individual parts based on their dimensional
accuracy and other critical factors to ensure the overall assembly conforms precisely to the desired
specifications. By employing selective assembly techniques, manufacturers can optimize the quality and
performance of the final product while minimizing variations and defects. This method is particularly
beneficial in industries where tight tolerances are crucial, such as aerospace, automotive, and precision
engineering.
Selective assembly involves thorough inspection and testing of components to identify those with the
most precise dimensions and characteristics. These selected parts are assembled, leveraging their
strengths to achieve the desired accuracy and functionality in the final product. The selective assembly
consists of the following process steps.
1. Measurement and Sorting: Individual parts (typically mating pairs like shafts and holes) are
measured for their actual dimensions.
2. Grouping by Size: Parts are then sorted into groups based on their measured size. These groups
typically correspond to specific tolerance ranges.
3. Assembly with Matched Parts: Parts from corresponding size groups are paired together during
assembly. For example, a shaft from a larger size group would be assembled with a hole from a
larger size group.
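A minimal Python sketch of this sort-and-match idea; the 25 mm nominal size, 0.005 mm group width, and part dimensions are all illustrative assumptions:

def size_group(measured_dia, nominal, group_width=0.005):
    """Index of the size group that a measured diameter falls into."""
    return int((measured_dia - nominal) // group_width)

def pair_parts(shafts, holes, nominal=25.0):
    """Pair each shaft with a hole from the same size group, where one is available."""
    holes_by_group = {}
    for h in holes:
        holes_by_group.setdefault(size_group(h, nominal), []).append(h)
    pairs = []
    for s in shafts:
        group = holes_by_group.get(size_group(s, nominal), [])
        if group:                          # assemble only when a matching hole exists
            pairs.append((s, group.pop()))
    return pairs

shafts = [25.002, 25.007, 25.011]
holes = [25.004, 25.012, 25.008]
print(pair_parts(shafts, holes))   # each shaft paired with a hole from its own size group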
Advantages
Improved Fit: Selective assembly reduces variability in clearance or tightness between mating
components by ensuring that parts with compatible sizes are assembled. This leads to a more
consistent and predictable final product.
Reduced Scrap: Parts with slight dimensional deviations outside the intended tolerance range can
still be paired with compatible counterparts. This minimizes waste and improves material utilization.
Enhanced Performance: Tighter control over fit can improve the assembled product's performance.
For example, in a bearing assembly, selective pairing can minimize friction and wear.
Applications
Selective assembly is applied where precision of fit is crucial. This includes applications like
bearings, gears, and valve assemblies, where tight tolerances are essential for smooth operation and
long lifespan. It is also attractive where high-volume production is desired: selective assembly can
streamline production without compromising final product quality by allowing some variation in
individual parts, and utilizing parts that might otherwise be scrapped due to slight dimensional
deviations can be a cost-effective advantage.
Limitations:
1. Implementing selective assembly adds an additional sorting and pairing step to the
manufacturing process, which can increase complexity and, potentially, production time.
2. Accurate measurement of individual parts is crucial for effective selective assembly, requiring
additional inspection equipment and procedures.
3. Selective assembly is most beneficial for parts with well-defined tolerances and mating
relationships. It may be unsuitable for simpler assemblies or components with less critical
dimensional requirements.
Interchangeability
Traditionally, manufacturing workflows exhibited limited output. Skilled artisans were responsible
for creating and fitting components, achieving the desired fit through manual adjustments. The
advent of mass production, however, revolutionized contemporary manufacturing practices. Modern
industrial environments witness the fabrication of parts by specialized workers across geographically
dispersed facilities, followed by their subsequent assembly at separate locations. Within this
decentralized framework, the dimensional consistency of mating parts becomes paramount. Each
component must strictly adhere to pre-defined dimensional specifications and tolerance limits to
guarantee seamless assembly during the final product integration stage. This stringent adherence is
critical to accommodate the geographically dispersed nature of modern manufacturing, where parts
originating from various sources must integrate flawlessly.
proper assembly. This system is commonly used in applications where the size and accuracy of the
hole are critical, such as bearing housings or mounting points.
The hole basis system operates by designating the hole's nominal size, with a zero lower deviation
(fundamental deviation), as the basic size. Varied clearances or interferences are then achieved by
adjusting the limits of the mating part, typically the shaft, to attain different classes of fit.
Essentially, the hole's limits remain fixed while those of the shaft are adjusted to achieve the desired
type of fit. This means that the dimensional range of the hole stays constant across different fits of
the same accuracy level.
In contemporary engineering practices, the hole basis system is predominantly favored due to its
inherent advantage of the ease of adjusting shaft sizes compared to hole sizes. This preference is
largely driven by the widespread use of drills, reamers, and similar tools for producing the majority
of holes in engineering works. The necessity of employing a large number of tools of varying sizes
to adjust hole dimensions poses a logistical challenge, making it more convenient to modify shaft
sizes instead. This simplifies manufacturing processes and reduces the complexity associated with
tooling requirements. However, there are situations where the shaft basis system proves to be more
advantageous than the hole basis system. Notably, in the manufacturing of large-sized parts, the
shaft basis system may offer benefits such as increased flexibility and efficiency in achieving
desired fits.
In the shaft basis system, the design size of a shaft, with a zero upper deviation (fundamental deviation),
serves as the basic size. Varied clearances or interferences are then achieved by adjusting the limits of
the hole to attain different types of fit. In essence, the limits of the shaft remain constant while those of
the holes are varied to achieve the required fit.
Figure 4.5 Gauge design using Taylor's Principle for hole and shaft
Maximum Material Condition (MMC) and Minimum Material Condition (also called Least Material Condition,
LMC) are critical concepts in engineering design and manufacturing, especially concerning fits and
tolerances. Consider a shaft and a hole both having a specified dimension of 40 ± 0.05 mm.
Maximum Material Condition refers to the state where the shaft or hole contains the maximum material
allowed within the specified tolerance. For the shaft, Maximum Material Condition occurs at the upper
limit of the dimensional tolerance range, meaning the shaft would have a diameter of 40.05 mm (40+
0.05 mm). Conversely, for the hole, the Maximum Material Condition occurs at the lower limit of the
dimensional tolerance range, resulting in a diameter of 39.95 mm (40-0.05 mm). In Maximum Material
Condition, the parts have the tightest fit possible within the specified tolerance range. The Minimum
Material Condition refers to the state where the shaft or hole contains the minimum amount of material
allowed within the specified tolerance. For the shaft, the Minimum Material Condition occurs at the
lower limit of the dimensional tolerance range, resulting in a diameter of 39.95 mm (40 - 0.05 mm). For
the hole, Minimum Material Condition occurs at the upper limit of the dimensional tolerance range,
meaning the hole would have a diameter of 40.05 mm (40 + 0.05 mm). In Minimum Material Condition,
the parts have the loosest fit possible within the specified tolerance range.
Angular Measurement
Length standards like the foot and meter are human inventions, created arbitrarily. Due to challenges in
replicating these standards accurately, the wavelength of light has become a reference standard for
length. However, the standard for angles, derived from circles, is not man-made but inherent in nature.
Whether termed as degrees or radians, angles have a direct relationship with circles, which are formed
by a line revolving around one of its ends.
Whether defined as the circumference of a planet or the orbit of an electron, circles maintain a
consistent relationship with their parts.
In metrology, the science of precise measurement, angular measurement plays a vital role in ensuring
the accuracy of objects and their functionality. It's crucial for tasks like verifying angles of cuts, slopes,
and tapers on machine parts. Metrology demands high precision, and various instruments like
protractors, sine bars, and angle gauges are employed to achieve this. The selection of the appropriate
tool and the most fitting unit (degrees or radians) depend on the specific requirement and the desired
level of accuracy. Accurate angle measurement is crucial in various industrial settings, from workshops
to tool rooms, for assessing interchangeable parts, gears, jigs, and fixtures. Measurements include taper
angles of bores, gear flank angles, seating surface angles of jigs, and taper angles of jibs. Interestingly,
in machine part alignment assessment, angle measurement serves to detect errors in straightness,
parallelism, and flatness, often with highly sensitive instruments like autocollimators. A spectrum of
angle measurement instruments exists, ranging from simple scaled devices to advanced laser
interferometry-based tools. Basic types, such as vernier protractors, offer improved discrimination (least
count) and are supported by mechanical mechanisms for accurate positioning and locking. Spirit levels
find extensive application in mechanical and civil engineering, aiding in aligning structural elements
like beams and columns. Instruments like clinometers, based on spirit level principles but with higher
resolution, are popular in metrology. This chapter explores popular angle measurement devices widely
utilized across industries.
2. Angle Gauges:
Also referred to as bevel protractors or inclinometers, angle gauges are specialized instruments
tailored for the high-precision measurement of angles.
Comprising a movable arm or blade affixed to a base adorned with a calibrated scale, angle gauges
are pivotal in machining, tool making, and metrology for verifying machine part and component
angles.
3. Sine Bars:
Sine bars emerge as precision measuring devices dedicated to facilitating accurate angular
measurement and inspection endeavors. Comprising two parallel bars or cylinders mounted on a flat
base, sine bars leverage trigonometric principles to achieve high-precision angle measurement by
altering the relative height of one end with respect to the other.
4. Autocollimators:
Autocollimators represent optical marvels employed for the meticulous measurement of minute
angular deviations and alignments with unparalleled precision. Typically integrating a light source,
collimator lens, and viewing telescope, autocollimators find widespread application in optics,
astronomy, and precision engineering for alignment and calibration endeavors.
6. Theodolites: Theodolites stand as precision optical instruments meticulously crafted for the
measurement of horizontal and vertical angles in surveying and engineering applications. Consisting
of a telescope mounted on a rotating platform embellished with graduated scales, theodolites are
indispensable in tasks such as land surveying, construction layout, and structural alignment.
7. Digital Angle Finders: Rounding off the roster, digital angle finders emerge as electronic marvels
harnessed for the high-accuracy measurement of angles with unparalleled ease of use. Typically
featuring a digital display and integrated sensors for direct angle measurement, digital angle finders
find widespread adoption in carpentry, woodworking, and metalworking for precise angle
quantification in fabrication and assembly endeavors.
A hallmark feature of the universal bevel protractor is its ability to provide dual readings, facilitating
measurements in both clockwise and counterclockwise directions from the zero reference point. This
adaptability renders it suitable for a myriad of tasks across diverse industries. Equipped with
extendable and retractable blades, the protractor accommodates measurements on various surfaces,
including planar, internal, and external angles. The pivoting base enhances maneuverability and ease
of adjustment when positioning the protractor on the object under examination.
Industries reliant on precise angle measurements, such as engineering, metalworking, and
woodworking, commonly employ the universal bevel protractor as a staple tool. Its utility extends to
tasks like machine and tool angle adjustments and the measurement of complex shapes and surfaces.
minutes. Similar to the working principle of the vernier caliper, the zero line on the vernier scale
coincides with the main scale to determine the main scale reading. When divisions on the
vernier scale align with those on the main scale, the vernier scale reading is noted. By
combining these values with the least count of the Universal Bevel Protractor, precise angle
measurements can be calculated.
In the given scenario, to determine the total reading of the Universal Bevel Protractor, we utilize
the formula:
Total Reading = Main Scale Reading + (Number of divisions at which it coincides with any
division on the main scale x Least count of the Vernier scale).
Given:
Main scale reading = 10°
Vernier scale reading (number of the division at which it coincides with any division on the main
scale) = 3rd division
Least count of the Universal Bevel protractor = 5 minutes
Substituting the provided values into the formula:
Total Reading = 10°+ (3x5 minutes)
Total Reading = 10°+ 15 minutes
Total Reading = 10° 15'
Therefore, the total reading of the Universal Bevel Protractor in the given case is 10 degrees and
15 minutes.
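The same calculation as a minimal Python sketch, returning the reading as whole degrees and minutes; the 5-minute least count and example values follow the worked case above:

def protractor_reading(main_deg, vernier_division, least_count_min=5):
    """Total reading as (degrees, minutes) from the main scale and coinciding vernier division."""
    total_minutes = main_deg * 60 + vernier_division * least_count_min
    return divmod(total_minutes, 60)

print(protractor_reading(10, 3))   # (10, 15)  ->  10 degrees 15 minutes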
Advantages
1. The universal bevel protractor offers precise angle measurements, crucial for accurate adjustments in
various applications.
2. Capable of measuring both internal and external angles, as well as surface inclinations, it caters to a
wide range of measurement needs. Its versatility makes it suitable for use in various industries and
applications.
3. Its ability to provide dual readings enhances flexibility by facilitating measurements in both
clockwise and counterclockwise directions.
4. Equipped with user-friendly features like extendable blades and a pivoting base, it is adaptable to
different measurement scenarios.
5. Enables precise alignment of machine tools and components, enhancing operational efficiency.
6. Useful in quality control and inspection processes, ensuring products meet specifications and
supports precise layout and machining tasks, aiding in accurate fabrication processes.
Disadvantages
1. The large number of components may pose a challenge for inexperienced users.
2. Certain models are delicate and require careful handling to avoid damage.
3. Relatively expensive compared to simpler angle measurement tools, which may be a barrier for some
users.
4. Despite its versatility, it may not suit high-precision applications due to its restricted measurement
range.
5. Regular calibration is necessary to maintain accuracy, incurring additional time and cost for users.
Sine Bar
A sine bar, alternatively referred to as a precision angle device, is a specialized precision measuring
instrument used in machining and metrology. Comprising two parallel bars featuring
accurately angled surfaces, typically set at intervals of 5°, 10°, or 15°, it plays a pivotal role in the
measurement of angles with utmost accuracy. Machinists rely on this tool to facilitate precise
machining and inspection processes, ensuring the quality and accuracy of manufactured
components. Its primary function lies in accurately measuring and setting angles with exceptional
precision and accuracy. Widely utilized across machine shops, quality control laboratories, and
manufacturing sectors, sine bars play a crucial role in ensuring the precise alignment and machining
of workpieces at predetermined angular inclinations.
A sine bar, when paired with slip gauge blocks, emerges as a precision angular measurement tool
esteemed for its accuracy in evaluating angles across machining, grinding, and inspection tasks.
Renowned for its proficiency in both precise angle measurement and workpiece alignment, this
instrument is crafted from high-quality, corrosion-resistant steel. Engineered with durability in mind,
sine bars are designed to endure wear while retaining accuracy, rendering them indispensable for
tasks demanding meticulous angle measurements and alignments.
Construction of Sine Bar
The construction of a sine bar involves a rigid steel gauge body featuring two equally sized rollers
aligned parallel to each other along their axes. The top surface of the steel bar runs parallel to a line
connecting the centers of the rollers, with the length of the sine bar precisely corresponding to the
distance between these roller centers, typically set at 100 mm, 200 mm, or 300 mm. Relief holes
strategically placed reduce its weight. However, a sine bar alone cannot effectively measure angles;
it requires the use of slip gauges and elevation gauges.
1. Surface Plate: In order to ensure that the sine bar has a precise horizontal reference surface, a
surface plate serves as the basis for positioning the sine bar and related parts. The sine bar's top
surface must be parallel to the surface plate's horizontal planes for proper alignment, which is
very important.
2. Dial Gauge: Dial gauges assess surface uniformity, registering zero deflections during traversal
to confirm surface parallelism with its base. In the sine bar setup, dial gauges are vital for
verifying the alignment of the workpiece's upper surface with the surface plate or measuring the angle
of the tapered workpiece.
3. Block Gauges or Slip Gauges: Block gauges, also called slip gauges, act as precise standards for
height and length measurements, enhancing the accuracy of sine bar setups.
4. Vernier Height Gauge: Vernier height gauges determine the height of the sine bar rollers,
facilitating angle measurements for larger components within the setup.
Working Principle
The working principle of a sine bar is rooted in fundamental trigonometric principles. When one
roller of the sine bar is positioned on a surface plate and the other roller is set at the height of the slip
gauge, it establishes a triangular configuration involving the sine bar, surface plate, and slip gauge. In
this triangular setup, the hypotenuse corresponds to the sine bar itself, formed by integrating vertical
slip gauges with the surface plate base. If we denote the slip gauge height as H and the sine bar length
as L, the sine ratio is expressed as H divided by the length of the sine bar (L). Consequently, the
angle e can be determined by calculating the inverse sine (sin^-1) of H divided by L, ensuring precise
angular measurements.
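This relationship can be illustrated with a few lines of Python (a minimal sketch; the function names and the 200 mm bar length are illustrative assumptions, not taken from the text):

import math

def slip_gauge_height(angle_deg: float, bar_length_mm: float = 200.0) -> float:
    """Height H of the slip-gauge stack needed to tilt the sine bar to angle_deg."""
    return bar_length_mm * math.sin(math.radians(angle_deg))

def sine_bar_angle(height_mm: float, bar_length_mm: float = 200.0) -> float:
    """Angle (in degrees) set by a slip-gauge stack of height H under one roller."""
    return math.degrees(math.asin(height_mm / bar_length_mm))

# Example: setting a 30 degree angle with a 200 mm sine bar
H = slip_gauge_height(30.0)                      # 100.0 mm
print(round(H, 3), round(sine_bar_angle(H), 3))  # 100.0, 30.0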
Spirit Level
A spirit level, a fundamental tool in engineering metrology, traces its origins back to practices in cold
western regions. Originally filled with 'spirits of wine' to prevent freezing, these instruments earned the
general term "spirit level." Functioning as an angular measuring device, the spirit level employs a
bubble that consistently moves to the highest point within a glass vial. A typical spirit level comprises a
base, known as the reference plane, which rests on the machine part under assessment for straightness or
flatness determination. When the base is horizontal, the bubble centers on the graduated scale engraved
on the glass. As the base deviates from the horizontal, the bubble shifts to the highest point of the tube.
The bubble's position relative to the scale measures the machine part's angularity, with the scale
calibrated to directly indicate the reading in minutes or seconds. The cross-test level, positioned at a
right angle to the main bubble scale, also indicates inclination in the perpendicular plane. A screw
adjustment facilitates setting the bubble to zero by referencing it with a surface plate.
The performance of a spirit level hinges on the geometric relationship between the bubble and two
references: gravity acting at the center of the bubble and the scale against which the bubble position is
read. Sensitivity is determined by the radius of curvature of the bubble formed against the inside surface
of the glass vial and the base length of its mount. For a level with graduations at a 2 mm interval
representing a tilt of 10", the tilt angle is θ = 10 × π/(180 × 3600) ≈ 4.85 × 10⁻⁵ rad, which gives a
vial radius of curvature R = 2 mm/θ ≈ 41.25 m. If the base length is 250 mm, then θ = h/250, so the
rise of one end corresponding to one 2 mm division is h ≈ 0.012 mm. Sensitivity increases with a larger radius of curvature or a
shorter base length, with a preferred sensitivity of 10" per division for precision measurement.
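The sensitivity figures quoted above can be reproduced numerically; the short Python sketch below assumes the same 2 mm division, 10" tilt, and 250 mm base (variable names are illustrative):

import math

DIVISION_MM = 2.0          # scale division length on the vial
TILT_ARCSEC = 10.0         # tilt represented by one division
BASE_MM = 250.0            # base length of the level

theta = TILT_ARCSEC * math.pi / (180 * 3600)   # tilt in radians (~4.85e-5)
radius_m = (DIVISION_MM / theta) / 1000        # vial radius of curvature in metres
rise_mm = BASE_MM * theta                      # rise of one end for one division

print(f"R = {radius_m:.2f} m, h = {rise_mm:.4f} mm")   # ~41.25 m, ~0.0121 mm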
While a spirit level is primarily used for aligning machine parts and assessing flatness and straightness
rather than measuring angles, it is essential to ensure accuracy by carefully setting the vial relative to the
base. To minimize error, a recommended procedure involves taking readings from both ends of the vial,
reversing the base, repeating readings, averaging the four readings, and repeating the process for critical
cases.
Clinometer
A clinometer is a specialized application of a spirit level, where the spirit level is mounted on a
rotary member within a housing. One face of the housing serves as the instrument's base, while a
circular scale on the housing allows for measuring the angle of inclination of the rotary member
relative to its base. Clinometers are primarily used to determine the included angle between two
adjacent faces of a workpiece. To achieve this, the instrument's base is placed on one face of the
workpiece, and the rotary body is adjusted until a zero reading of the bubble is obtained. The angle
of rotation is then noted on the circular scale against the index. A similar reading is taken on the
second face of the workpiece, and the included angle between the faces is calculated as the
difference between the two readings.
Working Principle
To determine the inclination using a clinometer (Figure 4.10), you first need to level the bubble unit,
then read the scales through the reader eyepiece. The upper aperture displays two pairs of double
lines and two single lines. Adjust the micrometer knob until the single line aligns precisely between
the double lines, setting the micrometer scale. Then, read the main and micrometer scales, and sum
their readings to obtain the desired angle. This setup cancels out any centering error of the circle.
The scales are illuminated by a low-voltage lamp, ensuring clear visibility. Additionally, the bubble
unit is daylight illuminated and equipped with a lamp for alternative illumination. A locating face on
the back allows horizontal use with the accessory worktable or reflector unit. To measure surface
inclination, adjust the clinometer's vial until it is approximately level, then use the slow-motion
screw for final centering adjustment. To measure the angle between two surfaces, place the
clinometer on each surface sequentially, and calculate the difference in angle.
The clinometer can also be used as a precision setting tool for setting tool heads or tables at specific
angles. First, set the micrometer scale, then rotate the glass scale to align the relevant graduation
with the index, using the slow-motion screw for final adjustment. Tilting the work surface until the
bubble is centered sets it to the specified angle relative to a level plane.
Applications
Clinometers find applications in checking angular faces and relief angles on large cutting tools and
milling cutter inserts. They are also used for setting inclinable tables on jig boring machines and
performing angular work on grinding machines. The Hilger and Watts type of clinometer is
commonly used, featuring a circular glass scale divided from 0° to 360° at 10' intervals. A
subdivision of 10' is achievable with an optical micrometer, while a coarse scale marked every 10
degrees is provided for rough work. Some instruments include a worm and quadrant arrangement for
readings up to 1' accuracy. In certain clinometers, no bubble is present; instead, a graduated circle
supported on accurate ball bearings automatically aligns with the true vertical position when
released. Readings are taken against the circle with the aid of a vernier, allowing for an accuracy of
up to 1 second.
Angle Gauges
Dr. Tomlinson of N.P.L. developed the first combination of angle gauges. This set comprises
thirteen individual gauges, combined with one square block and one parallel straight edge, enabling
the setup of any angle to the nearest 3 seconds. Similar to the assembly of slip gauges to achieve
linear dimensions, angle gauges can be stacked to attain a desired angle. Constructed from hardened
steel and meticulously seasoned, angle gauges ensure enduring angular precision. The measuring
faces undergo careful lapping and polishing to achieve high accuracy and flatness, akin to slip
gauges. These gauges measure approximately 3 inches (76.2 mm) in length and 5/8 inch (15.87 mm)
in width, with lapped faces accurate to within 0.0002 mm. The angle between the two ends is
maintained within ± 2 seconds.
This diagram demonstrates how two gauge blocks can be combined to produce different angles. When a
5° angle block is paired with a 30° angle block (as shown in Fig. 5.14(a)), the resulting angle is 35°.
Conversely, if the 5° angle block is reversed and combined with the 30° angle block (as illustrated in
Fig. 5.14(b)), the resulting angle becomes 25°. Reversing an angle block subtracts its value from the
total angle generated by the other blocks, allowing for diverse angle combinations with minimal gauges.
Constructed from hardened steel, angle gauges undergo precision lapping and polishing to ensure
accuracy and flatness. Typically measuring about 75 mm in length and 15 mm in width, these gauges
offer surfaces accurate up to ±2". They are available in sets of 6, 11, or 16, with Table 5.2 detailing the
specifications of each block in these sets. While most angles can be created in multiple ways,
minimizing error is essential, especially as the number of gauges used increases. The set of 16 gauges,
for instance, can form angles ranging from 0° to 99° in 1" increments, offering a total of 3,56,400
combinations. The laboratory master-grade set achieves accuracy up to one-fourth of a second, while
the inspection-grade set is accurate to ½", and the tool room-grade set maintains accuracy within 1".
The diagrams illustrate how angle gauges can be combined to achieve desired angles. Each gauge is
marked with the symbol '<', indicating the direction of the included angle. When adding angles, all '<'
symbols should align, while for subtraction, the gauge should be flipped to align the symbol in the
opposite direction.
Let's take an example: to create an angle of 42°35'20" using a 16-gauge set, we start by subtracting a 3°
block from a 45° block to get 42°. Then, combining a 30' gauge with a 5' gauge gives us 35'. Finally, we
use a 20" gauge. All gauges are added except for the 3° gauge, which is reversed and wrung with the
others for alignment on a surface plate. Calibrating angle gauge blocks is relatively simpler compared to
slip gauges because angles are self-proving portions of a circle. For instance, three equal portions of 90°
must equal 30° each. This breakdown system allows for the creation of masters of angle measurement,
with each combination proven by the same method. Additionally, the accuracy of angle gauges is less
sensitive to temperature changes compared to slip gauges. Therefore, a gauge block manufactured at
one temperature will retain the same angle at a different temperature, provided the readings are taken
after stabilization and the entire gauge is exposed to the same temperature.
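The 42°35'20" build described above can be verified by converting each gauge to seconds of arc and treating a reversed gauge as a negative contribution; the helper below is an illustrative Python sketch, not part of any standard:

def dms_to_seconds(deg=0, minutes=0, seconds=0):
    """Convert a degrees/minutes/seconds value to seconds of arc."""
    return deg * 3600 + minutes * 60 + seconds

# 45 deg, minus the reversed 3 deg gauge, plus 30', 5' and 20"
stack = [dms_to_seconds(45), -dms_to_seconds(3),
         dms_to_seconds(minutes=30), dms_to_seconds(minutes=5),
         dms_to_seconds(seconds=20)]
total = sum(stack)                 # reversed gauges subtract from the total

d, rem = divmod(total, 3600)
m, s = divmod(rem, 60)
print(d, "deg", m, "min", s, "sec")   # 42 deg 35 min 20 sec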
Angle gauges find various uses in precision measurement and quality control processes:
Direct Measurement of Die Insert Angles: Angle gauges are directly employed to measure the angle
in a die insert. The insert is positioned against an illuminated glass surface plate or inspection light
box. Using a combination of angle gauges, the built-up combination is carefully adjusted and
inserted in position so that no white light can be seen between the gauge faces and die faces. The
alignment is crucial, with all engraved Vs on the angle gauges in the same line for addition of
angles, while those on the other side are subtracted.
Utilization with Square Plate: Angle gauges are often paired with a square plate to enhance
versatility in their application. The square plate typically guarantees 90° angles within a specific
tolerance, such as 2 seconds of arc. For instances demanding exceptional accuracy, each corner of
the square plate is numbered, and a test certificate accompanies the angle gauge set, detailing the
measured angle of each corner. Figure 4.18 illustrates a setup to test the angle of a V-gauge with an
included angle of 102°, positioned against an illuminated glass surface plate. Slip gauges may be
used to facilitate the testing process.
Figure 4.13 Setup used for checking a V-gauge with an included angle of 102°
Angle gauges are also utilized in automotive and aerospace industries for setting angles in vehicle
components, engine parts, and aircraft structures.
Screw threads have a twofold purpose in engineering applications. Firstly, they aid in transmitting
power and motion, enabling mechanisms to operate efficiently. Secondly, they play a crucial role in
securely fastening two components together, often utilizing nuts, bolts, and studs to achieve this
connection. The variety of screw threads is extensive, encompassing variations in form such as included
angle, head angle, and helix angle, among others. This wide range of thread configurations allows
engineers to choose a suitable option for specific requirements, ensuring optimal performance in various
contexts.
When it comes to screw threads, they are broadly classified into two main types: external threads and
internal threads. External threads are found outside a cylindrical or conical surface, while internal
threads are within a hole or bore. Understanding these distinctions is fundamental in selecting the
appropriate threading solution for a given application.
Types of Screw Threads
a. V-screw Thread: Also known as the V-thread, it features a V-shaped profile with symmetrical flanks
meeting at a 60-degree angle. V-threads efficiently transmit power and motion while minimizing friction
and providing self-locking properties. Widely used in fasteners, machinery, and precision
instruments, V-threads offer reliability and ease of use across diverse applications.
b. American National Thread: Also known as the Unified Thread Standard (UTS), this is a
comprehensive system covering both external threads, found on bolts and screws, and internal
threads, used in nuts and tapped holes. Widely adopted in the United States and Canada for inch-
based threads, UTS provides a standardized framework for thread design and interchangeability
across various industries. Its versatility and widespread usage make it a cornerstone of engineering
and manufacturing in North America.
c. Metric Thread: Metric threads form the backbone of thread standards worldwide, rooted in the
International System of Units (SI). Embraced by nations adhering to the metric system, these threads
offer a seamless and universal approach to thread measurement and specification. Available in
coarse and fine pitch variations, metric threads cater to a broad spectrum of applications, ranging
from automotive and aerospace to machinery and consumer products.
d. Square Thread: Renowned for their efficiency in power transmission, square threads feature a square
cross-section that maximizes contact area and minimizes frictional losses. Ideal for applications
where axial movement of heavy loads is paramount, square threads deliver exceptional strength and
durability. Their precise geometry and high mechanical efficiency make them indispensable in
machinery requiring smooth and reliable operation, from lifting systems to precision instruments.
e. Acme Thread: Acme threads boast a distinctive trapezoidal profile, engineered to excel in
applications demanding robustness and precision. Widely employed in power screws and machinery
requiring efficient load transmission and high accuracy, Acme threads ensure reliable performance
under heavy loads and harsh operating conditions. Their rugged design and superior mechanical
properties make them indispensable in diverse industrial settings.
f. Whitworth Thread: In the 19th century, Whitworth threads represented a foundational thread
standard that played a pivotal role in industrialization and standardization. Although less prevalent
in modern applications, Whitworth threads continue to endure in legacy equipment and historical
contexts, particularly in the United Kingdom and its former colonies. Their enduring legacy is a
testament to their contribution to the evolution of thread engineering and manufacturing practices.
g. Knuckle Thread: Knuckle threads feature a rounded profile, designed for smooth operation and resistance to
damage. They are used in applications where durability and ease of use are essential, such as in
electrical fittings. Knuckle threads provide a secure fastening while minimizing the risk of thread
damage or stripping. Their rounded shape also promotes smoother engagement and disengagement,
making them ideal for frequent assembly and disassembly tasks.
h. Buttress Thread: Characterized by one flank perpendicular to the thread axis and the other flank
angled, buttress threads excel in applications requiring unidirectional load support and resistance to
axial forces. Commonly found in mechanisms such as jackscrews and vices, buttress threads ensure
stable and secure performance under extreme loading conditions. Their unique design provides
enhanced strength and rigidity, making them ideal for applications where safety and reliability are
paramount.
Terminologies of Screw Thread
1. External Thread:
An external thread is the screw thread formed on the outer surface of a workpiece, commonly seen
in bolts and studs. Conversely, an internal thread is created within the inner surface of a workpiece,
as seen in the thread of a nut.
2. Axis of Thread:
The axis of a thread, also called the pitch line, is an imaginary line that passes longitudinally
through the center of the screw. When the thread flanks are extended until they meet at an apex or
vertex, they form the fundamental triangle (an imaginary shape).
3. Angle of Thread:
The angle of a thread, also known as the included angle, is the angle between the flanks of a thread
measured in the axial plane. The flank angle is the angle formed between a flank of the thread and a
line perpendicular to the thread axis passing through the vertex of the fundamental triangle.
4. Pitch Line:
The axis of a thread, also known as the pitch line, is an imaginary line that runs longitudinally
through the center of the screw.
5. Fundamental triangle:
The fundamental triangle is an imaginary shape formed when the thread flanks are extended until
they meet, resulting in an apex or vertex.
6. Angle of Thread:
The angle of a thread, also referred to as the included angle, is the angle measured between the
thread flanks in the axial plane.
7. Flank Angle:
The flank angle is the angle formed between a flank of the thread and a line perpendicular to the
thread axis passing through the vertex of the fundamental triangle.
8. Pitch:
The pitch refers to the distance between two corresponding points on adjacent threads, measured
along the axis of the thread.
9. Lead:
Lead indicates the axial distance covered by the screw during one complete revolution around its
axis.
14. Addendum:
Addendum is the radial distance between the major diameter and the pitch line for external threads.
For internal threads, it's the radial distance between the minor diameter and the pitch line.
15. Dedendum:
Dedendum is the radial distance between the minor diameter and the pitch line for external threads.
For internal threads, it's the radial distance between the major diameter and the pitch line.
Key to the screw thread nomenclature figure:
1: Angular pitch
2: Pitch
3: Major diameter
4: Pitch diameter
5: Minor diameter
6: Pitch line
7: Apex
8: Root
9: Crest
10: Addendum
11: Dedendum
ISO standards play a pivotal role in ensuring the reliability and uniformity of screw threads across
various industries. Here are some key ISO standards related to metric screw threads:
1. ISO 68-1: This standard outlines the basic profile for ISO general-purpose metric screw threads,
providing fundamental design principles for thread geometry and dimensions.
2. ISO 261: It offers a general plan for ISO general-purpose metric screw threads, laying out the
essential parameters and specifications for thread designation and classification.
3. ISO 262: This standard specifies selected sizes for screws, bolts, and nuts for ISO general-
purpose metric screw threads, facilitating standardized sizing and interchangeability.
4. ISO 724: It defines basic dimensions for ISO general-purpose metric screw threads, establishing
the foundational measurements essential for thread manufacturing and application.
5. ISO 965-3: This standard focuses on tolerances for ISO general-purpose metric screw threads,
particularly deviations for constructional screw threads, ensuring consistency and quality in
thread production.
6. ISO 1502: This standard addresses gauges and gauging for ISO general-purpose metric screw
threads, providing definitions and symbols essential for accurate measurement and inspection.
Fits of threads
ISO threads adhere to a tolerance grade system, essential for specifying permissible variations in
thread dimensions. This system determines the fit between male (external) and female (internal)
threads, influencing the ease of assembly and disassembly, as well as the load-bearing capacity of
the connection. A complete tolerance class combines a tolerance grade number with a tolerance
position letter: uppercase letters (e.g., G or H) designate internal threads, while lowercase letters
(e.g., e, f, g, or h) designate external threads. Lower grade numbers signify tighter tolerances,
meaning smaller allowable variations in dimensions, while higher numbers represent looser
tolerances, allowing for larger variations. The tolerance classes specified for the internal and
external threads together determine the type of fit between them. There are three main types:
1. Clearance Fit: Characterized by a loose fit, enabling easy assembly and disassembly.
Commonly employed in applications where threads do not bear significant loads, such as cover
screws or access panels.
2. Interference Fit: Exhibiting a tight fit, resulting in a strong connection between threads. Ideal
for applications requiring high load transmission, such as in engines or gearboxes.
3. Medium Fit: Falling between clearance and interference fits, this type is versatile and widely
used across various applications where a balance between ease of assembly and load-bearing
capacity is desired.
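For illustration only, a designation such as M10 x 1.5 - 6H/6g (nominal diameter, pitch, internal and external tolerance classes) can be split into its parts; the Python parser below is a simple sketch that assumes this common single-class form and is not drawn from the ISO text:

import re

def parse_iso_thread(designation: str):
    """Split an ISO metric thread designation like 'M10x1.5-6H/6g' into its parts.

    A tolerance class combines a grade number (smaller = tighter) with a
    position letter: uppercase letters apply to internal threads, lowercase
    to external threads.
    """
    m = re.fullmatch(r"M(\d+(?:\.\d+)?)x(\d+(?:\.\d+)?)-(\d[A-Z])/(\d[a-z])",
                     designation.replace(" ", ""))
    if not m:
        raise ValueError("unrecognised designation")
    diameter, pitch, internal_cls, external_cls = m.groups()
    return {"nominal_diameter_mm": float(diameter),
            "pitch_mm": float(pitch),
            "internal_tolerance_class": internal_cls,
            "external_tolerance_class": external_cls}

print(parse_iso_thread("M10 x 1.5 - 6H/6g"))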
Errors in Threads
Errors in threads can stem from various sources, spanning initial manufacturing inconsistencies to
operational wear and tear. A comprehensive understanding of these errors is imperative for
upholding the integrity and functionality of threaded connections. Below is a detailed breakdown of
common thread errors:
1. Pitch Error:
This error arises from deviations in the distance between adjacent threads from the ideal pitch. Such
deviations can lead to improper engagement between mating threads, significantly impacting the fit
and functionality of the connection, potentially compromising its integrity.
2. Lead Error:
Lead errors manifest as inconsistencies in the axial advancement of the thread per revolution. These
variations result in uneven movement of mating components, predisposing to potential
misalignment issues. Consequently, the reliability and efficiency of the threaded connection may be
compromised.
3. Form Error:
Form errors present as irregularities in the contour or shape of the thread profile. Whether due to
excessive or insufficient material in specific areas, these irregularities impede proper mating
between threads and escalate stress concentrations. Rectifying form errors is pivotal to preserving
the structural integrity of the threaded connection.
7. Thread Misalignment: Thread misalignment occurs when an offset or angular deviation occurs
between mating threads. Mitigating thread misalignment is crucial to ensuring the smooth assembly
and operation of threaded components.
Pitch Errors
Pitch errors in threads arise when the distance between adjacent threads deviates from the intended
pitch. Such variations can greatly affect the interaction between mating threads, potentially resulting in
an inadequate fit and compromised connection functionality. Rectifying pitch errors is essential to
maintain the integrity and operational efficiency of threaded components. The pitch errors are
classified into
1. Progressive Error
2. Periodic Error
3. Drunken Error
4. Irregular Error
Progressive Error: The pitch increases or decreases progressively and uniformly along the length of the
thread. It is commonly traced to a pitch error in the lead screw of the thread-cutting machine or to an
incorrect gear train between the work and the lead screw.
Periodic Error: Characterized by a repetitive pattern of variations in thread spacing, this error can be
attributed to factors such as machine tool vibrations or inconsistencies in material properties. The
periodic nature of this error necessitates careful monitoring and adjustment to ensure uniformity in
threaded connections.
Drunken Error: A drunken error repeats once per revolution of the screw. The pitch measured parallel to
the axis is correct, but the thread is not cut to a true helix, so the advance of the thread within each turn
is irregular. Such errors usually arise from faults or instability in the machine during the threading
operation, highlighting the importance of maintaining operational stability and consistency.
Figure 4.19
Irregular Error: This category encompasses random or unpredictable variations in pitch that do not fit
into the aforementioned classifications. Such errors may stem from a combination of factors or
unknown causes, underscoring the complexity of mitigating and addressing irregularities in threaded
components. Vigilance and thorough analysis are essential in managing irregular errors to uphold the
quality and reliability of threaded connections.
The thimble's circumference is divided into 50 equal parts. Each division on the thimble represents a
movement of the screw by 0.5 mm / 50 = 0.01 mm.
The anvil and spindle are the two key components for measurement. The screw thread is placed between
the anvil and the spindle tip. By rotating the thimble, the screw drives the spindle forward until it gently
contacts the screw thread. A ratchet mechanism ensures consistent pressure during contact.
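As a small numerical check (assuming the 0.5 mm screw pitch implied by the figures above; the sample reading is invented), the least count and a complete micrometre reading can be computed with the Python sketch below:

SCREW_PITCH_MM = 0.5       # spindle advance per thimble revolution
THIMBLE_DIVISIONS = 50     # equal divisions around the thimble

least_count = SCREW_PITCH_MM / THIMBLE_DIVISIONS      # 0.01 mm per division

def micrometer_reading(sleeve_mm: float, thimble_divisions: int) -> float:
    """Total reading = main (sleeve) scale value + thimble divisions x least count."""
    return sleeve_mm + thimble_divisions * least_count

# e.g. the sleeve shows 5.5 mm and the thimble is on division 28
print(least_count, micrometer_reading(5.5, 28))   # 0.01, 5.78 mm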
Bench Micrometre
The bench micrometre is a specialized instrument utilized for highly accurate measurements of various
dimensions, including the outer diameter of screw threads. Unlike handheld micrometres, the bench
micrometre is securely mounted on a stable workbench or table, providing a rigid and vibration-free
platform for precise measurements. In measuring the outer diameter of a screw thread, the screw thread
sample is carefully positioned between the spindle and anvil of the bench micrometre. The spindle,
controlled by a calibrated screw mechanism, is gradually adjusted until it lightly contacts the crest of the
screw thread. This adjustment is typically facilitated by a precision micrometre head, allowing for
extremely fine adjustments to ensure accurate measurement.
The measurement is then read directly from the micrometre scale, which may be graduated in
increments as small as 0.001 millimetres (mm) or 0.0001 inches (in), depending on the level of
precision required. For example, if the micrometre scale reads 5.250 mm, it indicates that the outer
diameter of the screw thread measures precisely 5.250 mm.
Unlike the ordinary micrometre, the bench micrometre is fixed to a stable workbench or table, ensuring
precise measurements with minimal vibration. Its specialized design allows for higher precision,
particularly for small dimensions or tight tolerances. While both micrometers cover a wide range of
measurements, the bench micrometre is preferred for larger and more complex components. It's
commonly utilized in specialized manufacturing and quality control settings, whereas the handheld
micrometre offers versatility for various applications, including fieldwork and general workshop tasks.
The major diameter of the screw thread = S ± (R2 – R1)
The setting cylinder is a reference cylinder with a precisely known diameter (S). It's used to calibrate the
micrometre before measuring the screw thread.
Micrometre Readings:
R1: This is the micrometre reading when the two jaws of the micrometre are closed over the setting
cylinder.
R2: This is the micrometre reading when the two jaws of the micrometre are closed over the screw
thread.
A calibrated setting cylinder with a diameter approximately equal to the major diameter of the internal
thread serves as the reference standard for conducting measurements.
Initially, the instrument is set on this setting cylinder, and the corresponding reading of the dial indicator
is recorded. Subsequently, the floating head gauge mounted in the comparator is retracted to bring the
tips of the stylus into contact with the root of the screw thread under the pressure of the spring. The
reading of the dial indicator in this configuration is noted.
D represents the diameter of the cylindrical reference standard or calibrated setting cylinder, R₁ denotes
the reading of the dial indicator on the setting cylinder, and R2 signifies the reading of the dial indicator
on the screw thread,
Then, the major diameter of the internal thread can be determined as follows:
Major diameter of internal thread=D+(R2-R1)
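A minimal Python sketch of this comparative calculation is given below; the function name and the sample values are illustrative assumptions:

def major_diameter(setting_diameter_mm: float, r1_mm: float, r2_mm: float) -> float:
    """Major diameter = setting-cylinder diameter + (reading on thread - reading on cylinder)."""
    return setting_diameter_mm + (r2_mm - r1_mm)

# Example: 20.000 mm setting cylinder, indicator reads 1.250 mm on the cylinder
# and 1.262 mm on the thread
print(major_diameter(20.000, 1.250, 1.262))   # 20.012 mm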
Measurement of the minor diameter is conducted through a comparative process employing small V-
pieces that make contact with the root of the threads. These V-pieces are carefully selected to ensure
that their included angle is smaller than the angle of the thread. Positioned on either side of the screw,
with their bases against the micrometre faces, the V- pieces facilitate accurate measurement. Initially, a
reading is taken using a setting cylinder corresponding to the dimension being measured. Subsequently,
the threaded workpiece is mounted between the centres, and a second reading is obtained. The
difference between these two readings directly indicates the error in the minor diameter.
During the measurement procedure, the object is carefully positioned between the anvil and the spindle
tip of the micrometre. Subsequently, the thimble is rotated to drive the spindle forward until it lightly
contacts the object, ensuring consistent pressure throughout the measurement process. The measurement
is then determined by reading two scales: the sleeve scale and the thimble scale. Firstly, the graduation
line on the sleeve scale is noted, aligned with the edge of the thimble, providing the whole millimetre
value. Secondly, the number of divisions on the thimble scale past the reference point on the micrometre
body is counted, representing hundredths of a millimetre. Finally, the measured diameter is
obtained by summing the readings from the sleeve scale and the value from the thimble scale. This
comprehensive approach ensures accurate and precise measurements of the object's diameter using the
micrometre.
Figure 4.25 Measurement of the Minor Diameter of Internal threads using Taper Parallels
Taper parallels are inserted inside the thread and adjusted until they are perfectly aligned with each
other to measure the minor diameter of a thread. This adjustment ensures a firm contact is established
with the minor diameter of the thread. Once the taper parallels are correctly positioned, the diameter
over their outer edges is measured using a micrometre. This measured diameter corresponds to the
minor diameter of the thread.
When dealing with large minor diameters of internal threads, a combination of two rollers with known
diameters and a set of slip gauges is employed to measure the minor diameter. The process involves
spanning the inner diameter using the rollers and slip gauges. The minor diameter is calculated using the
formula:
Minor diameter = d1 + d2 + l
where d1 and d2 represent the diameters of the rollers, and
l denotes the length of the slip gauge set.
Figure 4.26 Internal diameters Measurement of screw thread with slip gauges and rollers
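Under the formula just given, the calculation is a simple sum; the values in the Python sketch below are illustrative only:

def minor_diameter(d1_mm: float, d2_mm: float, slip_gauge_length_mm: float) -> float:
    """Minor diameter = roller diameter 1 + roller diameter 2 + slip-gauge stack length."""
    return d1_mm + d2_mm + slip_gauge_length_mm

# Example: two 5 mm rollers spanned by a 32.45 mm slip-gauge stack
print(minor_diameter(5.0, 5.0, 32.45))   # 42.45 mm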
The wire method involves the use of small, hardened steel wires, commonly referred to as best-size
wires. These wires are carefully placed within the thread groove, and measurements are taken over them
to determine the effective diameter. The technique offers versatility and accuracy, making it a preferred
choice in thread measurement applications.
There are three primary variations of the wire method:
1. One-Wire Method: In this approach, a single wire is placed within the thread groove, and
measurements are taken over it to calculate the effective diameter.
2. Two-Wire Method: Utilizing two wires placed on opposite sides of the thread groove, this method
offers improved accuracy by accounting for potential thread angle variations.
3. Three-Wire Method: Considered the most accurate among the wire methods, the three-wire method
involves placing three wires at specific locations within the thread groove. By taking measurements
over these wires and applying a mathematical formula, the effective diameter can be precisely
determined.
Two-Wire Method
In this method, two steel wires with the same diameter are positioned on opposite sides of a screw, as
illustrated in Fig 4.27. The distance between the wires (M) is measured using a micrometre. Then, the
effective diameter is calculated using the formula De = T + P, where T is the dimension beneath the
wires and P is the correction factor.
T=M-2d
Where d is the diameter of the best-size wire.
To establish the relationships between two wires of equal size and a screw thread, refer to the figure.
The wires must be chosen in such a way that they touch the screw thread on the pitch line. It is
important to note that the equations mentioned earlier hold true only if this prerequisite is fulfilled.
From the geometry of a wire resting in the thread groove (x = included thread angle, d = wire diameter, p = pitch):
OF = (d/2) cosec (x/2)
FA = OF − OA = (d/2) [cosec (x/2) − 1]
FG = GC cot (x/2) = (p/4) cot (x/2)    (because BC = p/2 and GC = p/4)
Hence the correction factor is
P = 2 (FG − FA) = (p/2) cot (x/2) − d [cosec (x/2) − 1]
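Collecting these relationships, the effective diameter can be computed in a few lines; the Python sketch below assumes a 60° metric thread form and invented measured values:

import math

def effective_diameter_two_wire(M_mm, wire_d_mm, pitch_mm, included_angle_deg=60.0):
    """Two-wire method: De = T + P, with T = M - 2d and
    P = (p/2)*cot(x/2) - d*(cosec(x/2) - 1)."""
    half = math.radians(included_angle_deg / 2)
    T = M_mm - 2 * wire_d_mm
    P = (pitch_mm / 2) / math.tan(half) - wire_d_mm * (1 / math.sin(half) - 1)
    return T + P

# Example: M over wires = 10.45 mm, best-size wire d = 1.155 mm, pitch = 2 mm
print(round(effective_diameter_two_wire(10.45, 1.155, 2.0), 4))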
Designed for screw pitch measurement, the tool maker's microscope incorporates specialized features and capabilities to
ensure precise analysis and assessment of screw threads. The optical head, comprising the lens
system responsible for image magnification, is securely affixed to the supporting column via a
clamping screw, ensuring structural integrity during operation. The supporting column serves as the
vertical axis, providing stability and support for both the optical head and the stage. Facilitating
lateral movement of the stage, the micrometre screw enables precise scanning of the specimen
across the field of view. Complementarily, the micrometre screw for longitudinal movement grants
control over the vertical positioning of the stage, facilitating accurate focusing on the specimen. The
stage, acting as a flat platform, serves as the surface upon which the specimen is positioned for
examination. Finally, the base, situated at the bottom of the microscope, plays a crucial role in
providing overall stability to the instrument.
To utilize a tool maker's microscope effectively, follow a systematic approach. Firstly, place the
threaded workpiece onto the microscope stage, ensuring it is securely positioned. Then, align specific
points on the thread with the crosshairs of the microscope. Utilize the precision micrometres integrated
into the stages to make precise adjustments as needed for measurement. Once aligned, read the
measurements displayed on the microscope's micrometres and protractor. These readings provide
valuable data regarding lateral and longitudinal movements, as well as angular measurements if
necessary. Finally, calculate the differences between parameters such as diameter, pitch, or thread angle
to determine the characteristics of the workpiece accurately. Tool maker's microscopes are invaluable
tools for inspecting the dimensions and tolerances of precision-engineered components such as machine
parts, gears, and electronic circuits, ensuring high standards of quality and accuracy in manufacturing
processes.
A Screw Pitch Measuring Machine is used to check the pitch of threaded components such as screws,
bolts, nuts, and gears. Thread pitch, defined as the distance between adjacent threads, is a critical parameter that
directly influences the performance and functionality of threaded assemblies. The Screw Pitch
Measuring Machine employs advanced measurement techniques and precise instrumentation to ensure
reliable and precise determination of thread pitch, contributing to the overall quality and integrity of
threaded components in various industries.
The working principle of a Screw Pitch Measuring Machine revolves around the precise detection and
measurement of thread features to determine the pitch accurately. The operation of the Screw Pitch
Measuring Machine is facilitated by its spring-loaded head, which enables the stylus to traverse up the
flank of the thread and down into the subsequent space as it moves along the thread. Accurate
positioning of the stylus between the two flanks is ensured by maintaining alignment between the
pointer T and its index mark when readings are recorded. This alignment guarantees precision in
measurement. Upon achieving the correct position, the micrometre reading is noted. Subsequently, the
stylus is advanced into the next thread space by rotating the micrometre, allowing for a second reading
to be taken. The difference between these two readings corresponds to the pitch of the thread being
measured. This process is repeated sequentially along the entire length of the screw thread until
comprehensive coverage is achieved, ensuring thorough and precise measurement of the thread pitch.
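Because each pitch value is simply the difference between successive micrometre readings, reducing the data is straightforward; the readings in the Python sketch below are invented for illustration:

# Successive micrometre readings (mm) as the stylus drops into each thread space
readings = [0.000, 1.498, 3.001, 4.502, 6.000]

pitches = [round(b - a, 3) for a, b in zip(readings, readings[1:])]
mean_pitch = sum(pitches) / len(pitches)

print(pitches)               # individual pitch values along the screw
print(round(mean_pitch, 4))  # average pitch, nominally 1.5 mm here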
Screw pitch measuring machines offer numerous advantages in precision measurement processes. Their
foremost benefit lies in their ability to provide highly accurate measurements of thread pitch, ensuring
the quality and integrity of threaded components in manufacturing and quality control settings.
Additionally, these machines enhance operational efficiency by facilitating rapid and systematic
measurement procedures, thereby optimizing productivity and reducing inspection time. Many modern
screw pitch measuring machines feature automation capabilities, minimizing the potential for human
error and streamlining the measurement process. Moreover, their versatility enables them to measure a
wide range of threaded components, from screws and bolts to nuts and gears, making them invaluable
tools across various industries. Furthermore, these machines often come equipped with software for
generating comprehensive reports, enabling efficient documentation and traceability of measurement
data for quality control purposes. However, despite their numerous advantages, screw pitch measuring
machines may present challenges such as high initial costs, complex operating procedures requiring
specialized training, susceptibility to damage, limited applicability to certain thread types or
components, and the necessity for regular calibration and maintenance to ensure accuracy and
reliability.
The construction of a thread gauge micrometre is meticulously engineered for high precision and
reliability. It typically consists of a 60-degree pointed spindle and a double V-shaped swiveling anvil.
The spindle and anvil are precisely machined to ensure smooth movement and accurate alignment
during measurements. Additionally, the micrometre may feature a finely calibrated thimble and barrel
mechanism for precise adjustment and measurement reading. One of the key features of a thread gauge
micrometre is its ability to zero effectively. When the micrometre is zeroed, the pitch line of the spindle
and the anvil coincide, ensuring that measurements are taken from a consistent reference point. This
zeroing capability is crucial for achieving accurate and repeatable measurements across different
threaded components.
In practical use, the thread gauge micrometre is applied by placing the threaded component between the
spindle and the anvil. The micrometre is then gently closed until the thread comes into contact with both
the spindle and the anvil. By rotating the thimble or barrel, the user can precisely measure the pitch
diameter, major diameter, and minor diameter of the thread. The data obtained from a thread gauge
micrometre is vital for ensuring the quality and compatibility of threaded components in various
applications. For example, in manufacturing processes, accurate thread measurements are essential for
verifying the conformity of machined parts and ensuring proper assembly of mechanical systems.
Similarly, in quality control procedures, thread gauge micrometres play a critical role in inspecting and
validating the dimensions and tolerances of threaded components to meet industry standards and
specifications.
The Floating Carriage Diameter Measuring Machine is constructed with several notable features:
Robust Cast Iron Base: Ensures stability and durability for reliable performance.
Dimensional Stability: Designed to maintain precise measurements over time.
Precision Ground Internal Ways: Achieved through meticulous grinding to ensure utmost accuracy.
Micrometre Least Count: Typically set at 0.002 mm with a non-rotary spindle for fine measurement
resolution.
The machine comprises three primary units:
1. Base Casting: Houses a pair of meticulously aligned centers where the threaded workpiece is
securely mounted, constituting the first carriage.
2. Lower Carriage: Positioned atop the first carriage at a precise 90-degree angle, capable of parallel
movement along the thread axis.
3. Upper Carriage: Mounted on the lower carriage, this unit features V-ball slides enabling movement
perpendicular to the thread axis.
The upper carriage is equipped with a micrometre thimble featuring a graduated cylindrical scale,
enabling measurements with a resolution of up to 0.002 mm. Additionally, a fiducial indicator replaces
the fixed anvil on one end, facilitating consistent measurements under uniform pressure. Both the
micrometre thimble and the fiducial indicator are outfitted with specialized exchangeable anvils tailored
to accommodate various thread forms.
The Floating Carriage Diameter Measuring Machine operates on the following principles.
Setup: The workpiece with threads to be inspected is securely placed between two centers,
supported by pillars on the machine's base.
Adaptability: The distance between the centers can be adjusted to accommodate different lengths of
the threaded workpiece.
Alignment: After inserting the workpiece, the lower carriage is adjusted to ensure proper alignment
and positioning.
Calibration: The anvils of both the micrometre and the fiducial indicator are finely adjusted to
make precise contact with the threaded screw, while the fiducial indicator is set to zero.
Orientation: With the fiducial indicator and micrometre spindle aligned perpendicular to the line
between the centers, measurements are taken from the cylindrical scale on the micrometre thimble.
Consistency: The fiducial indicator, equipped with a single index line, is designed to maintain a
consistent measuring pressure for accurate and repeatable readings.
Additional Support: For measuring effective diameter, supplementary supports are provided above
the micrometre carriage to accommodate wires, V-pieces, etc.
Applications
Automotive Engineering: In automotive engineering, the concept of limits, fits, and tolerances
plays a critical role in ensuring the proper functioning of various components such as engine parts,
gears, and bearings. Engineers need to select appropriate fits to ensure smooth operation, minimal
wear, and optimal performance of the vehicle.
Manufacturing Industry: In the manufacturing industry, selective assembly techniques are used to
achieve interchangeability of parts. By understanding the principles of limits, fits, and tolerances,
manufacturers can produce components with precise dimensions, facilitating assembly processes and
minimizing production costs.
Machinery Design and Construction: The hole and shaft basis system is widely employed in
machinery design and construction to establish standardized fits between mating parts. Engineers
utilize Taylor's Principle to design plug and ring gauges, ensuring the quality and accuracy of
manufactured components as per standards such as IS 919-1993 and IS 3477-1973.
Quality Control and Inspection: Multi-gauging and inspection techniques are essential in quality
control processes to verify the dimensional accuracy and interchangeability of manufactured parts.
By employing advanced measurement tools and gauges, manufacturers can detect deviations from
specified tolerances and ensure compliance with design requirements.
Precision Instrumentation: Angular measurement plays a crucial role in precision instrumentation
and alignment tasks. Instruments such as universal bevel protractors, sine bars, spirit levels, and
angle gauges are utilized to measure and set precise angles, ensuring the alignment and accuracy of
mechanical systems and instruments.
Screw Thread Manufacturing: Screw thread measurements are vital in industries such as
aerospace, automotive, and machinery manufacturing. Engineers need to adhere to ISO grade and
fits standards to produce threads with specified tolerances and ensure compatibility between mating
parts. Measurement techniques such as the two-wire method and thread gauge micrometers are
employed to accurately measure thread dimensions and detect errors such as pitch errors.
Quality Assurance in Aerospace: In the aerospace industry, where precision is
paramount, the application of limits, fits, and tolerances is crucial in ensuring the reliability and
safety of aircraft components. Stringent quality assurance measures, including precise screw thread
measurements and angular alignments, are employed to meet stringent regulatory requirements and
ensure the integrity of aerospace systems.
Medical Device Manufacturing: In the manufacturing of medical devices and
equipment, adherence to strict tolerances is essential to ensure the functionality and safety of the
products. Limits, fits, and tolerances are carefully considered in the design and production of
components such as implants, surgical instruments, and diagnostic equipment to meet regulatory
standards and quality requirements.
Consumer Electronics: In the consumer electronics industry, where miniaturization and precision
are key, the application of limits, fits, and tolerances is critical in the design and manufacturing of
electronic components and assemblies. Precise fits and tolerances are necessary to ensure the proper
functioning and reliability of electronic devices such as smart phones, laptops, and tablets.
Energy Sector: In the energy sector, particularly in the production and maintenance
of power generation equipment such as turbines and generators, limits, fits, and tolerances are vital
for ensuring efficiency, reliability, and safety. Proper fits and tolerances are maintained during
manufacturing and assembly processes to minimize friction, wear, and the risk of failure, thereby
optimizing energy production and reducing downtime.
These applications demonstrate the diverse range of industries and sectors where the concepts of
limits, fits, and tolerances are applied to ensure quality, precision, and reliability in engineering
design, manufacturing, and maintenance processes.
Unit Summary
This unit covers essential concepts in precision engineering:
Understanding the significance of limits, fits, and tolerances.
Selective assembly techniques and interchangeability principles.
Introduction to hole and shaft basis systems for fit determination.
Taylor's principle and design considerations for plug and ring gauges.
Multi-gauging and inspection techniques for quality control.
Angular measurement instruments and their application.
ISO grade, thread fits, and measurement methods for screw threads.
5 Gear Measurement and Machine Tool Testing
Applications
This unit is designed to provide an in-depth understanding of gear measurement and testing, crucial
for ensuring the accuracy and functionality of gear mechanisms in mechanical systems. It begins
with the principles of analytical and functional inspection methods, including the rolling test, which
is used to evaluate the overall performance and quality of gears.
The unit then covers specific techniques for measuring tooth thickness using the constant chord
method and the use of gear tooth verniers, which are essential for maintaining the precise
dimensions required for proper gear function. Various types of gear errors, such as backlash, run out,
and composite errors, are also discussed, along with their impact on gear performance and how they
can be measured and minimized.
Additionally, the unit includes comprehensive coverage of machine tool testing procedures. It
explains how to test for parallelism, straightness, squareness, coaxiality, roundness, and run out, as
well as the alignment of machine tools according to IS standard procedures. This knowledge is vital
for ensuring that machine tools are operating correctly and producing parts within the desired
tolerances.
Apart from this, at the end of the unit, the overall broad concepts are provided as a unit summary.
Besides, a large number of multiple-choice questions as well as descriptive-type questions with
Bloom's taxonomy action verbs are included. A list of references and suggested readings is given in
the unit so that one can go through them for practice. It is important to note that for getting more
information on various topics of interest, some QR codes have been provided in different sections
which can be scanned for relevant supportive knowledge. Video resources along with QR codes are
mentioned for getting more information on various topics of interest which can be surfed or scanned
through mobile phones for viewing.
Rationale
Accurate gear measurement and testing are essential for the reliability and efficiency of mechanical
systems. Gears must be manufactured to precise specifications to ensure proper fit and function, and
any errors in gear production can lead to significant performance issues. Understanding the
principles and techniques of gear measurement and machine tool testing enables engineers to
produce high-quality gears and maintain the precision of manufacturing equipment. This unit
provides the necessary knowledge and skills to achieve these goals, preparing students and
professionals to excel in fields that require meticulous gear inspection and testing.
Gear Measurement
A gear is a mechanical device that transfers power using a toothed wheel. In this gear drive, the
driving and driven wheels are in direct contact with each other. Precision is the most critical aspect
of gear manufacturing, as gears achieve about 99 percent transmission efficiency. Therefore,
accurate testing and measurement of gears are essential. To thoroughly inspect a gear, it is important
to focus on the raw materials used in production, as well as the machining, heat treatment, and tooth
finishing of the blanks. Additionally, gear blanks must be evaluated for tooth thickness and
dimensional accuracy across various gear forms.
Gear Terminologies
1) Pitch Surface: The surface of a theoretical rolling cylinder (or cone, etc.) that represents the toothed
gear being replaced.
2) Pitch Circle: A cross-section of the pitch surface taken perpendicular to its axis.
3) Addendum Circle: The circle defining the outermost points of the gear teeth in the right section.
4) Root (or Dedendum) Circle: The circle defining the base of the spaces between the gear teeth in a
right section.
5) Addendum: The radial distance between the pitch circle and the addendum circle.
6) Dedendum: The radial distance between the pitch circle and the root circle.
7) Clearance: The difference between the Dedendum of one gear and the addendum of its mating gear.
8) Face of a Tooth: The portion of the tooth surface extending outward from the pitch surface.
9) Flank of a Tooth: The portion of the tooth surface extending inward from the pitch surface.
10) Circular Thickness (or Tooth Thickness): The thickness of the tooth measured along the pitch
circle, represented as an arc length.
11) Tooth Space: The distance between adjacent teeth measured along the pitch circle.
12) Backlash: The difference between the circular thickness of one gear and the tooth space of its
mating gear.
13) Circular Pitch (p): The combined width of a tooth and a space measured along the pitch circle,
defined mathematically as p = πD/N, where D is the pitch diameter and N is the number of teeth.
14) Diametral Pitch (P): The number of teeth per unit of pitch diameter, calculated as P = N/D.
15) Module (m): The ratio of the pitch diameter to the number of teeth (m = D/N), with the pitch diameter typically
given in inches or millimeters. In the case of inches, the module is the inverse of the diametral pitch.
(A short numerical sketch of these three relationships is given after this list.)
16) Fillet: The small radius connecting the tooth profile to the root circle.
17) Pinion: The smaller gear in any pair of mating gears, with the larger gear simply referred to as the
gear.
18) Velocity Ratio: The ratio of the rotational speed of the driving gear to that of the driven gear.
19) Pitch Point: The tangency point of the pitch circles in a pair of mating gears.
20) Common Tangent: The line tangent to the pitch circles at the pitch point.
21) Line of Action: The line perpendicular to the mating tooth profiles at their contact point.
22) Path of Contact: The trajectory traced by the contact point of a pair of tooth profiles.
23) Pressure Angle: The angle between the common normal at the point of tooth contact and the
common tangent to the pitch circles; it is also the angle between the line of action and the common
tangent.
24) Base Circle: An imaginary circle in involute gearing used to generate the involute curves forming
the tooth profiles.
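As noted in items 13-15, the pitch relationships can be checked with a short Python sketch (the gear size chosen is illustrative and the metric module convention is assumed):

import math

def circular_pitch(pitch_diameter, teeth):
    """p = pi * D / N"""
    return math.pi * pitch_diameter / teeth

def diametral_pitch(pitch_diameter, teeth):
    """P = N / D"""
    return teeth / pitch_diameter

def module(pitch_diameter, teeth):
    """m = D / N (pitch diameter in mm for the metric convention)"""
    return pitch_diameter / teeth

# Example: a 40-tooth gear with an 80 mm pitch diameter
D, N = 80.0, 40
print(round(circular_pitch(D, N), 3))  # ~6.283 mm
print(diametral_pitch(D, N))           # 0.5 teeth per mm
print(module(D, N))                    # module 2 mm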
Rolling Test
A rolling test is an essential technique in gear measuring that's used to assess the tooth contact pattern
and meshing properties of gears in real-world operations.
In this test, two mating gears engage dynamically while rotating, allowing the assessment of essential
parameters like tooth contact pattern, contact ratio, pressure angle and tooth profile, backlash, noise, and
vibration.
A highly effective and convenient method for measuring gear thickness involves using two or three
different-sized rollers. This approach checks for variations at multiple points on the gear teeth,
providing accurate measurements and identifying potential issues.
By conducting rolling tests as part of gear inspection procedures, manufacturers can ensure that their
gears meet design specifications and deliver reliable operation in various applications.
This test is done using a Parkinson Gear Tester. It brings out any errors in tooth profile, pitch, and
concentricity of the pitch line.
As the gears mesh and rotate together, any deviations in the tooth profile, pitch, or concentricity of the
test gear cause fluctuations in the center distance.
These variations are detected by a dial indicator or an electronic sensor attached to the apparatus, which
records the changes.
The outcome of the rolling test is a precise measurement of the gear's manufacturing quality,
highlighting any discrepancies that may affect its performance.
Chordal Thickness:
Chordal thickness is the straight-line distance (chord) between two points on the gear tooth, extending
from the pitch circle across the tooth profile. This method simplifies the measurement process while
providing sufficiently accurate results for most applications. Chordal thickness can be measured using
calipers or specialized gear-measuring instruments, making it a practical solution for many gear
inspection processes.
Tooth Thickness and Depth Variation: The thickness of the tooth (w) and the depth (d) can vary
depending on the number of teeth when using the gear tooth vernier calliper method. Despite these
variations, for a given tooth size, contact with the rack consistently occurs at points A and B whenever
the gear rotates.
Consider △DAE with ∠ADE = ϕ. Then
cos ϕ = AD/DE, so AD = DE ⋅ cos ϕ
From Fig. 5.1, l(DE) = l(DF) = arc DG = 1/4 of the circular pitch = (1/4) ⋅ π ⋅ m
Thus,
AD = (1/4) ⋅ π ⋅ m ⋅ cos ϕ
Calculating the chord length AB: consider △DCA with ∠CAD = ϕ. Then
cos ϕ = CA/AD, so CA = AD ⋅ cos ϕ = (1/4) ⋅ π ⋅ m ⋅ cos²ϕ
From Fig. 5.1,
chord length AB = 2 ⋅ l(CA) = (π ⋅ m/2) ⋅ cos²ϕ
The depth h can be calculated as follows. From △DAC,
sin ϕ = CD/AD, so CD = AD ⋅ sin ϕ = (1/4) ⋅ π ⋅ m ⋅ sin ϕ ⋅ cos ϕ
GD = GC + CD, where GD = addendum = m and GC = depth = h
Therefore,
m = h + (1/4) ⋅ π ⋅ m ⋅ cos ϕ ⋅ sin ϕ
h = m − (1/4) ⋅ π ⋅ m ⋅ cos ϕ ⋅ sin ϕ
Depth h = m (1 − (π/4) ⋅ cos ϕ ⋅ sin ϕ)
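Using the expressions just derived, the constant chord width and the depth at which it is measured follow directly; the module and pressure angle in the Python sketch below are illustrative:

import math

def constant_chord(module_mm: float, pressure_angle_deg: float):
    """Return (chord AB, depth h) for the constant chord method:
    AB = (pi*m/2) * cos^2(phi),  h = m * (1 - (pi/4) * cos(phi) * sin(phi))."""
    phi = math.radians(pressure_angle_deg)
    chord = (math.pi * module_mm / 2) * math.cos(phi) ** 2
    depth = module_mm * (1 - (math.pi / 4) * math.cos(phi) * math.sin(phi))
    return chord, depth

# Example: module 4 mm, 20 degree pressure angle
ab, h = constant_chord(4.0, 20.0)
print(round(ab, 4), round(h, 4))   # ~5.5485 mm, ~2.9903 mm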
The concept of a constant chord is crucial in gear measurement and ensures that the gear teeth maintain
consistent contact points with the basic rack, leading to reliable and accurate gear operation.
Errors in Gears
1. Profile Error: The profile error is defined as the maximum distance between any point on the actual
tooth profile and the design profile. This error indicates how much the actual gear tooth shape
deviates from the intended design, affecting the smoothness and efficiency of gear meshing.
2. Pitch Error: Pitch error refers to the difference between the actual pitch (the distance between
corresponding points on adjacent teeth) and the design pitch. Accurate pitch is crucial for ensuring
proper gear engagement and minimizing noise and vibration during operation.
3. Cyclical Error: Cyclical error is a recurring deviation that occurs with each complete revolution of
the gear. This type of error can lead to periodic variations in gear performance, potentially causing
torque and speed transmission fluctuations.
4. Run out: Run out is the total variation in measurement observed on a fixed indicator as the contact
points are rotated around a fixed axis without any axial movement. It represents the total deviation
of the gear surface from a true circular path, impacting gear accuracy and performance.
5. Eccentricity: Eccentricity is a measure of how much the center of the gear deviates from its
intended rotational axis. It is often calculated as half of the radial runout, indicating the off - center
positioning of the gear, which can cause uneven wear and load distribution.
6. Wobble: Wobble is the measurement of runout at a specified distance from the rotational axis, taken
parallel to the axis. It indicates the tilt or misalignment of the gear face relative to its axis of rotation,
affecting gear alignment and engagement.
7. Radial Runout: Radial runout measures the deviation perpendicular to the rotational axis. It shows
how much the gear teeth deviate from a true circular path in the radial direction, which can impact
the smoothness of gear rotation and load distribution.
8. Undulation: Undulation refers to periodic deviations of the actual tooth surface from the intended
design surface. These wave-like irregularities can affect the gear's ability to transmit motion
smoothly and efficiently, leading to variations in contact stress and wear.
9. Axial Run out: Axial run out is the deviation measured parallel to the rotational axis while the gear
is rotating. It indicates the axial displacement of the gear teeth, affecting the alignment and
engagement with mating gears and potentially leading to increased wear and noise.
By understanding and controlling these various types of errors, it is possible to ensure higher precision,
better performance, and longer life for gear systems.
The inspection of gears involves identifying potential manufacturing errors in the following elements:
i. Run out
Pitch-circle eccentricity refers to the deviation of the gear's pitch circle from its true center, causing the
gear to vibrate periodically with each rotation. This vibration can lead to premature gear tooth failure.
To measure run out, eccentricity testers are used.
The testing process involves mounting the gear on a mandrel. The tester's dial indicator is equipped with
a specially designed tip that matches the gear's module. This tip is placed between the gear's tooth
spaces. As the gear is rotated tooth by tooth, the dial indicator measures any deviations, revealing
variations in the gear's pitch circle. This method ensures accurate detection of eccentricity, allowing for
corrective measures to be taken to prevent gear failure.
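As a rough illustration of how such dial-indicator readings might be reduced, the Python sketch below uses hypothetical readings and illustrative names: radial run out is taken as the spread of the tooth-space readings over one revolution, and eccentricity as half of that run out, as noted in the error list above.

def runout_and_eccentricity(dial_readings_mm):
    # Radial run out = spread of the dial readings taken in successive tooth spaces;
    # eccentricity is taken as half of the radial run out.
    runout = max(dial_readings_mm) - min(dial_readings_mm)
    return runout, runout / 2

# Hypothetical readings (mm), one per tooth space, over a full revolution
readings = [0.012, 0.018, 0.025, 0.031, 0.028, 0.020, 0.011, 0.006, 0.004, 0.008]
runout, ecc = runout_and_eccentricity(readings)
print(f"radial run out = {runout:.3f} mm, eccentricity = {ecc:.3f} mm")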
ii. Backlash
Backlash refers to the amount of rotation a gear can undergo before its non-working flank comes into
contact with the teeth of its mating gear. It is measured at the pitch circle, at the point where the gears
mesh most tightly.
There are two categories of backlash:
1. Circumferential Backlash
2. Normal Backlash
To calculate backlash, the following steps are performed:
1) Lock one of the two mating gears in place.
2) Rotate the other gear forward and backward.
3) Use a comparator to measure the maximum displacement during this rotation.
Circumferential backlash is this maximum displacement, measured tangentially to the reference
cylinder at the point where the comparator stylus contacts the gear. This method gives an accurate
measurement of the rotational play between the gear teeth, helping to identify and correct excessive
backlash that can lead to gear noise, wear, and decreased accuracy.
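The sketch below shows one way this reduction might look in Python; the comparator readings are hypothetical, and the conversion from circumferential to normal backlash using the cosine of the pressure angle is a standard spur-gear approximation assumed here rather than stated above.

import math

def circumferential_backlash(comparator_readings_mm):
    # Circumferential backlash = maximum displacement recorded by the comparator
    # while the free gear is rocked against the locked gear.
    return max(comparator_readings_mm) - min(comparator_readings_mm)

def normal_backlash(circ_backlash_mm, pressure_angle_deg):
    # Assumed spur-gear approximation: jn = jt * cos(phi)
    return circ_backlash_mm * math.cos(math.radians(pressure_angle_deg))

jt = circumferential_backlash([0.000, 0.045, 0.012, 0.048, 0.003])  # hypothetical readings (mm)
print(f"circumferential backlash = {jt:.3f} mm")
print(f"normal backlash (20 deg pressure angle) = {normal_backlash(jt, 20.0):.3f} mm")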
iii. Composite
Composite testing of gears involves evaluating the variation in center distance as a gear meshes tightly
with a master gear. This testing method helps identify errors in gear manufacturing by measuring how
gears interact under operating conditions. In composite gear checking, two main types of variations are
assessed: tooth-to-tooth composite variation and total composite variation.
Composite testing provides a comprehensive gear quality assessment by highlighting localized and
overall manufacturing errors. By measuring tooth-to-tooth and total composite variations,
manufacturers can identify and rectify issues that impact gear performance, ensuring higher precision,
reliability, and longevity in gear applications.
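Assuming the centre-distance trace is sampled uniformly over one revolution, the two quantities can be extracted as in the Python sketch below (hypothetical data and illustrative names): total composite variation is the overall spread of the trace, and tooth-to-tooth composite variation is the largest spread found within any single tooth pitch.

def composite_variation(center_distance_mm, teeth):
    # Total composite variation: spread of the whole trace (one revolution).
    total = max(center_distance_mm) - min(center_distance_mm)
    # Tooth-to-tooth composite variation: largest spread within one tooth pitch.
    per_tooth = len(center_distance_mm) // teeth  # readings per tooth pitch
    tooth_to_tooth = max(
        max(center_distance_mm[i:i + per_tooth]) - min(center_distance_mm[i:i + per_tooth])
        for i in range(len(center_distance_mm) - per_tooth + 1)
    )
    return tooth_to_tooth, total

# Hypothetical centre-distance readings (mm) from a gear rolled against a master gear
trace = [100.00, 100.01, 100.02, 100.01, 100.03, 100.05, 100.04, 100.02,
         100.01, 100.00, 100.02, 100.03]
ttcv, tcv = composite_variation(trace, teeth=4)
print(f"tooth-to-tooth composite variation = {ttcv:.3f} mm, total = {tcv:.3f} mm")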
the foundational accuracy and alignment of the machine tool components before they are subjected
to operational stresses.
2. Dynamic tests
Dynamic evaluations involve tests conducted under dynamic loading conditions, where the
alignment and performance of the machine tool are examined while it is in operation. These tests are
crucial for understanding how the machine tool behaves under actual working conditions, including
the effects of cutting forces and vibrations. Dynamic evaluations provide a comprehensive
assessment of the machine's operational accuracy and stability.
2. Flatness:
Flatness testing determines whether a surface lies in a single plane. This is crucial for surfaces that
require uniform contact with other components or workpieces. Flatness errors can lead to
inaccuracies in machining operations and poor surface finishes. This test ensures that the machine's
worktables, bases, and other flat surfaces are correctly aligned.
5. Rotations:
Rotational tests assess the accuracy of rotating components, such as spindles and rotary tables.
These tests measure the concentricity, run out, and angular positioning accuracy. Accurate rotational
movements are vital for operations like drilling, milling, and turning, where precise circular
motion is required.
6. Coaxiality
Coaxiality tests determine whether multiple components share a common axis. This is critical for
operations involving rotating parts, such as spindles and chucks, where misalignment can cause
vibrations, uneven wear, and inaccuracies in the machined product. Coaxiality ensures that all
relevant components align perfectly along the same axis, maintaining the integrity of the rotational
movements and the overall machining process.
2. Straightness Tests
Laser Setup: Position the laser emitter to establish a straight reference line. The laser
produces a real straight line, which is superior to the imaginary line provided by traditional
alignment telescopes.
Measurement: Use a laser receiver or detector to measure deviations along the machine's bed
ways or movement paths. Record the deviations as the laser detects any misalignment over
long distances.
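One common way to reduce such readings, sketched below in Python with hypothetical data, is to fit a least-squares reference line through the recorded deviations and report the straightness error as the band containing all residuals; the actual evaluation method and tolerances should follow the applicable IS test chart.

def straightness_error(positions_mm, deviations_um):
    # Fit a least-squares reference line and return the residual band (max - min).
    n = len(positions_mm)
    mean_x = sum(positions_mm) / n
    mean_y = sum(deviations_um) / n
    sxx = sum((x - mean_x) ** 2 for x in positions_mm)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(positions_mm, deviations_um))
    slope = sxy / sxx
    residuals = [y - (mean_y + slope * (x - mean_x))
                 for x, y in zip(positions_mm, deviations_um)]
    return max(residuals) - min(residuals)

# Hypothetical laser-receiver readings along the bed: position (mm) vs. deviation (um)
pos = [0, 250, 500, 750, 1000, 1250, 1500]
dev = [0.0, 2.0, 3.5, 5.0, 4.0, 6.5, 8.0]
print(f"straightness error = {straightness_error(pos, dev):.1f} um over {pos[-1]} mm")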
3. Flatness Tests
Surface Check: Place the laser emitter perpendicular to the surface to be tested. Direct
displacement measurements are taken using the laser receiver.
Recording Deviations: Move the laser receiver across the surface and record any deviations
from flatness. The data can be used to create a detailed map of the surface profile.
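A corresponding Python sketch for flatness, again with hypothetical readings, fits a least-squares reference plane through the recorded grid and reports the flatness error as the residual band; NumPy is assumed here purely for the plane fit.

import numpy as np

def flatness_error(points_xyz):
    # Fit a least-squares plane z = a + b*x + c*y and return the residual band.
    pts = np.asarray(points_xyz, dtype=float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return residuals.max() - residuals.min()

# Hypothetical grid readings: (x mm, y mm, deviation um)
grid = [(0, 0, 0.0), (100, 0, 1.2), (200, 0, 2.5),
        (0, 100, 0.8), (100, 100, 1.9), (200, 100, 3.1),
        (0, 200, 1.5), (100, 200, 2.4), (200, 200, 3.8)]
print(f"flatness error = {flatness_error(grid):.2f} um")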
4. Parallelism Tests
Baseline Setup: Establish a laser baseline parallel to the reference surface or component.
Measurement: Using the laser receiver, measure the distance from the laser line to the
component at various points to ensure parallelism. Adjust the components as necessary to
achieve the required parallelism.
5. Squareness Tests
Optical Square: Use an optical square in conjunction with the laser equipment to establish a
square reference relative to the laser baseline.
Measurement: Position the laser receiver at various points to measure the squareness of
components relative to the established laser line. Adjust as necessary to correct any
deviations.
6. Coaxiality Tests
Alignment Setup: Align the laser emitter with the rotational axis of the machine tool.
Measurement: Use the laser receiver to measure the coaxiality of rotating components, such
as spindles and tailstocks. Ensure that all components share a common axis and adjust if
necessary.
7. Operational Verification
Component Alignment: Check the alignment of multiple components to a predetermined
straight line established by the laser. This is particularly important for components spaced at
long distances.
Machined Surface Check: After adjustments, perform a test run to verify the accuracy of
the machining process.
8. Documentation:
Record all measurements and deviations observed during the testing process.
Compare the results with the permissible limits specified in the IS standards.
The Indian Standards (IS) for alignment tests on machine tools are specific to different types of
machines and their components. Here are some key IS standards that outline the procedures and
requirements for alignment tests:
1. IS 2063:1988 - "Test Charts for Lathes" This standard specifies the test procedures for general-
purpose lathes, covering geometric and practical tests.
2. IS 2200:1988-"Test Charts for Milling Machines" This standard provides the test procedures for
milling machines, including tests for geometric accuracy.
3. IS 12449:1988-"Test Charts for Vertical Turning and Boring Machines" This standard outlines
the alignment tests for vertical turning and boring machines.
4. IS 13022:1991- "Test Charts for Radial Drilling Machines" This standard covers the alignment
tests for radial drilling machines.
5. IS 12181 (Parts 1 to 4):1992- "Acceptance Conditions for Vertical Turning and Boring Lathes"
These standards specify the acceptance conditions, including alignment tests for vertical turning and
boring lathes.
6. IS 13275:1992- "Test Charts for Horizontal Boring and Milling Machines" This standard outlines
the tests for checking the accuracy of horizontal boring and milling machines.
7. IS 13936 (Parts 1 to 5):1994- "Acceptance Conditions for Machining Centres" These standards
include tests for the accuracy and alignment of machining centers.
These IS standards provide detailed procedures and acceptable tolerances for conducting alignment tests
on various types of machine tools, ensuring they meet the necessary precision and operational
requirements.
Applications
Gear Measurement
Quality Control in Manufacturing
Ensures gears meet design specifications and tolerances, crucial for the reliable performance of
mechanical systems in various industries.
Prevents defects and reduces waste in production, leading to cost savings and improved efficiency.
Automotive Industry
Essential for the production of transmission systems, differentials, and other critical components.
Helps in maintaining smooth and efficient power transfer, reducing noise and vibration in vehicles.
Aerospace Industry
Critical for the manufacturing of precision gears used in aircraft engines, landing gear, and control
systems.
Ensures safety and reliability by adhering to stringent quality standards.
Industrial Machinery
Applied in the production and maintenance of gears for heavy machinery, conveyors, and robotic
systems.
Enhances the durability and performance of industrial equipment.
Consumer Electronics
Used in the manufacturing of gears for household appliances, power tools, and electronic devices.
Ensures smooth operation and longevity of consumer products.
Medical Devices
Critical for producing high-precision gears used in medical equipment such as MRI machines,
surgical robots, and diagnostic devices.
Ensures accurate and reliable performance in healthcare applications.
Woodworking
Ensures the proper functioning and alignment of woodworking machinery, leading to precise cuts
and improved product quality.
Reduces material waste and increases efficiency in furniture and cabinetry production.
Unit Summary
This unit focuses on the critical aspects of gear measurement and the testing of machine tools, ensuring
high precision and reliability in mechanical systems.
Gear Measurement involves both analytical and functional inspections. Analytical inspection uses
precise instruments and mathematical methods to evaluate gear parameters theoretically, while
functional inspection assesses gear performance under actual operating conditions. An essential
practical evaluation in gear measurement is the rolling test, which checks the smoothness and accuracy
of gear operation by observing its interaction with a master gear or gear rolling tester.
One of the precise methods for measuring gear teeth is the constant chord method, which measures
tooth thickness at a specific point to ensure proper meshing and load distribution. Additionally, the gear
tooth vernier is a specialized tool that provides quick and accurate measurement of tooth thickness at the
pitch circle diameter, crucial for maintaining quality control.
Understanding and identifying errors in gears is vital for their proper function. Backlash, the clearance
between mating gear teeth, must be controlled to avoid binding and ensure smooth operation. Runout,
the deviation from the ideal circular path, can cause vibration and noise, while composite errors, the
combined effects of various individual gear errors, can significantly impact overall gear performance
and reliability.
Machine Tool Testing includes evaluating several key parameters to ensure the precision and alignment
of machine tools. Parallelism ensures that surfaces or axes are parallel, which is crucial for accurate
machining. Straightness verifies that components like guideways and spindles maintain a straight line,
essential for precision work. Squareness checks the
perpendicularity of surfaces and axes, guaranteeing that machined parts meet design specifications.
Coaxiality measures the alignment of multiple axes or cylindrical features to a common centerline,
important for the proper functioning of rotating parts. Roundness assesses the circularity of cylindrical
parts to ensure uniform diameter and surface finish. Run-out evaluates the deviation of a rotating part
from its intended axis of rotation, affecting balance and functionality.
Finally, alignment testing of machine tools as per Indian Standard (IS) procedures ensures that all
components are correctly aligned to meet standardized performance and precision criteria. This
comprehensive approach to gear measurement and machine tool testing is essential for maintaining the
quality, efficiency, and reliability of mechanical systems.