SMP017 Calibration of Instruments
1.0 PURPOSE:
This document is intended to provide a standard procedure for the calibration of field instruments.
2.0 SCOPE:
This procedure covers all linear instruments.
4.0 REFERENCES:
ISO 17025, ANSI/NCSL Z540-1
Agy, D. et al., Calibration: Philosophy in Practice, Second Edition, Fluke Corporation, Everett, WA, 1994.
Lipták, Béla G. et al., Instrument Engineers' Handbook – Process Measurement and Analysis, Volume I, Fourth Edition, CRC Press, New York, NY, 2003.
“Micro Motion ELITE Coriolis Flow and Density Meters”, product data sheet DS-00374 revision L,
Micro Motion, Inc., June 2009.
5.0 ANNEXURE:
A, B, C, D, E
6.0 DISTRIBUTION:
10.0 DEFINITIONS:
Nil
11.0 PPEs REQUIRED:
1. Safety Helmet
2. Safety Goggles
3. Safety Shoes
4. Nose Mask
5. Ear Plugs/Muffs
12.0 PREREQUISITES:
12.1 Tools
D-Type Spanner Set (6-32 mm)
Ring Type Spanner Set (6-32 mm)
Screwdriver Set (star & flat, up to 250 mm length)
Watchmaker's tools
12.2 Test Equipment
HART Communicator
Multimeter
12.3 Others
M-Seal
PVC Tape
Insulation Tape / Teflon Tape
Waste Cloth
To calibrate an instrument means to check and adjust (if necessary) its response so the output accurately
corresponds to its input throughout a specified range. In order to do this, one must expose the
instrument to an actual input stimulus of precisely known quantity. For a pressure gauge, indicator, or
transmitter, this would mean subjecting the pressure instrument to known fluid pressures and comparing
the instrument response against those known pressure quantities. One cannot perform a true calibration
without comparing an instrument’s response to known, physical stimuli.
To range an instrument means to set the lower and upper range values so it responds with the desired
sensitivity to changes in input. For example, a pressure transmitter set to a range of 0 to 200 PSI (0 PSI
= 4 mA output ; 200 PSI = 20 mA output) could be re-ranged to respond on a scale of 0 to 150 PSI (0
PSI = 4 mA ; 150 PSI = 20 mA). In analog instruments, re-ranging could (usually) only be
accomplished by re-calibration, since the same adjustments were used to achieve both purposes. In
digital instruments, calibration and ranging are typically separate adjustments (i.e. it is possible to re-
range a digital transmitter without having to perform a complete recalibration), so it is important to
understand the difference.
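As an illustration of this distinction (not part of the original procedure), the following Python sketch models an ideal linear transmitter; the function name and values are hypothetical. Re-ranging a digital transmitter amounts to nothing more than changing the LRV/URV parameters passed to such a function, whereas an analog transmitter would require physical re-adjustment:

def transmitter_output_ma(pressure_psi, lrv_psi, urv_psi, lrv_ma=4.0, urv_ma=20.0):
    # Ideal linear response: output current proportional to position within the input range
    return lrv_ma + (pressure_psi - lrv_psi) / (urv_psi - lrv_psi) * (urv_ma - lrv_ma)

print(transmitter_output_ma(100.0, 0.0, 200.0))   # 12.0 mA with a 0-200 PSI range
print(transmitter_output_ma(100.0, 0.0, 150.0))   # about 14.67 mA after re-ranging to 0-150 PSI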
The purpose of calibration is to ensure the input and output of an instrument correspond to one another
predictably throughout the entire range of operation. We may express this expectation in the form of a
graph, showing how the input and output of an instrument should relate:
This graph shows how any given percentage of input should correspond to the same percentage
of output, all the way from 0% to 100%.
Things become more complicated when the input and output axes are represented by units of
measurement other than “percent.” Take for instance a pressure transmitter, a device designed to
sense a fluid pressure and output an electronic signal corresponding to that pressure. Here is a graph
for a pressure transmitter with an input range of 0 to 100 pounds per square inch (PSI) and an
electronic output signal range of 4 to 20 milliamps (mA) electric current:
Although the graph is still linear, zero pressure does not equate to zero current. This is called
a live zero, because the 0% point of measurement (0 PSI fluid pressure) corresponds to a non-zero
(“live”) electronic signal. 0 PSI pressure may be the LRV (Lower Range Value) of the transmitter’s
input, but the LRV of the transmitter’s output is 4 mA, not 0 mA.
Any linear, mathematical function may be expressed in “slope-intercept” equation form:
y=mx+b
Where,
y=Vertical position on the graph
x=Horizontal position on graph
m=Slope of line
b=Point of intersection between the line and the vertical (y) axis
This instrument’s calibration is no different. If we let x represent the input pressure in units
of PSI and y represent the output current in units of milliamps, we may write an equation for this
instrument as follows:
y=0.16x+4
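The constants in this equation follow directly from the range values (a worked check, added here for clarity):
m = (20 mA - 4 mA) / (100 PSI - 0 PSI) = 0.16 mA/PSI
b = 4 mA (the output at 0 PSI input)
As a check, at mid-scale the equation gives y = 0.16 × 50 + 4 = 12 mA, exactly half-way between 4 mA and 20 mA.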
On the actual instrument (the pressure transmitter), there are two adjustments which let us
match the instrument’s behavior to the ideal equation. One adjustment is called the zero while
the other is called the span. These two adjustments correspond exactly to the b and m terms of
the linear function, respectively: the “zero” adjustment shifts the instrument’s function vertically
on the graph, while the “span” adjustment changes the slope of the function on the graph. By
adjusting both zero and span, we may set the instrument for any range of measurement within the
manufacturer's limits.
The relation of the slope-intercept line equation to an instrument’s zero and span adjustments
reveals something about how those adjustments are actually achieved in any instrument. A “zero”
adjustment is always achieved by adding or subtracting some quantity, just like the y-intercept term
b adds or subtracts to the product mx. A “span” adjustment is always achieved by multiplying or
dividing some quantity, just like the slope m forms a product with our input variable x.
Zero adjustments typically take one or more of the following forms in an instrument:
• Bias force (spring or mass force applied to a mechanism)
• Mechanical offset (adding or subtracting a certain amount of motion)
• Bias voltage (adding or subtracting a certain amount of potential)
Span adjustments typically take one of these forms:
• Fulcrum position for a lever (changing the force or motion multiplication)
• Amplifier gain (multiplying or dividing a voltage signal)
• Spring rate (changing the force per unit distance of stretch)
It should be noted that for most analog instruments, zero and span adjustments are interactive.
That is, adjusting one has an effect on the other. Specifically, changes made to the span adjustment
almost always alter the instrument's zero point. An instrument with interactive zero and span
adjustments requires much more effort to accurately calibrate, as one must switch back and forth
between the lower- and upper-range points repeatedly to adjust for accuracy.
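To see why interactive adjustments force this back-and-forth, the following minimal Python simulation (illustrative only; the interaction behaviour and starting numbers are hypothetical) models an analog transmitter whose span adjustment also disturbs its zero, and shows repeated alternation between the range points converging on an accurate calibration:

class InteractiveTransmitter:
    def __init__(self, zero=3.2, span=0.18, interaction=0.05):
        self.zero, self.span, self.interaction = zero, span, interaction
    def output(self, psi):
        return self.zero + self.span * psi
    def adjust_zero(self, delta):
        self.zero += delta
    def adjust_span(self, delta):
        self.span += delta
        self.zero += self.interaction * delta   # the span adjustment drags the zero along

def calibrate(t, lrv=0.0, urv=100.0, lo_ma=4.0, hi_ma=20.0, passes=5):
    for _ in range(passes):   # switch back and forth between the range points
        t.adjust_zero(lo_ma - t.output(lrv))
        t.adjust_span((hi_ma - t.output(urv)) / (urv - lrv))
    return round(t.output(lrv), 3), round(t.output(urv), 3)

print(calibrate(InteractiveTransmitter()))   # approaches (4.0, 20.0) after a few passes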
DAMPING ADJUSTMENTS
The vast majority of modern process transmitters (both analog and digital) come equipped with a
feature known as damping. This feature is essentially a low-pass filter function placed in-line with
the signal, reducing the amount of process “noise” reported by the transmitter.
Imagine a pressure transmitter sensing water pressure at the outlet of a large pump. The flow of
water exiting a pump tends to be extremely turbulent, and any pressure-sensing device connected
to the immediate discharge port of a pump will interpret this turbulence as violent fluctuations in
pressure. This means the pressure signal output by the transmitter will fluctuate as well, causing any
indicator or control system connected to that transmitter to register a very “noisy” water pressure:
Such “noise” wreaks havoc with most forms of feedback control, since the control system will
interpret these rapid fluctuations as real pressure changes requiring corrective action. Although it
is possible to configure some control systems to ignore such noise, the best solution is to correct
the problem at the source: either relocate the pressure transmitter's impulse line tap to a place
where it does not sense as great an amount of fluid turbulence, or somehow prevent that sensed
turbulence from being represented in the transmitter's signal.
Since this noise is of a much greater frequency than the normal cycles of pressure in a process
system, it is relatively easy to reduce the amount of noise in the transmitter signal simply by filtering
that electronic signal using a low-pass filter circuit.
The simplest low-pass filter circuit is nothing more than a resistor and capacitor:
Low-frequency voltage signals applied to this circuit emerge at the output terminal relatively
unattenuated, because the reactance of the capacitor is quite large at low frequencies. High-frequency
signals applied to the same circuit become attenuated by the capacitor, which tends to “short” those
signals to ground with its low reactance to high frequencies. The performance of such a filter circuit
is primarily characterized by its cutoff frequency, mathematically defined as f = 1/(2πRC). The cutoff
frequency is the point at which only 70.7% of the input signal appears at the output (a -3 dB
attenuation in voltage).
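A quick numerical check of this relationship (with illustrative component values, not taken from this procedure) in Python:

import math

def cutoff_frequency(r_ohms, c_farads):
    # -3 dB point of a simple RC low-pass filter: f = 1/(2*pi*R*C)
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def relative_gain(f_hz, r_ohms, c_farads):
    # Fraction of the input amplitude that appears at the output at frequency f
    fc = cutoff_frequency(r_ohms, c_farads)
    return 1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2)

print(cutoff_frequency(100e3, 1e-6))      # about 1.59 Hz for R = 100 kΩ, C = 1 µF
print(relative_gain(1.59, 100e3, 1e-6))   # about 0.707 (-3 dB) at the cutoff frequency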
If successfully applied to a process transmitter, such low-pass filtering has the effect of “quieting”
an otherwise noisy signal so only the real process pressure changes are seen, while the effect of
turbulence (or whatever else was causing the noise) becomes minimal. In the world of process
control, the intentional low-pass filtering of process measurement signals is often referred to as
damping because its effect is to “damp” (turn down) the effects of process noise:
In order for damping to be a useful tool for the technician in mitigating measurement noise, it
must be adjustable. In the case of the RC filter circuit, the degree of damping (cutoff frequency) may
be adjusted by changing the value of either R or C, with R being the easier component to adjust. In
digital transmitters where the damping is performed by a digital algorithm (either a sophisticated
digital filtering routine or something as simple as successive averaging of buffered signal values in
a first-in-first-out shift register), damping may be adjusted by setting a numerical value in the
transmitter’s configuration parameters. In pneumatic transmitters, damping could be implemented
by installing viscous elements to the mechanism, or more simply by adding volume to the signal line
(e.g. excess tubing length, larger tubing diameter, or even “capacity tanks” connected to the tube
for increased volume).
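For illustration only (the exact algorithm varies by manufacturer), the two digital approaches named above can be sketched in Python roughly as follows; the function names and parameters are hypothetical:

from collections import deque

def fifo_average_damping(samples, buffer_length=8):
    # Successive averaging of a first-in-first-out buffer of recent samples
    buf = deque(maxlen=buffer_length)
    damped = []
    for s in samples:
        buf.append(s)
        damped.append(sum(buf) / len(buf))
    return damped

def first_order_damping(samples, time_constant_s, sample_period_s):
    # Digital equivalent of an RC low-pass filter, set by a damping time constant
    alpha = sample_period_s / (time_constant_s + sample_period_s)
    out = samples[0]
    damped = []
    for s in samples:
        out += alpha * (s - out)
        damped.append(out)
    return damped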
The key question for the technician then becomes, “how much damping do I use?” Insufficient
damping will allow too much noise to reach the control system (causing “noisy” trends, indications,
and erratic control), while excessive damping will cause the transmitter to understate the significance
of sudden (real) process changes. In my experience there is a bad tendency for instrument technicians
to apply excessive damping in transmitters. A transmitter with too much damping (i.e. cutoff
frequency set too low, or time constant value set too high) causes the trend graph to be very
smooth, which at first appears to be a good thing. After all, the whole point of a control system is
to hold the process variable tightly to setpoint, so the appearance of a “flat line” process variable
trend is enticing indeed. However, the problem with excessive damping is that the transmitter
gives a sluggish response to any sudden changes in the real process variable. A dual-trend graph
of a pressure transmitter experiencing a sudden increase in process pressure shows this principle,
where the undamped transmitter signal is shown in the upper portion and the over-damped signal
in the lower portion (please note the vertical offset between these two trends is shown only for your
convenience in comparing the two trend shapes):
Excessive damping causes the transmitter to “lie” to the control system by reporting a process
variable that changes much slower than it actually does. The degree to which this “lie” adversely
affects the control system (and/or the human operator’s judgment in manually responding to the
change in pressure) depends greatly on the nature of the control system and its importance to the
overall plant operation.
One way damping may cause control problems is in systems where the loop controller is aggressively
tuned. In such systems, even relatively small amounts of damping may cause the
actual process variable to overshoot set point because the controller “thinks” the process variable
is responding too slowly and takes action to speed its response. A prime application where this can
happen is flow control, where the process variable signal is typically "noisy" and the control
action typically aggressive. A technician may introduce damping to the transmitter with all good
intent, but unexpectedly cause the control system to wildly overshoot set point (or even oscillate)
because the controller is trying to get a "sluggish" process variable to respond quicker. In reality,
the process variable (fluid flow rate) is not sluggish at all, but only appears that way because the
transmitter is damped. What is worse, this instability will not appear on a trend of the process
variable because the control system never sees the real process variable, but only the “lie” reported
by the over-damped transmitter. If any rule may be given as to how much damping to use in any
transmitter, it is this: use as little as necessary to achieve good control.
When calibrating a transmitter in a shop environment, the damping adjustment should be set to
its absolute minimum, so the results of applying stimuli to the transmitter are immediately seen by
the technician. Any amount of damping in a transmitter being calibrated serves only to slow down
the calibration procedure without benefit.
LRV AND URV SETTINGS (DIGITAL TRIM)
The advent of “smart” field instruments containing microprocessors has been a great advance for
industrial instrumentation. These devices have built-in diagnostic ability, greater accuracy (due to
digital compensation of sensor non linearities), and the ability to communicate digitally with host
devices for reporting of various parameters.
A simplified block diagram of a “smart” pressure transmitter looks something like this:
It is important to note all the adjustments within this device, and how this compares to the
relative simplicity of an all-analog pressure transmitter:
Note how the only calibration adjustments available in the analog transmitter are the “zero” and
“span” settings. This is clearly not the case with smart transmitters. Not only can we set lower
and upper-range values (LRV and URV) in a smart transmitter, but it is also possible to calibrate
the analog-to-digital and digital-to-analog converter circuits independently of each other. What
this means for the calibration technician is that a full calibration procedure on a smart transmitter
potentially requires more work and a greater number of adjustments than an all-analog transmitter.
A common mistake made among students and experienced technicians alike is to confuse the
range settings (LRV and URV) for actual calibration adjustments. Just because you digitally set the
LRV of a pressure transmitter to 0.00 PSI and the URV to 100.00 PSI does not necessarily mean it
will register accurately at points within that range! The following example will illustrate this fallacy.
Suppose we have a smart pressure transmitter ranged for 0 to 100 PSI with an analog output
range of 4 to 20 mA, but this transmitter’s pressure sensor is fatigued from years of use such that an
actual applied pressure of 100 PSI generates a signal that the analog-to-digital converter interprets
as only 96 PSI. Assuming everything else in the transmitter is in perfect condition, with perfect
calibration, the output signal will still be in error:
As the saying goes, “a chain is only as strong as its weakest link.” Here we see how the calibration
of the most sophisticated pressure transmitter may be corrupted despite perfect calibration of
both analog/digital converter circuits, and perfect range settings in the microprocessor. The
microprocessor “thinks” the applied pressure is only 96 PSI, and it responds accordingly with a
19.36 mA output signal. The only way anyone would ever know this transmitter was inaccurate at
100 PSI is to actually apply a known value of 100 PSI fluid pressure to the sensor and note the
incorrect response. The lesson here should be clear: digitally setting a smart instrument’s LRV and
URV points does not constitute a legitimate calibration of the instrument.
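The 19.36 mA figure follows directly from the ideal transfer function derived earlier (a worked check, added here for clarity): with the microprocessor believing the applied pressure is 96 PSI,
output = 0.16 mA/PSI × 96 PSI + 4 mA = 19.36 mA
instead of the 20 mA that a true 100 PSI input should produce.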
For this reason, smart instruments always provide a means to perform what is called a digital trim
on both the ADC and DAC circuits, to ensure the microprocessor “sees” the correct representation
of the applied stimulus and to ensure the microprocessor’s output signal gets accurately converted
into a DC current, respectively.
A convenient way to test a digital transmitter’s analog/digital converters is to monitor the
microprocessor’s process variable (PV) and analog output (AO) registers while comparing the real
input and output values against trusted calibration standards. A HART communicator device
provides this “internal view” of the registers so we may see what the microprocessor “sees.”
The following example shows a differential pressure transmitter with a sensor (analog-to-digital)
calibration error:
Here, the calibration standard for pressure input to the transmitter is a digital pressure gauge,
registering 25.00 inches of water column. The digital multimeter (DMM) is our calibration standard
for the current output, and it registers 11.93 milliamps. Since we would expect an output of
12.00 milliamps at this pressure (given the transmitter’s range values of 0 to 50 inches W.C.),
we immediately know from the pressure gauge and multimeter readings that some sort of calibration
error exists in this transmitter. Comparing the HART communicator’s displays of PV and AO
against our calibration standards reveals more information about the nature of this error: we see
that the AO value (11.930 mA) agrees with the multimeter while the PV value (24.781 ”W.C.) does
not agree with the digital pressure gauge. This tells us the calibration error lies within the sensor
(input) of the transmitter and not with the DAC (output). Thus, the correct calibration procedure
to perform on this errant transmitter is a sensor trim.
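The decision logic just described can be summarized in a short Python sketch (illustrative only; the function name, tolerance, and range values are hypothetical):

def diagnose_trim(ref_pressure, ref_current, pv_register, ao_register,
                  lrv=0.0, urv=50.0, tolerance_pct=0.1):
    # Compare the HART PV/AO registers against the calibration standards to
    # decide whether a sensor (input) trim or an output (DAC) trim is indicated
    pv_error_pct = abs(pv_register - ref_pressure) / (urv - lrv) * 100.0
    ao_error_pct = abs(ao_register - ref_current) / 16.0 * 100.0
    needs = []
    if pv_error_pct > tolerance_pct:
        needs.append("sensor trim")     # microprocessor does not see the true input
    if ao_error_pct > tolerance_pct:
        needs.append("output trim")     # DAC does not faithfully reproduce the AO register
    return needs or ["within tolerance"]

print(diagnose_trim(25.00, 11.93, pv_register=24.781, ao_register=11.930))   # ['sensor trim']
print(diagnose_trim(25.00, 11.93, pv_register=25.002, ao_register=12.001))   # ['output trim']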
In this next example, we see what an output (DAC) error would look like with another differential
pressure transmitter subjected to the same test:
Once again, the calibration standard for pressure input to the transmitter is a digital pressure
gauge, registering 25.00 inches of water column. A digital multimeter (DMM) still serves as our
calibration standard for the current output, and it registers 11.93 milliamps. Since we expect 12.00
milliamps output at this pressure (given the transmitter’s range values of 0 to 50 inches W.C.), we
immediately know from the pressure gauge and multimeter readings that some sort of calibration
error exists in this transmitter (just as before). Comparing the HART communicator’s displays of
PV and AO against our calibration standards reveals more information about the nature of this
error: we see that the PV value (25.002 inches W.C.) agrees with the digital pressure gauge while
the AO value (12.001 mA) does not agree with the digital multimeter. This tells us the calibration
error lies within the digital-to-analog converter (DAC) of the transmitter and not with the sensor
(input). Thus, the correct calibration procedure to perform on this errant transmitter is an output
trim.
Note how in both scenarios it was absolutely necessary to interrogate the transmitter’s
microprocessor registers with a HART communicator to determine where the error was located.
Merely comparing the pressure and current standards’ indications was not enough to tell us any
more than the fact we had some sort of calibration error inside the transmitter. Not until we viewed
the microprocessor’s own values of PV and AO could we determine whether the calibration error
was related to the ADC (input), the DAC (output), or perhaps even both.
In general, technicians sometimes attempt to use the LRV and URV settings in a manner
not unlike the zero and span adjustments on an analog transmitter to correct errors such as these.
While it may be possible to get an out-of-calibration transmitter to yield correct output current
signal values over its calibrated range of input values by skewing the LRV and URV settings, it
defeats the purpose of having separate “trim” and “range” settings inside the transmitter. Also, it
causes confusion if ever the control system connected to the transmitter interrogates process variable
values digitally rather than interpreting them via the 4-20 mA loop current signal. Finally, "calibrating"
a transmitter by programming it with skewed LRV/URV settings corrupts the accuracy of any
intentionally nonlinear functions such as square-root characterization (used for flow measurement
applications).
Once digital trims have been performed on both input and output converters, of course, the
technician is free to re-range the microprocessor as many times as desired without re-calibration.
This capability is particularly useful when re-ranging is desired for special conditions, such as process
start-up and shut-down when certain process variables drift into uncommon regions. An instrument
technician may use a hand-held HART communicator device to re-set the LRV and URV range
values to whatever new values are desired by operations staff without having to re-check calibration
by applying known physical stimuli to the instrument. So long as the ADC and DAC trims are
both fine, the overall accuracy of the instrument will still be good with the new range. With analog
instruments, the only way to switch to a different measurement range was to change the zero and
span adjustments, which necessitated the re-application of physical stimuli to the device (a full
re-calibration). Here and here alone we see where calibration is not necessary for a smart instrument. If
overall measurement accuracy must be verified, however, there is no substitute for an actual physical
calibration, and this entails both ADC and DAC “trim” procedures for a smart instrument.
Completely digital ("fieldbus") transmitters are similar to "smart" analog-output transmitters
with respect to distinct trim and range adjustments.
An analogy for calibration versus ranging
The concepts of calibration (trimming) and ranging are often difficult for new students of
instrumentation to immediately grasp. A simple analogy useful for understanding these topics is that of
setting a digital alarm clock.
Suppose you purchase a digital alarm clock to wake you up at 7:00 AM so that you can
get to school on time. It would be foolish to simply unpack your new clock from its box, power it up,
and set the wake-up time to 7:00 AM expecting it will wake you at the correct time.
Before trusting this alarm time of 7:00 AM, you would first have to synchronize your new clock to
some standard time source (such as the time broadcast by your local telephone service, or better yet the
shortwave radio broadcast of WWV or WWVH) so that it accurately registers time for the time
zone in which you live. Otherwise, the wake-up setting of 7:00 AM will be hopelessly uncertain.
Once your clock is synchronized against a trusted time source, however, the wake-up (alarm)
time may be set at will. If your class schedule changed, allowing one more hour of sleep, you could
re-set the wake-up time from 7:00 AM to 8:00 AM without any need to re-synchronize (re-calibrate)
the clock. The only reason for re-synchronizing your clock to the time standard is to compensate
for inevitable drift due to imperfections in the clock circuitry.
Synchronizing the clock to a standard time source is analogous to “calibrating” or “trimming”
a smart transmitter: you are establishing an accurate correspondence between what the device’s
microprocessor perceives and what the actual (real-life) values are. This step need only be done at
the very beginning of the device’s service, and every so often as warranted by the device’s calibration
drift over time.
Setting the wake-up (alarm) time on the clock is analogous to setting the LRV and URV
parameters of a smart transmitter: you are defining the action(s) taken by the device at certain
measured values. For the alarm clock, you are defining the hour and minute of day when the alarm
sounds. For the transmitter, you are defining the measured variable values at which it will output
4 mA and 20 mA (for a 4-20 mA analog output range).
By contrast, an analog transmitter blends the functions of calibration and ranging into one. A
useful analogy for this is to imagine using a simple wind-up mechanical timer to wake you at 7:00
AM. Such a crude timing device does not even register time in hours and minutes like a digital alarm
clock: instead, it simply counts down time from its starting point and sounds an alarm when the
descending count reaches zero. In order to set this device for a 7:00 AM wake-up alarm, you must
first establish the current time and then calculate how many hours the timer must run before the
time reaches 7:00 AM (e.g. if you are setting the wind-up alarm when you go to bed at 10:30 PM,
this would equate to a timing period of 8.5 hours).
Every single time you set this wind-up alarm, you must consult a time standard to know how
many hours and minutes of count-down time to set it for. If you decide to wake up at a different
time in the morning, you must (once again) consult a standard time source, perform the necessary
arithmetic, and set the timer accordingly. Setting the alarm time on this mechanism necessitates
re-calibrating it to the local standard time without exception. Here, there is no distinction between
synchronization and alarm setting; no distinction between calibration and ranging – to do one is to
do the other.
DISCRETE INSTRUMENTS
Were it not for the existence of deadband, it would not matter which way the applied pressure changed
during the calibration test. However, deadband will always be present in a discrete instrument, whether
that deadband is adjustable or not.
For example, a pressure switch with a deadband of 5 PSI set to trip at 85 PSI falling would re-set at 90
PSI rising. Conversely, a pressure switch (with the same deadband of 5 PSI) set to trip
at 85 PSI rising would re-set at 80 PSI falling. In both cases, the switch “trips” at 85 PSI, but
the direction of pressure change specified for that trip point defines which side of 85 PSI the re-set
pressure will be found.
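A minimal Python model of such a switch (illustrative only, assuming a simple comparator with a fixed deadband) reproduces the behaviour described above:

class PressureSwitch:
    # Discrete pressure switch with deadband; trips at 'trip_psi' in the stated direction
    def __init__(self, trip_psi, deadband_psi, trip_on_rising):
        self.trip = trip_psi
        self.deadband = deadband_psi
        self.rising = trip_on_rising
        self.tripped = False

    def update(self, pressure_psi):
        if self.rising:
            if pressure_psi >= self.trip:
                self.tripped = True
            elif pressure_psi <= self.trip - self.deadband:
                self.tripped = False        # re-sets below the trip point
        else:
            if pressure_psi <= self.trip:
                self.tripped = True
            elif pressure_psi >= self.trip + self.deadband:
                self.tripped = False        # re-sets above the trip point
        return self.tripped

low_switch = PressureSwitch(trip_psi=85, deadband_psi=5, trip_on_rising=False)
print(low_switch.update(84))   # True: trips at 85 PSI falling
print(low_switch.update(88))   # True: still tripped inside the deadband
print(low_switch.update(91))   # False: re-sets at 90 PSI rising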
A procedure to efficiently calibrate a discrete instrument without too many trial-and-error
attempts is to set the stimulus at the desired value (e.g. 85 PSI for our hypothetical low-pressure
switch) and then move the set-point adjustment in the opposite direction as the intended direction
of the stimulus (in this case, increasing the set-point value until the switch changes states). The
basis for this technique is the realization that most comparison mechanisms cannot tell the difference
between a rising process variable and a falling setpoint (or vice-versa). Thus, a falling pressure may
be simulated by a rising set-point adjustment. You should still perform an actual changing-stimulus
test to ensure the instrument responds properly under realistic circumstances, but this “trick” will
help you achieve good calibration in less time.
CALIBRATION PROCEDURE
Calibration refers to the adjustment of an instrument so its output accurately corresponds to its input
throughout a specified range. This definition specifies the outcome of a calibration process, but not the
procedure. It is the purpose of this section to describe procedures for efficiently calibrating different
types of instruments.
(A) LINEAR INSTRUMENTS (ANALOG TYPE)
The simplest calibration procedure for an analog, linear instrument is the so-called zero-and-span
method. The method is as follows:
1. Apply the lower-range value stimulus to the instrument, wait for it to stabilize
2. Move the “zero” adjustment until the instrument registers accurately at this point
3. Apply the upper-range value stimulus to the instrument, wait for it to stabilize
4. Move the “span” adjustment until the instrument registers accurately at this point
5. Repeat steps 1 through 4 as necessary to achieve good accuracy at both ends of the range
An improvement over this crude procedure is to check the instrument’s response at several
points between the lower- and upper-range values. A common example of this is the so-called
five-point calibration where the instrument is checked at 0% (LRV), 25%, 50%, 75%, and 100%
(URV) of range. A variation on this theme is to check at the five points of 10%, 25%, 50%,
75%, and 90%, while still making zero and span adjustments at 0% and 100%. Regardless of the
specific percentage points chosen for checking, the goal is to ensure that we achieve (at least)
the minimum necessary accuracy at all points along the scale, so the instrument’s response may
be trusted when placed into service.
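A simple Python sketch (with illustrative as-found readings, not data from this procedure) shows how the error at each check point may be tabulated in percent of span:

def calibration_error_table(test_points_psi, measured_ma,
                            lrv=0.0, urv=100.0, lo_ma=4.0, hi_ma=20.0):
    # Error at each check point, expressed in percent of output span
    rows = []
    for psi, found in zip(test_points_psi, measured_ma):
        ideal = lo_ma + (psi - lrv) / (urv - lrv) * (hi_ma - lo_ma)
        rows.append((psi, found, ideal, (found - ideal) / (hi_ma - lo_ma) * 100.0))
    return rows

for psi, found, ideal, err in calibration_error_table(
        [0, 25, 50, 75, 100], [4.02, 8.03, 12.02, 16.01, 20.00]):
    print(f"{psi:5.1f} PSI  as-found {found:6.3f} mA  ideal {ideal:6.3f} mA  error {err:+.3f} % of span")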
Yet another improvement over the basic five-point test is to check the instrument's response
at five calibration points decreasing as well as increasing. Such tests are often referred to as up-down
calibrations. The purpose of such a test is to determine if the instrument has any significant hysteresis: a
lack of responsiveness to a change in direction.
Some analog instruments provide a means to adjust linearity. This adjustment should be moved
only if absolutely necessary! Quite often, these linearity adjustments are very sensitive, and prone to
over-adjustment by zealous fingers. The linearity adjustment of an instrument should be changed only if
the required accuracy cannot be achieved across the full range of the instrument. Otherwise, it is
advisable to adjust the zero and span controls to “split” the error between the highest and
lowest points on the scale, and leave linearity alone.
Unlike the zero and span adjustments of an analog instrument, the "low" and "high" trim
functions of a digital instrument are typically non-interactive. This means you should only have to
apply the low- and high-level stimuli once during a calibration procedure. Trimming the sensor of
a "smart" instrument consists of these four general steps:
1. Apply the lower-range value stimulus to the instrument, wait for it to stabilize
2. Execute the "low" sensor trim function
3. Apply the upper-range value stimulus to the instrument, wait for it to stabilize
4. Execute the "high" sensor trim function
Likewise, trimming the output (digital-to-analog converter, or DAC) of a "smart" instrument consists of these general steps:
1. Execute the "low" output trim test function
2. Measure the output signal with a precision milliammeter, noting the value after it stabilizes, and enter that measured value
3. Execute the "high" output trim test function
4. Measure the output signal with a precision milliammeter, noting the value after it stabilizes, and enter that measured value
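The arithmetic the microprocessor applies after a two-point sensor trim can be sketched as follows (a minimal illustration, assuming a simple gain-and-offset correction; actual smart transmitters may implement this differently):

def compute_sensor_trim(raw_low, true_low, raw_high, true_high):
    # Gain and offset that map the raw sensed values onto the true applied values
    gain = (true_high - true_low) / (raw_high - raw_low)
    offset = true_low - gain * raw_low
    return gain, offset

def trimmed_reading(raw_value, gain, offset):
    return gain * raw_value + offset

# Fatigued sensor from the earlier example: reads 96 when 100 PSI is actually applied
gain, offset = compute_sensor_trim(raw_low=0.0, true_low=0.0, raw_high=96.0, true_high=100.0)
print(trimmed_reading(96.0, gain, offset))   # 100.0 PSI after the trim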
(B) NON-LINEAR INSTRUMENTS
The calibration of inherently nonlinear instruments is much more challenging than for linear
instruments. No longer are two adjustments (zero and span) sufficient, because more than two
points are necessary to define a curve.
Examples of nonlinear instruments include expanded-scale electrical meters, square-root
characterizers, and position-characterized control valves.
Every nonlinear instrument will have its own recommended calibration procedure, so refer
to the manufacturer's literature for the specific instrument. When calibrating a nonlinear instrument,
document all the adjustments you make (e.g. how many turns on each calibration screw) just in case
you find the need to "re-set" the instrument back to its original condition. More than once I have
struggled to calibrate a nonlinear instrument only to find myself further away from good calibration
than where I originally started. In times like these, it is good to know you can always reverse your
steps and start over.
The slope-intercept form of a linear equation describes the response of any linear
instrument:
y=mx+b
Where,
y=Output
m=Span adjustment
x=Input
b=Zero adjustment
A zero shift calibration error shifts the function vertically on the graph, which is equivalent
to altering the value of b in the slope-intercept equation. This error affects all calibration points
equally, creating the same percentage of error across the entire range:
If a transmitter suffers from a zero calibration error, that error may be corrected by carefully
moving the "zero" adjustment until the response is ideal, essentially altering the value of b in the
linear equation.
A span shift calibration error shifts the slope of the function, which is equivalent to altering
the value of m in the slope-intercept equation. This error’s effect is unequal at different points
throughout the range:
If a transmitter suffers from a span calibration error, that error may be corrected by carefully
moving the “span” adjustment until the response is ideal, essentially altering the value of m in the
linear equation.
A linearity calibration error causes the instrument’s response function to no longer be a straight
line. This type of error does not directly relate to a shift in either zero (b) or span (m) because the
slope-intercept equation only describes straight lines:
Some instruments provide means to adjust the linearity of their response, in which case this
adjustment needs to be carefully altered. The behavior of a linearity adjustment is unique to each
model of instrument, and so you must consult the manufacturer’s documentation for details on how
and why the linearity adjustment works. If an instrument does not provide a linearity adjustment,
the best you can do for this type of problem is “split the error” between high and low extremes, so
the maximum absolute error at any point in the range is minimized:
A hysteresis calibration error occurs when the instrument responds differently to an increasing
input compared to a decreasing input. The only way to detect this type of error is to do an up-down
calibration test, checking for instrument response at the same calibration points going down as going
up:
Hysteresis errors are almost always caused by mechanical friction on some moving element
(and/or a loose coupling between mechanical elements) such as bourdon tubes, bellows, diaphragms,
pivots, levers, or gear sets. Friction always acts in a direction opposite to that of relative motion,
which is why the output of an instrument with hysteresis problems always lags behind the changing
input, causing the instrument to register falsely low on a rising stimulus and falsely high on a
falling stimulus. Flexible metal strips called flexures – which are designed to serve as frictionless
pivot points in mechanical instruments – may also cause hysteresis errors if cracked or bent. Thus,
hysteresis errors cannot be rectified by simply making calibration adjustments to the instrument –
one must usually replace defective components or correct coupling problems within the instrument
mechanism.
In practice, most calibration errors are some combination of zero, span, linearity, and hysteresis
problems.
form for archival. This greatly streamlines the task of data management for calibration technicians.
Such calibration equipment also provides capability for preassigned calibration tests where the
technician simply downloads the calibration schedule to the electronic calibrator complete with
test points and acceptable tolerances of error, eliminating potential sources of error in having the
technician determine calibration points or error margins for a particular instrument. The same
calibrator may also provide a way to upload the collected data to a computer database. In some
industries, this degree of rigor in calibration record-keeping is merely helpful; in other industries
it is vital for business. Examples of the latter include pharmaceutical manufacturing, where
regulatory agencies (such as the Food and Drug Administration in the United States) enforce
rigorous standards for manufacturing quality which include requirements for frequent testing and
data archival of process instrument accuracy.
It is not uncommon for calibration tables to show multiple calibration points going up as well as
going down, for the purpose of documenting hysteresis and deadband errors. Note the following
example, showing a transmitter with a maximum hysteresis of 0.313 % (the offending data points
are shown in bold-faced type):
In the course of performing such a directional calibration test, it is important not to overshoot
any of the test points. If you do happen to overshoot a test point in setting up one of the input
conditions for the instrument, simply “back up” the test stimulus and re-approach the test point
from the same direction as before. Unless each test point’s value is approached from the proper
direction, the data cannot be used to determine hysteresis/deadband error.
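The maximum hysteresis figure quoted earlier can be computed from such up/down data; the following Python sketch uses illustrative readings chosen to produce a value close to the 0.313 % cited above (it is not the referenced table):

def max_hysteresis_pct(up_readings_ma, down_readings_ma, lo_ma=4.0, hi_ma=20.0):
    # Largest disagreement between up-going and down-going readings, in percent of span
    span = hi_ma - lo_ma
    return max(abs(up - down) / span * 100.0
               for up, down in zip(up_readings_ma, down_readings_ma))

up   = [4.00, 7.98, 11.96, 15.97, 19.99]   # readings taken with increasing stimulus
down = [4.04, 8.03, 12.01, 16.00, 19.99]   # readings taken with decreasing stimulus
print(max_hysteresis_pct(up, down))        # 0.3125 % of span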
Another important consideration with turn down is the accuracy of the instrument at the stated
turn down. The further an instrument is “turned down” from its maximum span, generally the
worse its accuracy becomes at that reduced span. For example, the Micro Motion “ELITE” series
of Coriolis mass flow meters is advertised to perform within an accuracy envelope of ±0.05% at
turn down ratios up to 20:1, but that measurement uncertainty increases to ±0.25% at a turn down
of 100:1, and to ±1.25% at a turn down of 500:1. It should be noted that the degradation of
measurement accuracy at large turn down ratios is not some defect of Micro Motion flow meters (far
from it!), but rather an inescapable consequence of pushing an instrument’s turn down to its limit.
ANNEXURE - A
ANNEXURE - B
ANNEXURE - C
ANNEXURE - D
ANNEXURE - E
17.0 RECORDS:
Equipment History Register (if required)
Log Book
Maintenance Check list (if any)