WP GWP Global Weighing Standard
White Paper
Ensure Accurate Weighing with the
Global Weighing Standard
When a balance or scale is purchased, it is necessary to ensure that the instrument will
meet the metrological requirements of the intended application, i.e., that it is fit for its
intended purpose. Fitness for purpose is not a reflection of the instrument’s performance
but a statement of its suitability for the purpose for which it is intended to be used.
Fitness for purpose not only needs to be considered at the time of selection of the
instrument but also throughout its daily routine operation. But what makes non-automatic
weighing instruments fit for purpose in terms of accuracy?
This white paper investigates measurement uncertainty and how assessing this
uncertainty for a weighing instrument is the basis for maintaining the fit-for-purpose
status over the instrument’s lifetime. It also examines how regular calibration and routine
testing help to ensure ongoing accuracy.
Table of Contents
1. Introduction
2. Weighing Regulations
3. The Benefits of Accurate Weighing
4. Measurement Uncertainty and Minimum Weight
 4.1. Quantification of Accuracy: Measurement Uncertainty
 4.2. Minimum Weight
 4.3. The Concept of the Safety Factor
 4.4. A Common Misconception Regarding Minimum Weight
5. Lifecycle of Weighing Instruments
 5.1. Good Weighing Practice™ (GWP®)
 5.2. Balance and Scale Selection
 5.3. Confirmation of Performance at Place of Use
 5.4. Routine Operation
6. Conclusion
7. References
1. Introduction
Weighing is an essential initial step in almost every application or process in a laboratory. In manufacturing
environments, weighing can also be a crucial part of the production process, for example in dispensing or pre-
paring formulations. Whether in a lab or on a factory floor, the accuracy of a weighing result can have a signifi-
cant impact on the overall quality and integrity of the final product. This applies across all industries, from the
manufacture of pharmaceuticals, chemicals, fragrances, or automotive products to academic and research institutes,
testing laboratories, and companies focused on contract research and manufacturing. Throughout all industries,
accurate weighing is essential to ensure continuous adherence to predefined process requirements and to avoid
inconsistent quality, waste, rework, and Out of Specification (OOS) or Out of Tolerance results that violate inter-
nal or external regulations and guidelines.
This white paper introduces Good Weighing Practice™ (GWP®), the science-based global standard for efficient
lifecycle management of weighing devices. GWP is a state-of-the-art strategy to reduce measurement errors and
ensure reproducibly accurate weighing results based primarily on the user’s weighing requirements and prevail-
ing weighing risks.
GWP starts with the selection of the appropriate weighing device. Understanding weighing process requirements
and important balance and scale properties, such as minimum weight, is essential to select an appropriate
weighing device within the framework of the design qualification. Further, GWP confirms whether the balance or
scale is performing as expected in its actual place of use. It provides proof to the user that the weighing instru-
ment is fit for purpose in terms of accuracy when it is released for routine operation. Based on the evaluation of
the respective weighing process requirements, GWP also provides scientific guidance to the user regarding cali-
brating and testing of weighing instruments during the instrument’s lifecycle.
Over the lifecycle of the weighing instrument, GWP takes the weighing process requirements and risks into
account to establish a specific routine testing schedule for the instrument. The higher the impact of inaccurate
weighing results, and the more stringent the weighing tolerance requirements are, the more frequently calibration
and user tests must be carried out. However, for applications with lower risk and less stringent weighing toler-
ance requirements, testing efforts can be reduced accordingly. Risk and lifecycle management form an integral
part of the overall strategy of GWP to bridge the gap between regulatory compliance, process quality, productiv-
ity, and cost consciousness.
2. Weighing Regulations

Weighing is a critical activity in many laboratory and production workflows. Nevertheless, weighing is not
always well understood, and its complexity is often underestimated. As the quality of weighing can strongly
influence the quality of the final result of an analytical procedure, the United States Pharmacopeia (USP) and
the European Pharmacopoeia (Ph. Eur.) specifically require highly accurate results when weighing analytes for
quantitative measures.
Compared to the laboratory, the importance of weighing accuracy is even more underestimated in production
environments and a scale is often considered as being just another production tool. In the current practice of
selection and operation of a scale, factors such as hygiene, ingress protection, risks of fire or explosion, health
and safety of the operator, and productivity are regarded as being of high priority. Metrological criteria are thus
often not taken sufficiently into consideration.
Furthermore, in production environments, it is common for operators to have little knowledge of weighing prin-
ciples compared to operators in a laboratory. Unintended errors through the improper use of weighing devices
or the wrong choice of a weighing device are consequently more frequent in production than in a laboratory and
OOS errors hence occur more often.
Another frequent practice is to use existing balances and scales for a purpose other than the one they were
acquired for. Here too, the performance of the repurposed scale very often does not meet the metrological needs
of the new application.
OOS results in production are not only a sign that quality might be at risk but can also indicate a risk to the
health and safety of the consumer, a potential breach of legal metrology requirements, and an economic loss for
the company. In such situations, raw materials, human resources, and assets have been utilized in a process
that has resulted in bad quality. Products may need to be reworked or even disposed of. The detection of an
error may trigger tedious and costly recall actions that negatively impact the brand.
For production environments, there are no weighing requirements comparable to the USP and Ph. Eur. General
Chapters for balances used for weighing analytes for quantitative measures. Furthermore, regulations such as
ISO and GMP do not specify how accurate results must be defined and verified. Consequently, the principles
applied across a wide range of industries are very diverse.
As described in pharmaceutical GMP regulations or food regulations published by BRCGS, IFS, SQF, or FSSC,
quality relevant instruments also need to be tested and calibrated periodically. For example, the FSSC 22000
Scheme for Food Safety Management System Certification stipulates in Chapter 2.6.9 that quality control
measures:
“[…] shall include a program for calibration and verification of equipment used for quality and quantity
control.”
While pointing out the importance of measurement device control, the standards do not provide concrete advice
on how to carry this out.
In 2022, the FDA published guidance on how to handle OOS results and how to perform a proper
investigation, stating:
“Laboratory errors should be relatively rare. Frequent errors suggest a problem that might be due to inad-
equate training of analysts, poorly maintained or improperly calibrated equipment, or careless work.”
Regulations are also evolving in response to developments such as Genetically Modified Organisms (GMOs) and nanotechnology. The increase in the international
sourcing and trading of food and feed is expected to propel this forward even more.
With these trends and corresponding changes in international and national laws, standards and inspection
processes will be subject to regular revision. In addition, the U.S. Modernization of Cosmetics Regulation Act
(MoCRA) of 2022, which requires cosmetics manufacturers to follow GMP guidelines, and the U.S. Food
Safety Modernization Act (FSMA) of 2011 demonstrate a shift in the focus of federal regulators by requir-
ing companies to prevent safety issues rather than respond to them. The regulations cover enhanced prevention
control and increased frequency of mandatory FDA inspections. While in the past, almost all FDA Observations
and Warning Letters were addressed to the pharmaceutical and medical device industry, food and cosmetics
companies are now being increasingly targeted.
3. The Benefits of Accurate Weighing

Besides being a regulatory requirement, accurate weighing affects product quality and production costs. Any
measures taken to ensure appropriate weighing quality will influence all three areas:
1. Compliance
Ensuring and providing evidence of weighing accuracy is often a critical requirement for regulatory compli-
ance, particularly in industries that are subject to strict standards and regulations. By employing a risk-based
approach to weighing accuracy and maintaining audit-proof documentation, companies can ensure that they
are fully compliant with internal and external requirements.
2. Product quality
Accurate weighing leads to reproducibility of results and consistent product quality. Customers can therefore
trust that the product they receive will be of the same quality each time.
3. Production costs
Ensuring appropriate weighing accuracy reduces production costs in several ways:
a) Accurate weighing helps minimize waste by ensuring that the correct amount of material is used in each
production run, which is especially important for expensive or rare materials. When materials are over-
or under-weighed, it can lead to excess waste or insufficient product, both of which can be costly.
b) Accurate weighing can also reduce disposal costs by minimizing the amount of excess material that
needs to be discarded.
c) When materials are over- or under-weighed, it can lead to production errors that require rework. This can
be time-consuming and costly, particularly if the errors are not identified until later in the production pro-
cess. Ensuring accurate weighing reduces the likelihood of production errors, leading to less rework and
overall cost savings.
d) The cost of device testing can be lowered due to the reduction in unnecessary testing.
In summary, the benefits of ensuring appropriate weighing accuracy are significant and the investment of the
time, effort, and costs of ensuring weighing accuracy quickly pays off.
Key Takeaway
Accurate weighing offers significant benefits in terms of compliance, quality, and costs.
4. Measurement Uncertainty and Minimum Weight

4.1. Quantification of Accuracy: Measurement Uncertainty

What is weighing accuracy? A common misconception is that the readability of a weighing instrument is equal
to its accuracy. For example, statements like this are often heard during design qualification:
“I want to buy an analytical balance with a readability of 0.1 mg, because that is the accuracy I need for
my application.”
A company may select an analytical balance with a capacity of 200 g and a readability of 0.1 mg, because it is
believed that the balance is “accurate to 0.1 mg”. However, this is a mistake as will be explained below.
Key Takeaway
The readability of a balance or scale does not define its weighing accuracy.
A second misconception is the principle of “what you see is what you get”. For example, an operator weighs
a product on a floor scale and gets a reading of 120.000 kg, which is interpreted as the true amount of mate-
rial that has been weighed. However, this reading might not exactly reflect the amount weighed. The amount
weighed might differ slightly from the indication on the display of the scale, which could have an impact on the
work being undertaken. This is due to the measurement uncertainty, which will be explained in more detail later.
Measurement uncertainty is often perceived by the user as weighing error.
Key Takeaway
What you see is not what you get; the amount indicated on the display may differ from the actual
amount of material being weighed. Like all measuring instruments, balances and scales cannot
offer absolute accuracy; they all have an inherent uncertainty.
4.1.2. The Properties of a Balance or Scale That Influence Its Weighing Accuracy
There are several properties, quantified in the specifications of the weighing instrument, that determine its overall
performance. The most important are repeatability (RP), eccentricity (EC), sensitivity (SE), and nonlinearity (NL),
which are illustrated in Figure 1, and explained in detail in the technical literature.
Figure 1: The properties of a balance or scale that determine overall weighing performance. Repeatability: The red Gaussian curve
represents the distribution of the measurement values due to repeatability. Eccentricity: The blue circles represent the values obtained from
eccentric loading. Sensitivity: The continuous orange line indicates the sensitivity offset from the perfect straight line (dashed orange line).
Nonlinearity: The grey areas represent the natural variation from the continuous line that represents the sensitivity offset.
To understand how these factors influence performance, and hence, the selection of a weighing instrument,
the term “measurement uncertainty” must first be discussed. The “Guide to the Expression of Uncertainty in
Measurement (GUM)” defines uncertainty as a:
“parameter, associated with the result of a measurement, that characterizes the dispersion of the values
that could reasonably be attributed to the measurand”.
In simple words, measurement uncertainty describes a plus or minus range around a weighing result in
which the true weight lies. A weight result should therefore always be expressed with a +/- range, for example,
3.025 g ± 0.015 g.
The weighing uncertainty, i.e., the uncertainty when an object is weighed, can be estimated from the specifica-
tions of a balance or scale. This is typically the case during a design qualification. After a balance or scale is
installed, measurement uncertainty is initially determined by a calibration carried out as part of the operational
qualification and periodically thereafter as part of the performance qualification. International guidelines on the
calibration of non-automatic weighing instruments stipulate details on the determination of measurement uncer-
tainty. The results of the calibrations are made transparent in appropriate calibration certificates.
The EURAMET cg-18 calibration guideline represents the most widespread reference document that details the
methodology of deriving the measurement uncertainty of non-automatic weighing instruments. Other calibration
guidelines referred to here are national or regional implementations that are fully compliant with EURAMET cg-18.
In general, measurement uncertainty of weighing instruments can be approximated by a positive sloped straight
line—the higher the net load on the balance or scale, the larger the (absolute) measurement uncertainty
becomes, as the values in Table 1 show and as represented by the green line in Figure 2. The relative measure-
ment uncertainty describes the measurement uncertainty as a percentage of the applied load. Relative mea-
surement uncertainty is calculated by dividing the absolute measurement uncertainty by the net indication and
is expressed as a percentage.
Table 1: Extract of a calibration certificate of a scale. Every example value of a net indication is listed with its absolute measurement
uncertainty (the green line in Figure 2) and its relative measurement uncertainty (the blue line in Figure 2).

Load | Absolute Measurement Uncertainty | Relative Measurement Uncertainty
0.035 kg | 0.0016 kg | 4.6 %
0.350 kg | 0.0017 kg | 0.48 %
3.500 kg | 0.0024 kg | 0.068 %
17.500 kg | 0.0054 kg | 0.031 %
35.000 kg | 0.0092 kg | 0.026 %
Figure 2: Schematic drawing of the absolute (green line) and relative (blue line) measurement uncertainty of a typical weighing instrument.
(Please note that this graph is simplified for ease of explanation.)
The absolute measurement uncertainty, as a positive sloped straight line, can be mathematically expressed by:
Uabs = α + β · m
Figure 3: α is the intercept and represents the absolute measurement uncertainty at zero load, while β represents the slope of the absolute
measurement uncertainty curve.
The relative measurement uncertainty follows a hyperbolic curve, as the blue line in Figure 2 illustrates. The rela-
tive measurement uncertainty becomes larger as the load becomes smaller. The measurement uncertainty does
not become zero, even at almost zero load, and must therefore be taken into account in every weighing process.
With smaller loads, the relative uncertainty can be significant and unacceptably high. There will be a point at
which the relative measurement uncertainty is too high for the weighing process and the weighing results cannot
be trusted anymore.
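To make the linear model Uabs = α + β · m and the resulting hyperbolic relative uncertainty concrete, the following
minimal Python sketch reproduces the example values of Table 1. The parameter values α ≈ 0.0016 kg and
β ≈ 2.17 · 10⁻⁴ are fitted here purely for illustration; real values come from a calibration certificate, not from this sketch.

```python
# Illustrative only: linear model of absolute uncertainty, Uabs = alpha + beta * m.
# alpha and beta are fitted to the example values of Table 1, not device specs.
ALPHA = 0.0016    # kg, absolute measurement uncertainty at (almost) zero load
BETA = 2.17e-4    # slope of the absolute uncertainty line (kg per kg of load)

def u_abs(m_kg: float) -> float:
    """Absolute measurement uncertainty (kg) at net load m_kg."""
    return ALPHA + BETA * m_kg

def u_rel_percent(m_kg: float) -> float:
    """Relative measurement uncertainty in %: (alpha / m + beta) * 100."""
    return u_abs(m_kg) / m_kg * 100.0

for load in (0.035, 0.350, 3.500, 17.500, 35.000):
    print(f"{load:7.3f} kg: Uabs = {u_abs(load):.4f} kg, Urel = {u_rel_percent(load):.3f} %")

# The output matches Table 1: the absolute uncertainty grows linearly with the
# load, while the relative uncertainty blows up at small loads (4.6 % at 35 g)
# and levels off toward beta at high loads.
```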
It is therefore highly recommended for all processes involving balances or scales to ensure that the measure-
ment uncertainty of the weighing device at the applied load of the application does not conflict with the required
user-defined weighing process tolerance.
Key Takeaway
– The measurement uncertainty never becomes zero on any weighing device.
– The absolute measurement uncertainty increases with increasing load, whereas the relative
measurement uncertainty decreases (and vice versa).
– At low loads, the relative measurement uncertainty becomes very high.
– Ensuring that the measurement uncertainty stays within the required weighing tolerance is
important, especially for critical processes.
The behavior of the individual contributing factors to measurement uncertainty (repeatability, eccentricity,
sensitivity and nonlinearity) is illustrated in Figure 4. The graph shows the relative weighing uncertainty as a
function of the applied load for an analytical balance with a capacity of 200 g. The uncertainty can be separated
into three distinct regions:
Region 1
Region 1 includes loads where the contribution of repeatability dominates the uncertainty. In this example,
region 1 covers loads up to approximately 10 g. As repeatability is a weak function of gross load (if at all), the
relative uncertainty decreases inversely proportional to the load.
Region 2
Region 2 is the transition region where the uncertainty rolls off from inverse proportionality to a constant value.
Region 3
Region 3 includes loads where the contributions of sensitivity offset and eccentricity dominate the uncertainty.
In this example, region 3 covers loads above approximately 100 g. The relative uncertainties of these properties
are independent of the load; consequently, the combined relative uncertainty remains (essentially) constant.
For the majority of laboratory balances, nonlinearity makes an insignificant contribution to uncertainty, as its
relative uncertainty over the entire weighing range is never the dominant contributor.
Figure 4: Relative weighing uncertainty versus load (with zero tare load) of a typical analytical balance (utot, thick black curve). The
contributing components to uncertainty are also shown: Repeatability (uRP, red), Eccentricity (uEC, blue), Sensitivity (uSE, orange) and
Nonlinearity (uNL, grey). Repeatability dominates uncertainty in the red region, sensitivity or eccentricity in the orange region.
Industrial scales follow the same principles as laboratory balances, with additional constraints arising from the
technology used. Many scales use strain gauge load cells that lead to a lower resolution than balances that use
load cells based on electromagnetic force compensation. In these cases, the so-called “rounding error”
may be predominant. However, for higher resolution scales that also use load cells based on electromagnetic
force compensation, repeatability is a significant contributor to the measurement uncertainty in the lower
measurement range of the instrument.
Linearity deviation is often considered a significant contributor, but it can generally be ignored when weighing
small loads. Since the relative measurement uncertainty becomes smaller when weighing larger loads, it can be
concluded that nonlinearity makes a negligible contribution to the measurement uncertainty of the instrument.
In the same way as for laboratory balances, attention must be focused on the repeatability to define the critical
limit of a high-resolution industrial scale.
Key Takeaway
For analytical balances and microbalances, which are used for weighing small loads, repeatability
is the dominant contributor to the measurement uncertainty. With precision balances and industrial
weighing instruments, which are used for weighing materials with a wide range of
loads, sensitivity and eccentricity are additional contributing factors. For high-resolution weighing
instruments, repeatability can be the dominant factor in applications involving weighing loads at the
lower end of the measurement range.
In addition to the inherent weighing properties of balances and scales, environmental factors also influence
measurement uncertainty. To some extent, these factors are accounted for in the determination of the measurement
uncertainty. However, due to their volatile nature, they may significantly influence measurement uncer-
tainty in an unexpected way. To allow for this, it is recommended to introduce a safety factor. The concept of the
safety factor is explained in chapter 4.3.
As well as determining the performance of an instrument, the Accuracy Calibration Certificate (ACC) assesses whether that performance
is good enough to meet the accuracy requirements of the process for which it is being used, i.e.,
whether it is fit for purpose. This is done with the GWP® Certificate as an annex to the calibration
certificate. Various other tolerance assessments are available to analyze the calibration results,
providing the user with concrete pass/fail statements.
4.2. Minimum Weight

It is necessary to define a weighing tolerance (accuracy requirement) for every weighing process. This tolerance
specifies the maximum permitted relative measurement uncertainty of a measurement and may be specified by
the company and/or stipulated by relevant regulations. It is typically an acceptable ± range around the weighing
result.
Adding the weighing tolerance as a percentage to the graph of measurement uncertainty, as shown in Figure 5,
there is a point where the required weighing tolerance equals the relative measurement uncertainty curve. At this
point of intersection, the load is weighed with just the acceptable weighing accuracy. This load value is called the minimum
weight. Any smaller load weighed on that device, indicated by the red area, will result in inaccurate measure-
ments according to the defined tolerance. This is because the relative measurement uncertainty of the instrument
is larger than the required accuracy of the weighing process. Consequently, there is a specific accuracy limit, or
minimum weight, for every weighing instrument for a given application. It is therefore necessary to weigh at least
this amount of material in order that the level of uncertainty satisfies the specific weighing accuracy requirement.
Figure 5: The accuracy limit of the instrument, the minimum weight, is the intersection point between the relative measurement uncertainty
curve (in blue) and the required weighing tolerance.
Key Takeaway
The minimum weight is the smallest load that can be weighed to achieve a required weighing
tolerance. It represents the smallest amount of material that can be weighed where the relative
measurement uncertainty is just acceptable. Weighing loads larger than the minimum weight is
considered accurate as the relative measurement uncertainty is lower than the required weighing
tolerance. However, it is important to understand that the minimum weight applies to the net load
being weighed. See section 4.4 for more details.
The minimum weight is derived from the measurement uncertainty of the weighing device and is determined dur-
ing calibration. Its calculation is based on the intercept α and the slope β of the absolute measurement uncer-
tainty curve (see Figure 3).
At the minimum weight, the relative measurement uncertainty is equal to the weighing tolerance (see Figure 5):

Urel(mmin) = Tol

Since the relative measurement uncertainty equals the absolute measurement uncertainty divided by the applied
load, at the minimum weight:

Uabs(mmin) / mmin = Tol

Using the equation Uabs = α + β · m given earlier, the minimum weight can be calculated from:

(α + β · mmin) / mmin = Tol

Rearranging:

mmin = α / (Tol − β)
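As a hedged sketch, this formula can be computed directly. The α and β below are the same illustrative values
used in the sketch in section 4.1, not data from a real calibration certificate.

```python
# Sketch of the minimum weight calculation mmin = alpha / (Tol - beta).
ALPHA = 0.0016    # kg, illustrative intercept (see the sketch in section 4.1)
BETA = 2.17e-4    # illustrative slope

def minimum_weight(tol: float) -> float:
    """Smallest net load (kg) whose relative uncertainty just meets tol.

    tol is the required weighing tolerance as a fraction, e.g. 0.01 for 1 %.
    """
    if tol <= BETA:
        raise ValueError("no load can meet a tolerance at or below beta")
    return ALPHA / (tol - BETA)

print(minimum_weight(0.01))    # 1 % tolerance   -> ~0.164 kg
print(minimum_weight(0.001))   # 0.1 % tolerance -> ~2.04 kg
```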
It should be emphasized that for weighing small loads on analytical balances and microbalances, the dominant
contributing factor to the uncertainty stems from repeatability. Repeatability is expressed as the standard devia-
tion, s, of a series of replicate weight measurements. In laboratory applications, the minimum weight is typically
a very small load compared to the capacity of the balance, and, for analytical balances and microbalances, the
standard deviation is quite often the dominant portion of α. Considering a coverage factor, or k value, of 2, which
provides a 95 % confidence interval for the distribution of the replicate weight measurements, for small loads on
analytical balances and microbalances, 2s is used in place of α when calculating the minimum weight. However, in the
case that the standard deviation, s, is smaller than the rounding error, the rounding error is used instead. This
concept is explained in more detail below.
The minimum weight for these applications can be approximated as:

mmin = (k / Tol) · s
This approximation of the minimum weight equation is emphasized in Pharmacopeia requirements for balances
(USP General Chapter 41 and Ph. Eur. General Chapter 2.1.7). USP General Chapter 41 defines the repeatability
requirement as follows:
“Repeatability is satisfactory if twice the standard deviation of the weighing value, s, divided by the
desired smallest net weight (i.e., smallest net weight that the users plan to use on that balance), does not
exceed 0.10 %.”
mmin = (k / Tol) · s = (2 / 0.10 %) · s = 2000 · s
In USP General Chapter 1251, which is an informational chapter supplementing the mandatory require-
ments laid out in General Chapter 41, the concept of minimum weight is explained in detail. As an important
consideration, it is stated that this chapter not only provides information for balances used for weighing analytes
used for quantitative measures (scope of USP General Chapter 41) but also has a wider scope by addressing
balances used in all analytical procedures.
It is important to note that due to rounding of the digital indication, there is a lower limit to the standard devia-
tion, which is usually expressed as 0.29d, with d being the scale interval of the weighing instrument. This limit is
based on the assumption of a rectangular distribution of the measurement results before digitalization. As every
weighing measurement consists of two readings (gross and tare), which are considered independent from each
other, the lower limit to the standard deviation of a weighing measurement is expressed as √2·0.29d = 0.41d.
Consequently, the lower limit of the minimum weight for the respective weighing accuracy requirement is also
given by the finite readability of the weighing instrument.
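The following sketch combines the repeatability-based approximation with the rounding-error floor just described.
The balance readability and standard deviation values are invented for illustration.

```python
import math

def minimum_weight_from_repeatability(s_mg: float, d_mg: float,
                                      tol: float = 0.001, k: float = 2.0) -> float:
    """Minimum weight (mg) from the standard deviation s of replicate weighings.

    d_mg is the scale interval (readability); if s falls below the rounding
    limit sqrt(2) * 0.29 * d = 0.41 * d, that limit is used instead.
    """
    s_eff = max(s_mg, math.sqrt(2) * 0.29 * d_mg)
    return k * s_eff / tol

# Balance with d = 0.1 mg, measured s = 0.05 mg, USP tolerance 0.10 %:
print(minimum_weight_from_repeatability(0.05, 0.1))  # 2000 * 0.05 mg = 100 mg
# If s is below 0.41 * d = 0.041 mg, the rounding limit dominates and the
# smallest achievable minimum weight is 2000 * 0.041 mg, i.e. about 82 mg:
print(minimum_weight_from_repeatability(0.01, 0.1))  # ~82 mg
```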
Note that the statements in the preceding sections specifically apply to laboratory applications involving weigh-
ing for quantitative measures, where typically small loads are weighed on analytical balances or microbalances.
For precision balances, and especially for industrial scales, the other balance parameters (SE, NL, EC) also con-
tribute to measurement uncertainty at the lower end of the weighing range, so the minimum weight cannot be
determined by repeatability alone. Furthermore, the assumption of only weighing very small amounts of material
in relation to the capacity of the weighing device is no longer applicable. For these cases, the minimum weight
needs to be derived from the complete measurement uncertainty as presented in the calibration certificate.
A summary of the two different approaches to determining minimum weight can be seen in Table 2.

Table 2: The two approaches to determining minimum weight.

 | Minimum weight from the complete measurement uncertainty | Minimum weight approximated from repeatability
Applicable standards and guidelines | EURAMET cg-18, ASTM E898 | Pharmacopeial compendia, e.g., USP <41>, Ph. Eur. 2.1.7
4.3. The Concept of the Safety Factor

It is important to state that the performance of balances and scales changes over time and therefore the mini-
mum weight will also change. This is due to wear and tear and changing environmental conditions that affect
the performance of the instrument, for example, vibrations, drafts, and temperature changes. Different operators
using a device also add variability to the minimum weight, due to varying skill levels and the way in which they
carry out a weighing measurement. The minimum weight is determined during calibration, i.e., at a particular
time, with particular environmental conditions, and by a qualified, authorized technician. However, during daily
operation, to account for the aforementioned variations, and ensure that the operator always weighs above the
minimum weight, it is highly recommended to apply a safety factor to the minimum weight (as illustrated in
Figure 6). It is important to note that the smallest net weight, which is the smallest net load that the user intends
to weigh with the instrument, should always be larger than the minimum weight with the safety factor applied to
ensure accurate weighing results over time.
Key Takeaway
The smallest net weight that the user intends to weigh on the instrument must be larger than the
minimum weight with the safety factor applied to ensure accurate weighing results over time.
When the minimum weight is calculated from the measurement uncertainty derived in calibration, the minimum
weight including the safety factor is determined by dividing the required weighing tolerance by the safety factor.
For example, if the required weighing tolerance is 1 %, a safety factor of 2 results in a minimum weight that is
calculated based on a weighing tolerance of 0.5 %.
Based on the concept presented in chapter 4.2, the minimum weight including the safety factor (SF) is consequently derived as:

mmin = α / ((Tol / SF) − β)
Figure 6: Applying a safety factor to the minimum weight ensures that wear and tear, changing environmental conditions, and different
operators are accounted for with regard to weighing accuracy over time. In this example, a safety factor of 2 is applied, i.e., the required
weighing tolerance of 1 % is divided by 2 and the minimum weight is calculated based on a weighing tolerance of 0.5 %. The safe
weighing range covers all loads larger than the minimum weight based on the safety factor requirement.
For standard weighing processes in the laboratory, a safety factor of 2 is commonly used, provided the environ-
mental conditions are reasonably stable and the operators are trained. For production conditions, very critical
applications, or an unstable environment, a safety factor of 3 or higher is recommended.
When the repeatability-based approximation applies (i.e., β is negligible), dividing the tolerance by the safety factor is equivalent to simply multiplying the minimum weight by the safety factor.
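A minimal sketch of applying the safety factor, again assuming the illustrative α and β from section 4.1:

```python
ALPHA = 0.0016   # kg, illustrative intercept
BETA = 2.17e-4   # illustrative slope

def minimum_weight_with_safety_factor(tol: float, sf: float) -> float:
    """Minimum weight (kg) from the full uncertainty model, with the required
    tolerance tol (a fraction) tightened by the safety factor sf."""
    return ALPHA / (tol / sf - BETA)

print(minimum_weight_with_safety_factor(0.01, 1))  # no safety factor: ~0.164 kg
print(minimum_weight_with_safety_factor(0.01, 2))  # SF = 2:           ~0.334 kg
# In the small-load approximation (beta negligible), dividing the tolerance
# by sf gives the same result as multiplying the minimum weight by sf.
```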
The safety factor is also made transparent in USP General Chapter 1251:
“Factors that can influence repeatability while the balance is in use include:
1. The performance of the balance and thus the minimum weight can vary over time because of changing
environmental conditions.
2. Different operators may weigh differently on the balance, i.e., the minimum weight determined by different
operators may be different.
3. The standard deviation of a finite number of replicate weighings is only an estimation of the true standard
deviation, which is unknown.
4. The determination of the minimum weight with a test weight may not be completely representative for the
weighing application.
5. The tare vessel also may influence minimum weight because of the interaction of the environment with
the surface of the tare vessel.
For these reasons, when possible, weighings should be made at larger values than the minimum weight.”
4.4. A Common Misconception Regarding Minimum Weight

Finally, a critical misconception that is prevalent in the industry must be highlighted: Many operators wrongly
believe that the weight of the tare vessel also counts toward meeting the minimum weight requirement.
They believe that if a load is smaller than the minimum weight, a tare vessel can be used to “move” that load
into the safe weighing range, thinking that weighing accuracy would be fulfilled.
This would mean that, with a large enough tare container, even one gram could be weighed on a production
floor scale of 3 tons capacity and the measurement would still comply with the applicable weighing tolerance.
As the rounding error of the digital indication is always the lowest limit of the overall measurement uncertainty
of the instrument, it can be easily understood that weighing such a small amount of material—in whatever tare
container—cannot lead to accurate results. Furthermore, every weighing result is always the difference between
two readings, before and after material has been added or removed from the tare container. This difference char-
acterizes the net amount of material, and the uncertainty in the measurement of this net amount of material must
comply with the applicable weighing tolerance requirement. These considerations clearly show us that this wide-
spread misinterpretation can cause significant unnoticed weighing errors. Similarly, if more than one component
is weighed in a tare container, for example as part of a formulation process, every single component must fulfill
the minimum weight requirement.
As another example, imagine having 1 kg of soup, and adding 50 g (5 %) of hot chili spices, resulting in a soup
that is too spicy. Now consider weighing the soup together with a large cooking pot of 10 kg. While the 50 g
chili now only makes up approximately 0.5 % of the combined weight of the pot and soup together, there is still
too much chili (5 %) in the soup. What counts is the net weight of each individual load, not in combination with
the tare vessel.
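A minimal sketch of this rule, with invented names and values: in a formulation, every component's net weight
must exceed the safe weighing range limit, and the tare vessel contributes nothing.

```python
# Illustrative only: flag formulation components whose *net* weight is below
# the safe weighing range (minimum weight times safety factor, small-load
# approximation). The tare vessel is deliberately absent from the check.
def components_below_safe_range(net_weights_g, m_min_g, safety_factor=2.0):
    safe_limit = m_min_g * safety_factor
    return [w for w in net_weights_g if w < safe_limit]

# 1000 g soup base and 50 g chili in a 10 kg pot, scale with m_min = 100 g:
print(components_below_safe_range([1000.0, 50.0], m_min_g=100.0))  # [50.0]
# The 10 kg pot is irrelevant; the 50 g addition violates the 200 g limit.
```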
USP General Chapter 1251 makes this explicit:

“In order to satisfy the required weighing tolerance, when samples are weighed the amount of sample
mass (i.e., the net weight) must be equal or larger than the minimum weight. The minimum weight
applies to the sample weight, not to the tare or gross weight.”
Key Takeaway
Every load weighed—excluding the tare vessel—needs to comply with the weighing tolerance
requirements. This means that every load, without its tare vessel, needs to exceed the minimum
weight. It is highly recommended to apply a safety factor to compensate for changes in the
performance of the weighing device. Implementing a safety factor leads to a safe weighing range,
as established earlier in this chapter. This applies to both the addition and removal of a load as
well as for formulation, dispensing, and other similar applications—every single component must
exceed the minimum weight.
5. Lifecycle of Weighing Instruments

5.1. Good Weighing Practice™ (GWP®)

To help ensure weighing accuracy, METTLER TOLEDO developed the globally recognized standard Good
Weighing Practice, or GWP, in 2007. It is a standardized, risk-based approach developed for the secure selec-
tion, calibration and routine operation of balances and scales. It is suitable for weighing large and small quanti-
ties and it is applicable for any weighing device from any manufacturer in any workplace or industry. Its scien-
tific principles are incorporated into many guidelines and standards such as USP, Ph. Eur., UKAS, ISO and ASTM.
GWP goes beyond confirmation of how accurate a weighing device is. The objective of GWP is to ensure weigh-
ing devices are fit for their intended purpose in terms of accuracy and can deliver weighing results that can be
trusted. GWP covers the entire lifecycle of a weighing device. As already established in chapter 3, the benefits of
accurate weighing include:
• Reliable quality, through:
– Consistency of product quality
– Reproducibility of results
• Reduced costs, through:
– Minimized material waste
– Reduced disposal costs
– Less rework
• Easier compliance with:
– Audit-proof documentation
– Appropriate quality management processes
Good Weighing Practice ensures fit-for-purpose weighing results from the point of selection through the proper
definition of purpose and tolerances to confirmation with a fit-for-purpose check. Furthermore, GWP provides an
appropriate program for periodic performance verification.
Key Takeaway
The goal of Good Weighing Practice is to select and maintain balances and scales that are fit for
purpose in terms of accuracy.
5.2. Balance and Scale Selection

In the process of sourcing a new weighing instrument, GWP principles help to avoid purchasing the wrong
instrument. To select the right device, the following requirements need to be defined:
• Maximum weight: The maximum load to be weighed (including the tare container)
• Smallest net weight: The smallest load to be weighed (excluding the tare container)
• Weighing tolerance and regulations: The acceptable measurement uncertainty (often referred to as weighing
error), specified as a ± percentage, and derived from standards and/or regulations (e.g., Pharmacopoeias,
specific ISO standards) or defined by the user based on specific quality requirements
• Safety factor: A consideration of the environment and external device influences (e.g. vibrations, drafts, number
of operators, training level, etc.)
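The requirements above can be captured in a simple fit-for-purpose check at selection time. This is a hedged
sketch, not the GWP Recommendation procedure itself; the field names and example values are invented.

```python
from dataclasses import dataclass

@dataclass
class WeighingRequirements:
    max_weight_g: float            # maximum load incl. tare container
    smallest_net_weight_g: float   # smallest load excl. tare container
    tolerance: float               # e.g. 0.001 for 0.1 %
    safety_factor: float           # e.g. 2 under stable lab conditions

def is_fit_for_purpose(req: WeighingRequirements,
                       capacity_g: float, m_min_g: float) -> bool:
    """m_min_g is the candidate's minimum weight at req.tolerance, taken from
    its specifications or, better, from a calibration at the place of use."""
    return (capacity_g >= req.max_weight_g and
            m_min_g * req.safety_factor <= req.smallest_net_weight_g)

req = WeighingRequirements(150.0, 0.5, 0.001, 2.0)
print(is_fit_for_purpose(req, capacity_g=220.0, m_min_g=0.2))  # True:  0.4 g <= 0.5 g
print(is_fit_for_purpose(req, capacity_g=220.0, m_min_g=0.3))  # False: 0.6 g >  0.5 g
```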
5.3. Confirmation of Performance at Place of Use

5.3.1. Calibration
After selection and installation, the device’s actual performance at its place of use needs to be determined. This
is done by calibration.
Many standards and regulations also stipulate that instruments that are relevant for the quality of the process or
product under consideration must be periodically calibrated. Examples include:
“[…] measuring equipment shall be calibrated or verified, or both, at specified intervals against measure-
ment standards traceable to international or national measurement standards […].”
— ISO 9001:2015 , 7.1.5.2 Measurement Traceability
“Automatic, mechanical or electronic equipment […] shall be routinely calibrated, inspected or checked
according to a written program designed to assure proper performance.”
— 21 CFR Part 211.68 (a), US GMP for Pharma
“The methods and responsibility for calibration and recalibration of measuring, test and inspection equip-
ment used for monitoring activities outlined in prerequisite programs, food safety plans and other process
controls […] shall be documented and implemented.”
— SQF Food Safety Code: Food Manufacturing – Chapter 11.2.3 Calibration
The statements cited above delegate the responsibility for the correct operation of equipment to the user. This
also applies for weighing instruments. Statements like these are usually formulated vaguely, as they are meant
as general guidelines. However, questions like “How often should I test my weighing instrument?” emerge in
situations where guidance is needed to design standard operating procedures that assure the proper functioning
of the instrument and that are neither too exhaustive, and thus costly and time consuming, nor too loose, and
thus inadequate to assure consistently accurate results.
While calibration establishes the relationship between measurement standards and indications (how well the
instrument performs), it only delivers measurement uncertainty values. It does not provide an assessment of
whether or not the device meets specific requirements. However, in many cases, the calibration procedure is
the starting point for a subsequent assessment of the results. It is common practice to document a subsequent
results assessment in the form of an annex to the calibration certificate.
Tolerances can come from a variety of sources. With respect to weighing instruments, the manufacturer speci-
fies initial tolerances for each balance or scale model. Legal metrology requirements, commonly referred to as
"Legal for Trade", like OIML R76-1 or HB44 , as well as industry specific regulations, like USP General
To ensure accurate weighing results, Good Weighing Practice recommends the fit-for-purpose check. This is the
most holistic performance assessment as it takes individual accuracy needs as well as applicable standards
and regulations into account and compares them against the weighing performance determined at calibration.
Fit-for-Purpose Check
As an annex to the ACC calibration, the GWP® Certificate provides evidence that the instrument is
good enough to do its intended job at its place of use. It delivers a fit-for-purpose check in terms of
accuracy for balances and scales.
There are other regulations which might need to be fulfilled such as legal metrology requirements or legacy
accuracy limits. These tolerances are different, so it is recommended to confirm fitness for purpose in terms
of accuracy against the main applications or processes in addition to any tolerance assessments required by
regulations.
5.4. Routine Operation

During routine operation, the following measures combine to maintain weighing accuracy:

1. Calibration in-situ by authorized personnel, including the determination of measurement uncertainty and mini-
mum weight under normal utilization conditions: The aim is to assess the complete performance of the instru-
ment by testing all relevant weighing parameters of the instrument, made transparent to the user in a calibration
certificate. It is important to mention that for weighing instruments, the zero point is an official calibration point
as an associated measurement uncertainty can be reported (mainly consisting of the uncertainty contributions
due to rounding and the finite standard deviation of repeatability). Consequently, the instrument is calibrated
from zero, and there is no explicit need to have specific calibration points close to or at the working point, i.e.
the point within the operating range of the instrument which reflects the typical load of substances weighed. This
statement also applies when weighing very small amounts of substances (in comparison to the capacity of the
device) as the working point is bracketed by the zero point and the next regular calibration point.
2. Routine test of the weighing device, to be carried out by the user: Only those weighing parameters are as-
sessed which have the largest influence on the performance of the balance or scale. The aim is to confirm the
suitability of the instrument for the application.
> Learn more about routine testing in chapter 5.4.2.
3. Automatic tests or adjustments, where applicable, using built-in reference weights: The aim is to reduce the
effort of manual testing, as also stipulated by specific guidance issued by the FDA and referred to in standards.
> Learn more about automatic adjustment in chapter 5.4.3.
5.4.1. Re-Calibration
The accuracy of weighing devices becomes less reliable over time due to aging, wear and tear, and excessive
usage. This can substantially impact business success. Furthermore, changing environmental conditions and
use by different operators can also influence instrument performance. Therefore, periodic re-calibration is needed
to ensure weighing accuracy over time. Note that only onsite calibration in the environment where the balance or
scale is used allows assessment of how reliable weighing results are.
Several standards and regulations (such as ISO 9001, ISO/IEC 17025 or the European Pharmacopoeia) require
an as-found calibration (before the instrument is serviced/adjusted) as well as an as-left calibration (after the
instrument is serviced/adjusted). In this context, the as-found calibration serves to ensure the traceability of the
measurement results of the weighings carried out before the service intervention (Figure 7). If no adjustment or
manipulation is performed, then as-found calibration results are sufficient as they also act as as-left results in
this specific case.
[Figure 7: Weighing performance over time — the minimum weight determined at calibration is tracked against the smallest net weight, with an adjustment (by service) restoring performance.]
5.4.2. Routine Testing

As has been established, weighing performance changes over time and a calibration is needed to determine the
actual performance of a weighing device at a specific point in time. To track weighing performance in-between
calibrations, routine testing performed by the user is essential. Such routine testing uncovers malfunctions and
eliminates inaccuracies that would otherwise go undetected until the next calibration (Figure 8).
[Figure 8: Weighing performance over time — calibration and routine testing track the minimum weight against the smallest net weight, with an adjustment (by service) when needed.]
It has been found that many companies tend to test their laboratory balances very frequently, in many cases
daily, and a whole set of different test weights is used for part of it (the so-called linearity test). A proper risk-
based approach would reveal whether it is really necessary to conduct that many user tests, and whether testing
efforts can be reduced without compromising the quality of weighing data. Furthermore, applied test procedures
might not always be appropriate, as in the case for linearity, which does not constitute a useful test to assess
weighing accuracy and is not recommended as a part of routine user testing, as we will see below. On the other
hand, the importance of repeatability testing is frequently underestimated.
Surprisingly, the practice in production environments is different. Often, only rudimentary testing procedures or
no procedures at all are in place. This leads to inconsistent quality and to results that are Out of Specification.
Seemingly, only very few companies understand the importance of establishing a robust routine testing scheme
in production. For those conscientious users that do understand this importance, the practice is often to repro-
duce in production what has been implemented in the laboratory. However, this is typically insufficient, because
the probability, severity, and detectability of OOS results might differ significantly between the laboratory and pro-
duction environments.
A sound understanding of the instrument’s functionality and its weighing parameters, combined with the neces-
sary understanding of the process-specific weighing requirements, allows for the elimination of misconceptions
such as these and helps to prevent critical weighing errors that might result in OOS results, both in the laboratory
and the production environment. Once these foundational principles are incorporated into standard operating
procedures, an appropriate testing regimen can be developed.
Figure 9: Depiction of the four pillars of routine testing — test frequency, test methods, test weights, and test tolerances — built on a risk assessment.
The more stringent the tolerance requirements of a weighing process are, the higher the probability becomes that
the weighing result will not meet these requirements, and test frequency should be increased. Similarly, if the
severity of the impact of an OOS result increases, tests should be performed more frequently. In other words, if
an OOS result has a higher impact on process or product quality, this potential is offset by more frequent tests,
thereby lowering the likelihood of the occurrence of the impact (Figure 10). Additionally, if the malfunction of the
weighing instrument is easily detected, the test frequency can be decreased.
Test frequencies for all properties extend from daily user or automatic testing for applications with higher risk,
to weekly, monthly, quarterly, twice a year or yearly (e.g., calibration by authorized personnel).
Figure 10: Test frequencies and the selection of test methods are influenced by the weighing tolerance (from 10 % down to 0.01 %) and by the impact of inaccurate weighing results (low, medium, high): more stringent tolerances and higher impact mean higher risk and more testing.
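The risk matrix of Figure 10 can be sketched as a simple lookup. The concrete frequency values below are an
assumption made for illustration; GWP derives actual frequencies from a full risk assessment, not from this mapping.

```python
# Illustrative only: stricter tolerance and higher impact -> higher risk
# -> more frequent testing. Frequencies are invented example values.
FREQUENCIES = ["yearly", "quarterly", "monthly", "weekly", "daily"]

def test_frequency(tolerance: float, impact: str) -> str:
    """Map a weighing tolerance (fraction) and impact level to a frequency."""
    tol_score = {0.10: 0, 0.01: 1, 0.001: 2, 0.0001: 3}[tolerance]  # 10 % ... 0.01 %
    impact_score = {"low": 0, "medium": 1, "high": 2}[impact]
    risk = min(tol_score + impact_score, len(FREQUENCIES) - 1)
    return FREQUENCIES[risk]

print(test_frequency(0.01, "low"))     # 1 % tolerance, low impact   -> quarterly
print(test_frequency(0.0001, "high"))  # 0.01 % tolerance, high impact -> daily
```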
This risk-based approach to testing is also described in USP General Chapter <1251>, where the following state-
ment can be found with respect to suggested tests during performance qualification:
“Depending on the risk of the application and the required weighing tolerance, some of these tests
[described in a table in the chapter] may be omitted. Tests also can be omitted if there is evidence that
the property in question has only minimal effect on the weighing performance. [...] Performance qualifica-
tion should be performed periodically as described in standard operating procedures, and the frequency
of each of the individual tests can vary depending on the criticality of the property.”
It is striking that USP does not make reference to daily checks of the weighing instrument. This is in line with the
risk-based approach to testing, and furthermore makes evident that testing efforts—especially in the regulated
industries—might be too high as it is still quite common practice to test weighing devices on a daily basis. In
the same way, other standards also omit reference to daily testing and refer to a proper risk-based approach
instead. As an example, UKAS LAB 14 explicitly states:
“Historically the advice has been to perform daily checks however, […] the frequency of these checks
should be determined on the basis of the risk associated with the weighing application.”
To verify a device’s weighing performance, tests should be conducted for sensitivity, repeatability and eccentric-
ity. Please note that nonlinearity testing is not recommended as a part of routine user testing for either balances
or scales. Its influence on weighing uncertainty is not dominant with any current model of weighing instrument,
and it is assessed when the weighing instrument is calibrated by authorized personnel.
The following considerations equally apply to laboratory balances and scales installed in a production
environment.
Repeatability Test
Repeatability tests are carried out less frequently than sensitivity tests. However, they are important when weigh-
ing small loads due to the fact that in low weighing ranges, repeatability is the largest contributor to measure-
ment uncertainty.
Repeatability is a measure of the ability of a balance or scale to obtain the same result in repetitive weighings
with the same load under the same measurement conditions. Repeatability is usually measured by performing 6
to 10 replicate weighings using the same test weight.
Note: Repeatability is highly affected by the ambient conditions (drafts, temperature fluctuations, and vibrations)
and also by the skill of the person performing the test. Therefore, the series of measurements must be carried out
by the same operator, in the same location, under constant ambient conditions and without interruption.
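A minimal sketch of evaluating such a test, with invented readings: compute the sample standard deviation of
replicate weighings and apply the USP-style check 2s divided by the smallest net weight ≤ 0.10 %.

```python
import statistics

# Ten invented replicate readings (g) of the same ~100 g test weight:
readings_g = [99.9999, 100.0001, 100.0000, 99.9998, 100.0002,
              100.0000, 99.9999, 100.0001, 100.0000, 100.0002]

s = statistics.stdev(readings_g)      # sample standard deviation
smallest_net_weight_g = 0.5           # smallest net weight the user intends
ratio = 2 * s / smallest_net_weight_g

print(f"s = {s * 1000:.4f} mg, 2s/m_net = {ratio:.4%}")
print("Repeatability OK" if ratio <= 0.001 else "Repeatability NOT OK")
# Here s is about 0.13 mg, so 2s/m_net is about 0.05 %, well below 0.10 %.
```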
Industrial scales and laboratory balances are equally affected by repeatability, with additional constraints arising
from the technology used. Many scales use strain gauge load cells that lead to a lower resolution than balances
that use load cells based on electromagnetic force compensation. In these cases, the so-called “rounding
error” may be predominant and the repeatability may be buried in this error, leading to a zero standard devia-
tion. However, for higher resolution scales that also use load cells based on electromagnetic force compensa-
tion, repeatability is also a significant contributor to the measurement uncertainty in the lower measurement
range of the instrument.
Eccentricity Test
Eccentricity tests are recommended only when performing weighing processes of higher risk and very tight
weighing tolerances. The purpose of the eccentricity test is to ensure that every eccentric load deviation (corner-
load deviation) is within weighing tolerances. The eccentricity test assesses the deviation of the measurement
value through off-center (eccentric) loading. Eccentricity errors can be minimized by handling the load correctly,
i.e., placing it in the center of the weighing pan and by using accessories, such as ErgoClips. Note that, in gen-
eral, an eccentricity offset might play a more important role in applications using scales compared to applica-
tions using laboratory balances due to the different handling of objects in the production environment.
Figure 11: For sensitivity and eccentricity testing, the test weight should be in the upper part of the weighing range. For laboratory
balances, 100 % of the device’s capacity is normally recommended; for high-capacity industrial scales 33 % is recommended. The
weight is rounded down to the next OIML/ASTM denomination. If the test weight is below 5 % of the capacity, then the measurable sensitivity
offset may be buried in the dispersion band of repeatability.
Figure 12 gives an overview of which test weights correspond with which routine tests.
Figure 12: For repeatability testing, the recommended test weight is ~5 % of the device’s capacity. For sensitivity and eccentricity testing
the test weight should normally be 100 % of the device’s capacity for laboratory balances and 33 % of the device’s capacity for high-
capacity industrial scales. Both weights are rounded down to the next OIML/ASTM denomination.
1. Weights for testing the sensitivity of weighing instruments need to be calibrated at regular intervals and must
be traceable (reference weights). Their maximum permissible error (mpe) must not be larger than 1/3 of the
test tolerance, so that its influence compared to this limit may be neglected. The acceptance limit of a specific
sensitivity test depends on the selected test weight and the required weighing tolerance. With this condition,
the contribution of variance of the test weight is limited to less than 10 % of the variance of the acceptance
limit. The lowest weight class which fulfills this condition is selected. For weighing applications with a very
stringent accuracy requirement, it may be necessary to consider the calibration uncertainty of the weights
instead of the mpe to accommodate for very tight acceptance limits. In this case, the calibration uncertainty
must not be larger than 1/3 of the acceptance limit. This concept is also implemented for the accuracy test of
the USP General Chapter 41 and Ph. Eur. General Chapter 2.1.7. (A sketch of this selection rule follows after this list.)
2. All other tests (i.e., tests of repeatability or eccentricity) may be performed with any weight, provided it does
not change its load during the test. Of course, it is always suggested to use a calibrated test weight for these
tests as well.
3. For analytical and microbalances, test weights for sensitivity are typically of higher accuracy class (OIML F2,
F1 or E2). For laboratory applications, even in cases where an OIML class M weight would suffice for a test,
OIML class F2 weights should be used instead. The reason is that the surface of class M weights is allowed to
remain rough. This increases the chances for potential contamination, a feature which is not tolerated in
laboratories. The same applies for ASTM weights, where weight classes lower than ASTM4 should not be used
in a laboratory environment. For applications in the production environment, this rationale might not ap-
ply, so that M weights can be used, provided they fulfill the requirement of the mpe or the calibration uncer-
tainty as stated above.
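The 1/3 rule from item 1 can be sketched as a class-selection helper. The mpe values below are the OIML R 111
values for a 200 g weight as recalled here; they are an assumption and should be verified against the standard
before any real use.

```python
# Hedged sketch: pick the lowest OIML class whose mpe satisfies mpe <= limit / 3.
# mpe values (mg) for a 200 g weight, quoted from OIML R 111 from memory:
OIML_MPE_200G_MG = {"E2": 0.3, "F1": 1.0, "F2": 3.0, "M1": 10.0}

def lowest_acceptable_class(acceptance_limit_mg: float) -> str:
    """Return the lowest (least accurate) class fulfilling the 1/3 rule."""
    for cls in ("M1", "F2", "F1", "E2"):   # from lowest to highest class
        if OIML_MPE_200G_MG[cls] <= acceptance_limit_mg / 3:
            return cls
    raise ValueError("No listed class satisfies the 1/3 rule; "
                     "consider the calibration uncertainty instead of the mpe.")

# A 200 g sensitivity test with a 10 mg acceptance limit tolerates mpe <= 3.33 mg:
print(lowest_acceptable_class(10.0))   # F2
```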
The result for each test must be assessed against predefined tolerances. However, the definition of adequate tol-
erances can be difficult. Good Weighing Practice recommends the use of two levels of tolerances: warning and
control limits.
The warning limit indicates that the device is not out of tolerance yet—but the safety margin has decreased. The
balance or scale can still be used, but a service technician should be contacted for a calibration combined with
an adjustment if needed.
The control limit specifies when the device is out of tolerance and no longer fit for purpose. In this case, the bal-
ance or scale must be taken out of operation immediately.
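A minimal sketch of this two-level assessment; the limit values are examples, not recommendations.

```python
# Illustrative only: classify a routine test deviation against the GWP-style
# warning limit (usable, but contact service) and control limit (stop using).
def assess_routine_test(deviation_mg: float,
                        warning_limit_mg: float, control_limit_mg: float) -> str:
    if abs(deviation_mg) > control_limit_mg:
        return "FAIL: out of tolerance - take the device out of operation"
    if abs(deviation_mg) > warning_limit_mg:
        return "WARNING: still usable - contact service for calibration/adjustment"
    return "PASS: within tolerance"

print(assess_routine_test(0.4, warning_limit_mg=0.5, control_limit_mg=1.0))  # PASS
print(assess_routine_test(0.7, warning_limit_mg=0.5, control_limit_mg=1.0))  # WARNING
print(assess_routine_test(1.2, warning_limit_mg=0.5, control_limit_mg=1.0))  # FAIL
```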
5.4.3. Automatic Adjustment

Adjustment mechanisms built into weighing instruments consist of one or more reference weights and a loading
mechanism that is triggered either manually or automatically. Such a mechanism allows convenient testing and
adjustment of the sensitivity of the weighing instrument when required. Because the built-in weight cannot be
lost, cannot be touched, and is kept in a sheltered place inside the instrument, this concept has advantages over
the use of external test weights, even those of a higher accuracy class. Nevertheless, the built-in weight can be verified by performing routine tests on the weigh-
ing instrument with an external, traceable weight. With this comparison, the integrity of the built-in adjustment
mechanism can be tested.
If a weighing instrument features such an adjustment mechanism, it should be used frequently, as it is a quick
procedure that requires little to no effort. As a consequence, routine tests of sensitivity with external reference
weights may then be performed less frequently. This is also reflected in an important statement from the
US Food and Drug Administration:
“For a scale with a built-in auto-calibrator, we recommend that external performance checks be per-
formed on a periodic basis, but less frequently as compared to a scale without this feature.”
Ph. Eur. 2.1.7 expresses the same idea: “Checks with external weights can be replaced partially using automatic
or manually triggered adjustment by means of built-in weights.” ASTM E898 makes a similar statement.
In terms of weighing performance, the combination of calibration by certified personnel, routine testing by the
user, and internal adjustment (if available in the instrument) ensures consistently accurate weighing results,
as shown in Figure 13.
Figure 13: The combination of calibration (performed by a service technician), routine testing (performed by the device user) and internal
adjustment (if applicable) ensures consistently accurate weighing results.
6. Conclusion
By implementing Good Weighing Practice (GWP) as the science-based global standard for efficient lifecycle
management of weighing devices, measurement errors are reduced and reproducibly accurate weighing results
can be ensured. GWP principles and services apply to balances and scales from any manufacturer and are
independent of the industry; these concepts apply to the pharmaceutical, chemical, food, fragrance, metal, and
other industries, as well as to testing and calibration laboratories. Furthermore, GWP covers a wide range of
weighing applications, from weighing very small amounts of substances on microbalances up to weighing
applications on the industrial shop floor involving scales with capacities of a few tons.
Understanding weighing process requirements and the basic principles of balance and scale properties, such as
measurement uncertainty and minimum weight, enables the user to realize an integrated qualification strategy
as a basis for achieving qualified weighing processes. Furthermore, an important source of Out of Tolerance or
Out of Specification (OOS) results is eliminated, both in the laboratory and the production environment.
The key issue to be considered for successful weighing instrument operation is that the minimum weight for the
required weighing tolerance must be smaller than the smallest net weight the user expects to weigh. Furthermore,
it is recommended to apply an appropriate safety factor to compensate for fluctuations of the minimum weight
caused by variability in the environment and by different operators working with the instrument. These principles
need to be considered when selecting a new balance or scale, during confirmation at the place of use, and during
routine operation, thereby ensuring accurate weighing results over the whole lifecycle of the device.
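As a compact illustration of this rule, the following sketch checks whether a weighing process is fit for purpose;
the minimum weight, safety factor and sample values are illustrative assumptions, not recommendations.

# Minimal sketch (Python) of the fit-for-purpose rule stated above:
# the smallest net weight to be weighed must not fall below the
# instrument's minimum weight multiplied by a safety factor.
def is_fit_for_purpose(smallest_net_weight_g: float,
                       minimum_weight_g: float,
                       safety_factor: float = 2.0) -> bool:
    """True if the smallest net sample stays above the safe minimum weight."""
    return smallest_net_weight_g >= safety_factor * minimum_weight_g

# Example: minimum weight 10 mg with safety factor 2 gives a safe
# minimum of 20 mg; a 50 mg sample passes, a 15 mg sample does not.
print(is_fit_for_purpose(0.050, 0.010))  # True
print(is_fit_for_purpose(0.015, 0.010))  # False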
Appropriate and meaningful routine tests enable the user to test exactly what is needed to adhere to the spe-
cific weighing requirements, and to avoid unnecessary—and costly—testing. Risk and lifecycle management
form an integral part of an overall strategy to bridge the gap between regulatory compliance, process quality,
productivity, and cost consciousness.
GWP® Recommendation ensures appropriate selection, while the Accuracy Calibration Certificate
(ACC) in combination with a GWP® Certificate delivers a state-of-the-art calibration and fit-for-
purpose check.
To remain audit-proof over time, GWP® Verification provides a clear calibration and routine testing
plan, based on an individualized risk assessment. All this helps to ensure weighing results that can
always be trusted.
7. References
[1] General Chapter 41 “Balances”, US Pharmacopeia USP–NF, Rockville, Maryland, USA, 2023, Online Edition.
[2] General Chapter 2.1.7 “Balances for Analytical Purposes”, European Pharmacopoeia, EDQM Council of
Europe, Strasbourg, France, 2023, Online Edition.
[3] FSSC 22000 Scheme “Food Safety Management System Certification”, Version 6.0, Foundation FSSC,
Gorinchem, Netherlands, 2023. Available at: www.fssc.com.
[4] Guidance for Industry: Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production,
U.S. Food and Drug Administration, Silver Spring, Maryland, USA, 2022. Available at:
https://www.fda.gov/media/158416/download.
[5] Modernization of Cosmetics Regulation Act, 117th Congress Public Law, H.R. 2617-1389ff, USA, 2022.
[6] Food Safety Modernization Act, 111th Congress Public Law 353, 124 STAT. 3885ff., USA, 2011.
[7] Nater, R., Reichmuth, A., Schwartz, R., Borys, M., Zervos, P., “Dictionary of Weighing Terms – A Guide to
the Terminology of Weighing”, Springer, Berlin, Heidelberg, Germany, 2009. ISBN: 978-3-642-02013-1.
[8] Evaluation of Measurement Data – Guide to the Expression of Uncertainty in Measurement (GUM), JCGM
100:2008, Bureau International des Poids et Mesures, Sèvres, France, 2008. Available at: www.bipm.org.
[9] Guidelines on the Calibration of Non-Automatic Weighing Instruments, EURAMET cg-18, Version 4.0,
European Association of National Metrology Institutes, Braunschweig, Germany, 2015. Available at:
www.euramet.org.
[10] SIM Guidelines on the calibration of non-automatic weighing instruments, Sistema Interamericano de
Metrología, Montevideo, Uruguay, 2009. Available at: sim-metrologia.org.
[11] Standard Practice for Calibration of Non-Automatic Weighing Instruments, ASTM E898-20, ASTM
International, West Conshohocken, Pennsylvania, USA, 2020.
[12] Calibration Specification for Electronic Balances, JJF 1847-2020, State Administration for Market
Regulation, Beijing, China, 2020.
[13] Balsiger, F., Fritsch, K., Mottl, R., Müller-Schöll, Ch., “Weighing”, Ullmann’s Encyclopedia of Industrial
Chemistry, Wiley, Weinheim, Germany, 2017.
[14] General Chapter 1251 “Weighing on an Analytical Balance”, US Pharmacopeia USP–NF, Rockville,
Maryland, USA, 2023, Online Edition.
[15] Guidance on the calibration of weighing machines used in testing and calibration laboratories, UKAS
LAB14, 7th Edition, United Kingdom Accreditation Service, Staines-upon-Thames, UK, 2022. Available at:
www.ukas.com.
[16] Quantitative nuclear magnetic resonance spectroscopy — Purity determination of organic compounds
used for foods and food products — General requirements for 1H NMR internal standard method, ISO
24583:2022, International Organization for Standardization, Vernier, Geneva, Switzerland, 2022.
[18] Current Good Manufacturing Practice for Finished Pharmaceuticals, CFR Title 21, Part 211, USA, 2023.
Available at: www.ecfr.gov.
[19] SQF Food Safety Code: Food Manufacturing, Edition 9, SQFI, Arlington, Virginia, USA, 2020. Available at:
www.sqfi.com.
[20] Questions and Answers on Current Good Manufacturing Practices, Good Guidance Practices, Level 2
Guidance – Equipment, U.S. Food and Drug Administration, Silver Spring, Maryland, USA, 2004. Available at:
www.fda.gov/drugs/guidances-drugs/questions-and-answers-current-good-manufacturing-practice-requirements-equipment.
[21] Non-automatic weighing instruments – Metrological and technical requirements – Tests, OIML R 76-1,
International Organization of Legal Metrology, Paris, France, 2006. Available at: www.oiml.org.
[22] Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices,
NIST Handbook 44, National Institute of Standards and Technology, Gaithersburg, Maryland, USA, 2023.
Available at: https://nvlpubs.nist.gov/nistpubs/hb/2023/NIST.HB.44-2023.pdf.
[23] Weights of Classes E1, E2, F1, F2, M1, M1-2, M2, M2-3 and M3 – Metrological and technical requirements, OIML
R 111-1, International Organization of Legal Metrology, Paris, France, 2004. Available at:
www.oiml.org/en/publications/recommendations/en/files/pdf_r/r111-p-e04.pdf.
[24] Standard Specification for Laboratory Weights and Precision Mass Standards, ASTM E617-18, ASTM
International, West Conshohocken, Pennsylvania, USA, 2018.