Estimation of Measurement Uncertainty

© ISO 2017
Guidance for the use of repeatability, reproducibility and trueness estimates in measurement uncertainty evaluation
INTERNATIONAL STANDARD
ISO 21748
Second edition
2017-04
Reference number: ISO 21748:2017(E)

COPYRIGHT PROTECTED DOCUMENT
© ISO 2017, Published in Switzerland
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below or ISO’s member body in the country of the requester.
ISO copyright office
Ch. de Blandonnet 8 • CP 401
CH-1214 Vernier, Geneva, Switzerland
Tel. +41 22 749 01 11
Fax +41 22 749 09 47
copyright@iso.org
www.iso.org


Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Symbols
5 Principles
5.1 Individual results and measurement process performance
5.2 Applicability of reproducibility data
5.3 Basic equations for the statistical model
5.4 Repeatability data
6 Evaluating uncertainty using repeatability, reproducibility and trueness estimates
6.1 Procedure for evaluating measurement uncertainty
6.2 Differences between expected and actual precision
7 Establishing the relevance of method performance data to measurement results from a particular measurement process
7.1 General
7.2 Demonstrating control of the laboratory component of bias
7.2.1 General requirements
7.2.2 Methods of demonstrating control of the laboratory component of bias
7.2.3 Detection of significant laboratory component of bias
7.3 Verification of repeatability
7.4 Continued verification of performance
8 Establishing relevance to the test item
8.1 General
8.2 Sampling
8.2.1 Inclusion of sampling process
8.2.2 Inhomogeneity
8.3 Sample preparation and pre-treatment
8.4 Changes in test-item type
8.5 Variation of uncertainty with level of response
8.5.1 Adjusting sR
8.5.2 Changes in other contributions to uncertainty
9 Additional factors
10 General expression for combined standard uncertainty
11 Uncertainty budgets based on collaborative study data
12 Evaluation of uncertainty for a combined result
13 Expression of uncertainty information
13.1 General expression
13.2 Choice of coverage factor
13.2.1 General
13.2.2 Level of confidence desired
13.2.3 Degrees of freedom associated with the estimate
14 Comparison of method performance figures and uncertainty data
14.1 Basic assumptions for comparison
14.2 Comparison procedure
14.3 Reasons for differences
Annex A (informative) Approaches to uncertainty evaluation
Annex B (informative) Experimental uncertainty evaluation
Annex C (informative) Examples of uncertainty calculations
Bibliography

Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of ISO documents should be noted. This document was drafted in accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of any patent rights identified during the development of the document will be in the Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO’s adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see the following URL: www.iso.org/iso/foreword.html.
This document was prepared by Technical Committee ISO/TC 69, Applications of statistical methods, Subcommittee SC 6, Measurement methods and results.
This second edition cancels and replaces the first edition (ISO 21748:2010), of which it constitutes a minor revision.
The changes compared to the previous edition are as follows:
— minor change in the title (estimation to evaluation) to reflect preferred use of terms (see third list item);
— minor changes in wording and format to conform to current ISO Directives, which included the addition of Clause 2 and renumbering of subsequent clauses;
— the phrases “estimation of measurement uncertainty” (and similar usage of “estimate”) and “evaluation of measurement uncertainty” (and similar usage of “evaluate”) have been amended to distinguish quantitative estimates of the components of uncertainty from the process of evaluations of measurement uncertainty, which can include additional relevant considerations;
— the word “standard” has been added before “uncertainty” where appropriate, for clarity;
— redundant definitions of terms defined as squared quantities, where the standard deviation was also defined [sb², sinh², sL², sr², sW², u²(y), σL², σr²], have been removed;
— in the definition of rij, “in the interval -1 to +1” was removed;
— in the definition of the term sinh, “uncertainty” was changed to “standard deviation”;
— in the definitions for u(y), ui(y) and u(Y), U(y), equations were removed (not necessary for standard terms);
— the symbols from all definitions of terms where they had been included (combined standard uncertainty, coverage factor, expanded uncertainty, standard uncertainty) have been removed;

— the definition of y0 has been removed because the term is not used in the document;
— in 7.4, first dash, “quality control charts” has been replaced with “control charts”;
— a note has been added to Clause 10 (previously Clause 9);
— in 13.1, 14.1 and 14.3 (previously 12.1, 13.1 and 13.3), “combined” has been added before “standard uncertainty”;
— in 13.2.1 and 13.2.2 (previously 12.2.1 and 12.2.2), the word “combined” has been removed before “expanded uncertainty”;
— in A.1, changed italics “standard uncertainties” to standard text;
— in A.1, 7th paragraph (3rd from end), “combined standard uncertainties [u(xi)]” has been changed to “additional standard uncertainties u(y)”;
— in C.3, title, “Uncertainty for AOAC method 990.12” has been replaced with “Uncertainty for measurements obtained by AOAC method 990.12”;
— in C.3.2, “eight laboratories” has been replaced with “twelve laboratories”;
— in C.4.4, “0,07 g/kg (0,7 % as mass fraction)” has been changed to “7 g/kg (0,7 % as mass fraction)”;
— References [27] and [28] have been updated.

Introduction
Knowledge of the uncertainty associated with measurement results is essential to the interpretation of the results. Without quantitative evaluations of uncertainty, it is impossible to decide whether observed differences between results reflect more than experimental variability, whether test items comply with specifications, or whether laws based on limits have been broken. Without information on uncertainty, there is a risk of misinterpretation of results. Incorrect decisions taken on such a basis can result in unnecessary expenditure in industry, incorrect prosecution in law, or adverse health or social consequences.
Laboratories operating under ISO/IEC 17025 accreditation and related systems are accordingly required to evaluate measurement uncertainty for measurement and test results and to report the uncertainty where relevant. ISO/IEC Guide 98-3 (the GUM) is a widely adopted standard approach, but it applies to situations where a model of the measurement process is available. A very wide range of standard test methods is, however, subjected to collaborative study in accordance with ISO 5725-2. This document provides an appropriate and economic methodology for estimating the uncertainty associated with the results of these methods, which complies fully with the relevant principles of the GUM, while taking account of method performance data obtained by collaborative study.
The general approach used in this document requires the following.
— Estimates of the repeatability, reproducibility and trueness of the method in use, obtained by collaborative study as described in ISO 5725-2, are available from published information about the test method in use. These provide estimates of the intra-laboratory and inter-laboratory components of variance, together with an estimate of the uncertainty associated with the trueness of the method.
— The laboratory confirms that its implementation of the test method is consistent with the established performance of the test method by checking its own bias and precision. This confirms that the published data are applicable to the results obtained by the laboratory.
— Any influences on the measurement results that were not adequately covered by the collaborative study are identified and the variance associated with the results that could arise from these effects is quantified.
An uncertainty estimate is made by combining the relevant variance estimates in the manner prescribed by the GUM. This estimate can serve, with other contributions, in the evaluation of uncertainty, or in some cases can be the final, stated, uncertainty.
The general principle of using reproducibility data in uncertainty evaluation is sometimes called a “top-down” approach.
The dispersion of results obtained in a collaborative study is often also usefully compared with measurement uncertainty evaluated using GUM procedures as a test of full understanding of the method. Such comparisons will be more effective given a consistent methodology for estimating the same parameter using collaborative study data.

Guidance for the use of repeatability, reproducibility and trueness estimates in measurement uncertainty evaluation
1 Scope
This document gives guidance for
— evaluation of measurement uncertainties using data obtained from studies conducted in accordance with ISO 5725-2, and
— comparison of collaborative study results with measurement uncertainty (MU) obtained using formal principles of uncertainty propagation (see Clause 14).
ISO 5725-3 provides additional models for studies of intermediate precision. However, while the same general approach may be applied to the use of such extended models, uncertainty evaluation using these models is not incorporated in this document.
This document is applicable to all measurement and test fields where an uncertainty associated with a result has to be determined.
This document does not describe the application of repeatability data in the absence of reproducibility data.
This document assumes that recognized, non-negligible systematic effects are corrected, either by applying a numerical correction as part of the method of measurement, or by investigation and removal of the cause of the effect.
The recommendations in this document are primarily for guidance. It is recognized that while the recommendations presented do form a valid approach to the evaluation of uncertainty for many purposes, it is also possible to adopt other suitable approaches.
In general, references to measurement results, methods and processes in this document are normally understood to apply also to testing results, methods and processes.
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
NOTE Reference is made to “intermediate precision conditions”, which are discussed in detail in ISO 5725-3.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— IEC Electropedia: available at http://www.electropedia.org/
— ISO Online browsing platform: available at http://www.iso.org/obp

3.1
bias
difference between the expectation of a test result or measurement result and a true value
Note 1 to entry: Bias is the total systematic error as contrasted to random error. There may be one or more systematic error components contributing to the bias. A larger systematic difference from the true value is reflected by a larger bias value.
Note 2 to entry: The bias of a measuring instrument is normally estimated by averaging the error of indication over an appropriate number of repeated measurements. The error of indication is the “indication of a measuring instrument minus a true value of the corresponding input quantity”.
Note 3 to entry: In practice, the accepted reference value is substituted for the true value.
[SOURCE: ISO 3534‑2:2006, 3.3.2]
3.2
combined standard uncertainty
standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities
[SOURCE: ISO/IEC Guide 98‑3:2008, 2.3.4]
3.3
coverage factor
numerical factor used as a multiplier of the combined standard uncertainty (3.2) in order to obtain an expanded uncertainty (3.4)
Note 1 to entry: A coverage factor, k, is typically in the range from 2 to 3.
[SOURCE: ISO/IEC Guide 98‑3:2008, 2.3.6]
3.4
expanded uncertainty
quantity defining an interval about a result of a measurement expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand
Note 1 to entry: The fraction may be regarded as the coverage probability or level of confidence of the interval.
Note 2 to entry: To associate a specific level of confidence with the interval defined by the expanded uncertainty requires explicit or implicit assumptions regarding the probability distribution characterized by the measurement result and its combined standard uncertainty (3.2). The level of confidence that may be attributed to this interval can be known only to the extent to which such assumptions can be justified.
Note 3 to entry: Expanded uncertainty is termed overall uncertainty in paragraph 5 of Reference [20].
[SOURCE: ISO/IEC Guide 98‑3:2008, 2.3.5]
3.5
precision
closeness of agreement between independent test/measurement results obtained under stipulated conditions
Note 1 to entry: Precision depends only on the distribution of random errors and does not relate to the true value or the specified value.
Note 2 to entry: The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results or measurement results. Less precision is reflected by a larger standard deviation.
Note 3 to entry: Quantitative measures of precision depend critically on the stipulated conditions. Repeatability conditions (3.7) and reproducibility conditions (3.10) are particular sets of extreme stipulated conditions.

[SOURCE: ISO 3534‑2:2006, 3.3.4]
3.6
repeatability
precision (3.5) under repeatability conditions (3.7)
Note 1 to entry: Repeatability can be expressed quantitatively in terms of the dispersion characteristics of the results.
[SOURCE: ISO 3534‑2:2006, 3.3.5]
3.7
repeatability conditions
observation conditions where independent test/measurement results are obtained with the same method on identical test/measurement items in the same test or measuring facility by the same operator using the same equipment within short intervals of time
Note 1 to entry: Repeatability conditions include the following:
— the same measurement procedure or test procedure;
— the same operator;
— the same measuring or test equipment used under the same conditions;
— the same location;
— repetition over a short period of time.
[SOURCE: ISO 3534‑2:2006, 3.3.6]
3.8
repeatability standard deviation
standard deviation of test results or measurement results obtained under repeatability conditions (3.7)
Note 1 to entry: It is a measure of the dispersion of the distribution of test or measurement results under repeatability conditions.
Note 2 to entry: Similarly, “repeatability variance” and “repeatability coefficient of variation” can be defined and used as measures of the dispersion of test or measurement results under repeatability conditions.
[SOURCE: ISO 3534‑2:2006, 3.3.7]
3.9
reproducibility
precision (3.5) under reproducibility conditions (3.10)
Note 1 to entry: Reproducibility can be expressed quantitatively in terms of the dispersion characteristics of the results.
Note 2 to entry: Results are usually understood to be corrected results.
[SOURCE: ISO 3534‑2:2006, 3.3.10]
3.10
reproducibility conditions
observation conditions where independent test/measurement results are obtained with the same method on identical test/measurement items in different test or measurement facilities with different operators using different equipment
[SOURCE: ISO 3534‑2:2006, 3.3.11]

3.11
reproducibility standard deviation
standard deviation of test results or measurement results obtained under reproducibility conditions (3.10)
Note 1 to entry: It is a measure of the dispersion of the distribution of test or measurement results under reproducibility conditions.
Note 2 to entry: Similarly, “reproducibility variance” and “reproducibility coefficient of variation” can be defined and used as measures of the dispersion of test or measurement results under reproducibility conditions.
[SOURCE: ISO 3534‑2:2006, 3.3.12]
3.12
standard uncertainty
uncertainty (3.14) of the result of a measurement expressed as a standard deviation
[SOURCE: ISO/IEC Guide 98‑3:2008, 2.3.1]
3.13
trueness
closeness of agreement between the expectation of a test result or a measurement result and a true value
Note 1 to entry: The measure of trueness is usually expressed in terms of bias (3.1).
Note 2 to entry: Trueness is sometimes referred to as “accuracy of the mean”. This usage is not recommended.
Note 3 to entry: In practice, the accepted reference value is substituted for the true value.
[SOURCE: ISO 3534‑2:2006, 3.3.3]
3.14
uncertainty
〈measurement〉 parameter, associated with the result of a measurement, which characterizes the dispersion of the values that could reasonably be attributed to the measurand
Note 1 to entry: The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence.
Note 2 to entry: Uncertainty of measurement comprises, in general, many components. Some of these components may be estimated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. Other components, which also can be characterized by standard deviations, are estimated from assumed probability distributions based on experience or other information.
Note 3 to entry: It is understood that the result of the measurement is the best estimate of the value of the measurand, and that all components of uncertainty, including those arising from systematic effects such as components associated with corrections and reference standards, contribute to the dispersion.
[SOURCE: ISO/IEC Guide 98‑3:2008, 2.2.3]
3.15
uncertainty budget
list of sources of uncertainty (3.14) and their associated standard uncertainties, compiled with a view to evaluating a combined standard uncertainty (3.2) associated with a measurement result
Note 1 to entry: The list often includes additional information such as sensitivity coefficients (change of result with change in a quantity affecting the result), degrees of freedom for each standard uncertainty, and an identification of the means of estimating each standard uncertainty in terms of a Type A or Type B evaluation (see ISO/IEC Guide 98-3).

4 Symbols
a: coefficient indicating an intercept in the empirical relationship ŝR = a + bm
B: laboratory component of bias
b: coefficient indicating a slope in the empirical relationship ŝR = a + bm
c: coefficient in the empirical relationship ŝR = c·m^d
ci: sensitivity coefficient ∂y/∂xi
d: coefficient indicating an exponent in the empirical relationship ŝR = c·m^d
e: random error under repeatability conditions
k: numerical factor used as a multiplier of the combined standard uncertainty u in order to obtain an expanded uncertainty U
l: laboratory number
m: mean value of the measurements
N: number of contributions included in combined uncertainty calculations
n′: number of contributions incorporated in combined uncertainty calculations in addition to collaborative study data
nl: number of replicates by laboratory l in the study of a certified reference material
nr: number of replicate measurements
p: number of laboratories
Q: number of test items from a larger batch
q: number of assigned values by consensus during a collaborative study
rij: correlation coefficient between xi and xj
sb: between-group component of variance expressed as a standard deviation
sD: estimated, or experimental, standard deviation of results obtained by repeated measurement on a reference material used for checking control of bias
sinh: standard deviation associated with the inhomogeneity of the sample
sl: estimated repeatability standard deviation with νl degrees of freedom for laboratory l during verification of repeatability
sL: experimental or estimated inter-laboratory standard deviation
ŝL: adjusted estimate of standard deviation associated with B where sL is dependent on the response
sr: estimate of intra-laboratory standard deviation; the estimated standard deviation for e
s′r: adjusted estimate of intra-laboratory standard deviation, where the contribution is dependent on the response
sR: estimated reproducibility standard deviation
s′R: estimate of the reproducibility standard deviation adjusted for the laboratory estimate of repeatability standard deviation
ŝR: adjusted estimate of reproducibility standard deviation calculated from an empirical model, where the contributions are dependent on the response
sw: estimate of intra-laboratory standard deviation derived from replicates or other repeatability studies
sδ̂: estimated standard deviation of the bias δ̂ measured in a collaborative study
s(Δy): laboratory standard deviation of differences during a comparison of a routine method with a definitive method or with values assigned by consensus
u(δ̂): uncertainty associated with δ due to the uncertainty of estimating δ by measuring a reference measurement standard or reference material with certified value μ̂
u(μ̂): uncertainty associated with the certified value μ̂
u(xi): uncertainty associated with the input value xi; also uncertainty associated with x′i where xi and x′i differ only by a constant
u(y): combined standard uncertainty associated with y
ui(y): contribution to the combined standard uncertainty in y associated with the value xi
u(yi): combined standard uncertainty associated with result or assigned value yi
u(Y): combined standard uncertainty for the result Y = f(y1, y2, ...)
uinh: uncertainty associated with sample inhomogeneity
U: expanded uncertainty, equal to k times the standard uncertainty u
U(y): expanded uncertainty in y
xi: value of the ith input quantity in the determination of a result
x′i: deviation of the ith input value from the nominal value of xi
Y: combined result formed as a function of other results yi
yi: result for test item i from the definitive method during a comparison of methods, or assigned value in a comparison with values assigned by consensus
ŷi: result for test item i from the routine test method during a comparison of methods
Δ: laboratory bias
Δl: estimate of bias of laboratory l, equal to the laboratory mean, m, minus the certified value μ̂
Δ̄y: mean laboratory bias during a comparison of a routine method with a definitive method or with values assigned by consensus
δ: bias intrinsic to the measurement method in use
δ̂: estimated or measured bias
μ: unknown expectation of the ideal result
μ̂: certified value of a reference material
σ0: standard deviation for proficiency testing
σD: true value of the standard deviation of results obtained by repeated measurement on a reference material used for checking control of bias
σL: inter-laboratory standard deviation; standard deviation of B
σr: intra-laboratory standard deviation; standard deviation of e
σw: within-group standard deviation
σw0: standard deviation required for adequate performance (see ISO Guide 33)
νeff: effective degrees of freedom for the standard deviation of, or uncertainty associated with, a result yi
νi: degrees of freedom associated with the ith contribution to uncertainty
νl: degrees of freedom associated with an estimate sl of the standard deviation for laboratory l during verification of repeatability
5 Principles
5.1 Individual results and measurement process performance
5.1.1 Measurement uncertainty relates to individual results. Repeatability, reproducibility and bias, by contrast, relate to the performance of a measurement or testing process. For studies under ISO 5725 series, the measurement or testing process will be a single measurement method, used by all laboratories taking part in the study. Note that for the purposes of this document, the measurement method is assumed to be implemented in the form of a single detailed measurement procedure (as defined in ISO/IEC Guide 99:2007, 2.6). It is implicit in this document that process-performance figures derived from method-performance studies are relevant to all individual measurement results produced by the process. It will be seen that this assumption requires supporting evidence in the form of appropriate quality control and assurance data for the measurement process (Clause 7).
5.1.2 It will be seen below that differences between individual test items may additionally need to be taken into account, but, with that caveat, it is unnecessary to undertake individual and detailed uncertainty studies for every test item for a well-characterized and stable measurement process.
5.2 Applicability of reproducibility data
The application of this document is based on two principles.
— First, the reproducibility standard deviation obtained in a collaborative study is a valid basis for measurement uncertainty evaluation (see A.2.1).
— Second, effects not observed within the context of the collaborative study shall be demonstrably negligible or explicitly allowed for. The latter principle is implemented by an extension of the basic model used for collaborative study (see A.2.3).

5.3 Basic equations for the statistical model
5.3.1 The statistical model on which this document is based is formulated as in Formula (1):
y = μ + δ + B + Σ ci·x′i + e   (1)

where

y    is the measurement result, assumed to be calculated from an appropriate function;
μ    is the (unknown) expectation of ideal results;
δ    is a term representing bias intrinsic to the measurement method;
B    is the laboratory component of bias;
x′i  is the deviation from the nominal value of xi;
ci   is the sensitivity coefficient, equal to ∂y/∂xi;
e    is the random error term under repeatability conditions.

B and e are assumed to be normally distributed, with variances σL² and σr², respectively. These terms form the model used in ISO 5725-2 for the analysis of collaborative study data.
Since the observed standard deviations of method bias, δ, laboratory bias, B, and random error, e, are overall measures of dispersion under the conditions of the collaborative study, the summation Σ ci·x′i is over those effects subject to deviations other than those incorporated in δ, B or e, and the summation accordingly provides a method for incorporating effects of operations that are not carried out in the course of a collaborative study.
Examples of such operations include the following:
a) preparation of the test item carried out in practice for each test item, but carried out prior to circulation in the case of the collaborative study;
b) effects of sub-sampling in practice when test items subjected to collaborative study were, as is common, homogenized prior to the study.
The x′i are assumed to be normally distributed with expectation zero and variance u²(xi).
The rationale for this model is presented in detail in Annex A for information.
NOTE Error is generally defined as the difference between a reference value and a result. In the GUM, “error” (a value) is clearly differentiated from “uncertainty” (a dispersion of values). In uncertainty evaluation, however, it is important to characterize the dispersion due to random effects and to include them in an explicit model. For the present purpose, this is achieved by including “error terms” with zero expectation as in Formula (1).
5.3.2 Given the model described by Formula (1), the standard uncertainty u(y) associated with an observation can be estimated using Formula (2):
u²(y) = u²(δ̂) + sL² + Σ ci²·u²(xi) + sr²   (2)

where

sL²    is the estimated variance of B;
sr²    is the estimated variance of e;
u(δ̂)   is the standard uncertainty associated with δ due to the uncertainty of estimating δ by measuring a reference measurement standard or reference material with certified value μ̂;
u(xi)  is the standard uncertainty associated with xi.

Given that the reproducibility standard deviation sR is given by sR² = sL² + sr², sR² can be substituted for sL² + sr² and Formula (2) is reduced to Formula (3):

u²(y) = u²(δ̂) + sR² + Σ ci²·u²(xi)   (3)
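For illustration, a minimal Python sketch of the combination in Formula (3), using invented values for sR, u(δ̂) and a single additional effect ci·u(xi), might look like this:

```python
import math

def combined_standard_uncertainty(s_R, u_bias, extra_terms):
    """Formula (3): u(y) = sqrt(u^2(delta_hat) + s_R^2 + sum of (c_i * u(x_i))^2)."""
    total = u_bias ** 2 + s_R ** 2
    total += sum((c_i * u_x_i) ** 2 for c_i, u_x_i in extra_terms)
    return math.sqrt(total)

# Invented values: reproducibility 0.15, trueness-check uncertainty 0.04 and
# one additional effect with sensitivity coefficient 1.0 and standard uncertainty 0.05.
print(combined_standard_uncertainty(0.15, 0.04, [(1.0, 0.05)]))
```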
5.4 Repeatability data
It will be seen that repeatability data are used in this document primarily as a check of precision, which, in conjunction with other tests, confirms that a particular laboratory may apply reproducibility and trueness data in its evaluation of uncertainty. Repeatability data are also employed in the calculation of the reproducibility component of uncertainty (see 7.3 and Clause 11).
6 Evaluating uncertainty using repeatability, reproducibility and trueness estimates
6.1 Procedure for evaluating measurement uncertainty
The principles on which this document is based (see 5.1) lead to the following procedure for evaluating measurement uncertainty.
a) Obtain estimates of the repeatability, reproducibility and trueness of the method in use from published information about the method.
b) Establish whether the laboratory bias for the measurements is within that expected on the basis of the data obtained in a).
c) Establish whether the precision attained by current measurements is within that expected on the basis of the repeatability and reproducibility estimates obtained in a).
d) Identify any influences on the measurement that were not adequately covered in the studies referenced in a), and quantify the variance that could arise from these effects, taking into account the sensitivity coefficients and the uncertainties for each influence.
e) Where the bias and precision are under control, as demonstrated in b) and c), combine the reproducibility estimate [a)] with the uncertainty associated with trueness [a) and b)] and the effects of additional influences [d)] to form a combined uncertainty estimate.
These different steps are described in more detail in Clause 7 to Clause 11.
NOTE This document assumes that where bias is not under control, corrective action is being taken to bring the process under such control.

6.2 Differences between expected and actual precision
Where the precision differs in practice from that expected from the studies in 6.1 a), the associated contributions to uncertainty should be adjusted. 8.5 describes adjustments to reproducibility estimates for the common case where the precision is approximately proportional to level of response.
7 Establishing the relevance of method performance data to measurement results from a particular measurement process
7.1 General
The results of collaborative study yield performance indicators (sR, sr) and, in some circumstances, a method bias estimate, which form a “specification” for the method performance. In adopting the method for its specified purpose, a laboratory is normally expected to demonstrate that it is meeting this “specification”. In most cases, this is achieved by studies intended to verify control of repeatability (see 7.3) and of the laboratory component of bias (see 7.2), and by continued performance checks [quality control and assurance (see 7.4)].
7.2 Demonstrating control of the laboratory component of bias
7.2.1 General requirements
7.2.1.1 A laboratory should demonstrate, in its implementation of a method, that bias is under control, that is, the laboratory component of bias is within the range expected from the collaborative study. In the following descriptions, it is assumed that bias checks are performed on materials with reference values closely similar to the items actually under routine test. Where the materials used for bias checks do not have reference values close to those of the materials routinely tested, the resulting uncertainty contributions should be amended in accordance with the provisions of 8.4 and 8.5.
7.2.1.2 In general, a check on the laboratory component of bias constitutes a comparison between laboratory results and some reference value(s), and constitutes an estimate of B. Formula (2) shows that the uncertainty associated with variations in B is represented by sL, itself included within sR. However, because the bias check is itself uncertain, the uncertainty of the comparison in principle increases the uncertainty of results obtained in future applications of the method. For this reason, it is important to ensure that the uncertainty associated with the bias check is small compared to sR (ideally less than 0,2 sR) and the following guidance accordingly assumes negligible uncertainties associated with the bias check. Where this is the case and no evidence of an excessive laboratory component of bias is found, Formula (3) applies without change. Where the uncertainties associated with the bias check are large, it is prudent to increase the uncertainty estimated on the basis of Formula (3), for example by including additional terms in the uncertainty budget (3.15).
Where the method is known from collaborative trueness studies to have non-negligible bias, the known bias of the method should be taken into account in assessing laboratory bias; for example, by correcting the results for known method bias.
7.2.2 Methods of demonstrating control of the laboratory component of bias
7.2.2.1 General
Bias control may be demonstrated, for example, by any of the following methods. For consistency, the same general criteria are used for all tests for bias in this document. More stringent tests may be used.
7.2.2.2 Study of a certified reference material or measurement standard
A laboratory l should perform nl replicate measurements on the reference standard under repeatability conditions, to form an estimate Δl (equal to the laboratory mean, m, minus the certified value, μ̂) of bias on this material. Where practical, nl should be chosen such that the uncertainty √(sw²/nl) < 0,2 sR. Note that this reference standard is not, in general, the same measurement standard as that used in assessing trueness for the method. Further, Δl is generally not equal to B. Following ISO Guide 33 with appropriate changes of symbols, the measurement process is considered to be performing adequately if

|Δl| < 2σD   (4)

σD in Formula (4) is estimated by sD, given by Formula (5):

sD² = sL² + sw²/nl   (5)

where

nl   is the number of replicates by laboratory l;
sw   is the intra-laboratory standard deviation for the nl replicates or derived from other repeatability studies;
sL   is the inter-laboratory standard deviation derived from a collaborative study.
Compliance with the criterion in Formula (4) is taken to be confirmation that the laboratory component of bias B is within the population of values represented in the collaborative study. Note that the reference material or standard is used here as an independent check, or control material, and not as a calibrant.
NOTE 1 A laboratory is free to adopt a criterion more stringent than Formula (4), either by using a factor smaller than 2 or by implementing an alternative and more sensitive test for bias.
NOTE 2 This procedure assumes that the uncertainty associated with the reference value is small compared to σD.
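As an informal illustration of this check, the following Python sketch applies Formula (5) and the criterion of Formula (4) to invented replicate data and invented collaborative-study estimates sL and sw:

```python
import math

def bias_check_crm(results, certified_value, s_L, s_w):
    """Estimate the laboratory bias on a reference material and apply
    Formula (5) for s_D and the criterion |Delta_l| < 2*sigma_D of Formula (4)."""
    n_l = len(results)
    delta_l = sum(results) / n_l - certified_value   # laboratory mean minus certified value
    s_D = math.sqrt(s_L ** 2 + s_w ** 2 / n_l)       # Formula (5)
    return delta_l, s_D, abs(delta_l) < 2 * s_D      # Formula (4) criterion

# Invented data: five replicates on a material with certified value 10.00,
# with s_L = 0.12 and s_w = 0.08 taken from the collaborative study.
print(bias_check_crm([10.05, 9.98, 10.11, 10.02, 10.07], 10.00, 0.12, 0.08))
```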
7.2.2.3 Comparison with a definitive test method of known uncertainty
A laboratory l should test a suitable number nl of test items using both the definitive method and the test method in use in the laboratory, to generate nl pairs of values (yi, ŷi), where yi is the result of the definitive method for test item i, and ŷi is the value obtained from the routine test method for test item i. The laboratory should then calculate its mean bias Δ̄y using Formula (6) and the standard deviation s(Δy) of the differences as in Formula (7):

Δ̄y = (1/nl) Σi (ŷi − yi)   (6)

s(Δy) = √[Σi (Δyi − Δ̄y)²/(nl − 1)]   (7)

where Δyi = ŷi − yi and the sums run over i = 1, ..., nl.

Where practical, nl should be chosen so that the standard deviation √[s²(Δy)/nl] < 0,2 sR. By analogy with Formula (4) and Formula (5), the measurement process is considered to be performing adequately if |Δ̄y| < 2 sD, where sD² = sL² + s²(Δy)/nl. In this case, Formula (3) is used without change.
NOTE 1 A laboratory is free to adopt a more stringent criterion than |Δ̄y| < 2 sD, either by using a coverage factor smaller than 2 or by implementing an alternative and more sensitive test for bias.
NOTE 2 This procedure assumes that the standard uncertainty associated with the reference method is small compared to σD and that the deviations Δyi = ŷi − yi can be assumed to arise from a population with approximately constant variance.
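Again as an informal illustration, a Python sketch of this comparison with a definitive method, using invented paired results and an invented sL, might read:

```python
import math
import statistics

def bias_check_definitive(routine, definitive, s_L):
    """Apply Formulas (6) and (7) and the criterion |mean bias| < 2*s_D,
    with s_D^2 = s_L^2 + s^2(Delta_y)/n_l."""
    n_l = len(routine)
    d = [y_hat - y for y_hat, y in zip(routine, definitive)]   # Delta_y_i = y_hat_i - y_i
    mean_bias = statistics.fmean(d)                            # Formula (6)
    s_dy = statistics.stdev(d)                                 # Formula (7), (n_l - 1) denominator
    s_D = math.sqrt(s_L ** 2 + s_dy ** 2 / n_l)
    return mean_bias, s_dy, abs(mean_bias) < 2 * s_D

# Invented paired results for six test items and s_L = 0.10 from the collaborative study.
print(bias_check_definitive([5.2, 7.9, 3.1, 6.4, 8.8, 4.5],
                            [5.1, 8.0, 3.0, 6.5, 8.6, 4.4], 0.10))
```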
7.2.2.4 Comparison with other laboratories using the same method
If a testing laboratory l participates in additional collaborative exercises (for example, proficiency testing as defined in ISO/IEC 17043) from which it may estimate a bias, the data may be used to verify control of bias. There are two likely scenarios.
a) The exercise involves testing a measurement standard or reference material with an independently assigned value and uncertainty. The procedure of 7.2.2.2 then applies exactly.
b) The comparison generates q (≥ 1) assigned values y1, y2, ..., yq by consensus. The testing laboratory, whose results are represented by ŷ1, ŷ2, ..., ŷq, should then calculate its mean bias Δ̄y in accordance with Formula (8) and the standard deviation s(Δy) with respect to the consensus means as in Formula (9):

Δ̄y = (1/q) Σi (ŷi − yi)   (8)

s(Δy) = √[Σi (Δyi − Δ̄y)²/(q − 1)]   (9)

where Δyi = ŷi − yi and the sums run over i = 1, ..., q.
The measurement process is considered to be performing adequately if |Δ̄y| < 2 sD, where sD² = sL² + s²(Δy)/q. In this case, Formula (3) is used without change.
NOTE 1 This procedure assumes that the consensus value is based on a number of results that is large compared to q, leading to a negligible uncertainty associated with the assigned value, and that the deviations Δyi can be considered to be drawn from a population with approximately constant variance.
In some proficiency schemes, all returned results ŷi are converted to z-scores, zi = (ŷi − yi)/σ0, by subtracting the assigned value yi and dividing by the standard deviation σ0 for proficiency testing (see ISO/IEC 17043). Where this is the case and the standard deviation for proficiency testing is less than or equal to sR for the method, a mean z-score between ±2/√q for q assigned values provides sufficient evidence of bias control. This is convenient to calculate, and is less sensitive to the assumption of constant variance in Note 1, but it should be noted that it is usually a more stringent criterion than that described in 7.2.2.4. The laboratory is free to use a more stringent criterion (see Note 2), but the calculation described in 7.2.2.4 is necessary for exact equivalence.
NOTE 2 A laboratory is free to use a more stringent criterion than that described in 7.2.2.4.
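For the proficiency-testing route, an informal Python sketch of the mean z-score check, assuming the ±2/√q criterion given above and invented scheme data, might read:

```python
import math
import statistics

def mean_z_score_check(lab_results, assigned_values, sigma_0):
    """Mean of z_i = (y_hat_i - y_i)/sigma_0 over q rounds, compared with 2/sqrt(q)."""
    q = len(assigned_values)
    z = [(y_hat - y) / sigma_0 for y_hat, y in zip(lab_results, assigned_values)]
    mean_z = statistics.fmean(z)
    return mean_z, abs(mean_z) < 2 / math.sqrt(q)

# Invented results from four proficiency-test rounds with sigma_0 = 0.2.
print(mean_z_score_check([10.1, 9.8, 10.3, 9.9], [10.0, 10.0, 10.2, 10.0], 0.2))
```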

7.2.3 Detection of significant laboratory component of bias
As noted in the Scope, this document is applicable only where the laboratory component of bias is demonstrably under control. Where excessive bias is detected, it is assumed that action will be taken to bring the bias within the required range before proceeding with measurements. Such action will typically involve investigation and elimination of the cause of the bias.
7.3 Verification of repeatability
7.3.1 The test laboratory l should show that its repeatability is consistent with the repeatability standard deviation obtained in the course of the collaborative exercise. The demonstration of consistency should be achieved by replicate analysis of one or more suitable test materials, to obtain (by pooling results, if necessary) a repeatability standard deviation sl with νl degrees of freedom. The values of sl should be compared, using an F-test at the 95 % level of confidence, if necessary, with the repeatability standard deviation sr derived from the collaborative study. Where practical, sufficient replicates should be taken to obtain νl ≥ 15.
7.3.2 If sl is found to be significantly greater than sr, the laboratory concerned should either identify and correct the causes or use sl in place of sr in all uncertainty estimates calculated using this document. Note particularly that this will involve an increase in the estimated value of the reproducibility standard deviation sR, as sR = √(sr² + sL²) is replaced by s′R = √(sl² + sL²), where s′R is the adjusted estimate of the reproducibility standard deviation. Conversely, where sl is significantly smaller than sr, the laboratory may also use sl in place of sr, giving a smaller estimate of uncertainty.
In all precision studies, it is important to confirm that the data are free from unexpected trends and to check whether the standard deviation sw is constant for different test items. Where the standard deviation sw is not constant, it may be appropriate to assess precision separately for each different class of items, or to derive a general model (such as in 8.5) for the dependence.
NOTE Where a specific value of precision is required, ISO Guide 33 provides details of a test based on comparison of sw² with σw0², with σw0 set to the required precision value.
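An informal Python sketch of the repeatability check in 7.3, using a one-sided F-test at the 95 % level of confidence, is given below; the value and degrees of freedom assumed for sr are invented for the example:

```python
from scipy import stats

def repeatability_check(s_l, nu_l, s_r, nu_r, alpha=0.05):
    """One-sided F-test of s_l^2 against s_r^2 at the (1 - alpha) level of confidence."""
    f_observed = (s_l / s_r) ** 2
    f_critical = stats.f.ppf(1 - alpha, nu_l, nu_r)
    return f_observed, f_critical, f_observed <= f_critical

# Invented values: s_l = 0.09 with 15 degrees of freedom, compared with
# s_r = 0.08 and an assumed 60 degrees of freedom from the collaborative study.
print(repeatability_check(0.09, 15, 0.08, 60))
```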
7.4 Continued verification of performance
In addition to preliminary estimation of bias and precision, the laboratory should take due measures to ensure that the measurement procedure remains in a state of statistical control. In particular, this will involve the following:
— appropriate quality control, including regular checks on bias and precision. These checks may use any relevant stable, homogeneous test item or material. Use of control charts is strongly recommended (see ISO 5725-5 and ISO 5725-6);
— quality assurance measures, including the use of appropriately trained and qualified staff operating within a suitable quality system.
Where control charts are in use, the standard deviation for quality control observations over a period of time should normally be less than the value of s′R calculated in 7.3.2 if precision and bias are under adequate control.
8 Establishing relevance to the test item
8.1 General
In a collaborative study or an estimation of intermediate measures of precision under ISO 5725-2 and ISO 5725-3, it is normal to measure values on homogeneous materials or test items of a small number of types. It is also common practice to distribute prepared materials. Routine test items, on the other hand, may vary widely, and may require additional treatment prior to testing. For example, environmental test samples are frequently supplied dried, finely powdered and homogenized for collaborative study purposes; routine samples are wet, inhomogeneous and coarsely divided. It is accordingly necessary to investigate, and if necessary allow for, these differences.
8.2 Sampling
8.2.1 Inclusion of sampling process
Collaborative studies rarely include a sampling step; if the method used in-house involves sub-sampling, or the procedure as used routinely is estimating a bulk property from a small sample, then the effects of sampling should be investigated. It may be helpful to refer to sampling documentation such as ISO 11648-1 or other standards for specific purposes.
8.2.2 Inhomogeneity
Inhomogeneity is typically investigated experimentally via homogeneity studies that can yield a variance estimate, usually from an analysis of variance (ANOVA) of replicate results on several test items, in which the inter-item component of variance sinh² represents the effect of inhomogeneity. Where test materials are found to be significantly inhomogeneous (after any prescribed homogenization), this variance estimate should be converted directly to a standard uncertainty (i.e. uinh = sinh). In some circumstances, particularly when the inhomogeneity standard deviation is found from a sample of Q test items from a larger batch and the mean result will be applied to other items in the batch, the uncertainty contribution is based on the prediction interval, i.e. uinh = sinh·√[(Q + 1)/Q]. It is also possible to estimate inhomogeneity effects theoretically, using knowledge of the sampling process and appropriate assumptions about the sampling distribution.
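As an informal illustration of such a homogeneity study, a Python sketch of a balanced one-way ANOVA yielding sinh and uinh, using invented duplicate results, might read:

```python
import statistics

def inhomogeneity_uncertainty(items, Q=None):
    """Balanced one-way ANOVA on replicate results for several test items.
    The between-item component of variance gives s_inh; where Q items were
    examined from a larger batch, u_inh = s_inh * sqrt((Q + 1)/Q)."""
    k = len(items)                      # number of test items examined
    n = len(items[0])                   # replicates per item (balanced design assumed)
    item_means = [statistics.fmean(g) for g in items]
    grand_mean = statistics.fmean(item_means)
    ms_between = n * sum((m - grand_mean) ** 2 for m in item_means) / (k - 1)
    ms_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(items, item_means)) / (k * (n - 1))
    s_inh = max((ms_between - ms_within) / n, 0.0) ** 0.5   # between-item component
    if Q is None:
        return s_inh                     # u_inh = s_inh
    return s_inh * ((Q + 1) / Q) ** 0.5  # prediction-interval form

# Invented duplicate results on four test items drawn from a batch of Q = 10.
data = [[10.1, 10.3], [9.8, 9.9], [10.4, 10.2], [10.0, 10.1]]
print(inhomogeneity_uncertainty(data), inhomogeneity_uncertainty(data, Q=10))
```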
8.3 Sample preparation and pre-treatment
In most studies, samples are homogenized, and may additionally be stabilized, before distribution. It may be necessary to investigate and allow for the effects of the particular pre-treatment procedures applied in-house. Typically, such investigations establish the effect of the procedure on the measurement result by studies on materials with approximately or accurately established properties. The effect may be a change in dispersion or a systematic effect. Significant changes in dispersion should be accommodated by adding an appropriate term to the uncertainty budget (assuming the effect is to increase the dispersion). Where a significant systematic effect is found, it is most convenient to establish an upper limit for the effect. Following the recommendations of the GUM, this may be treated as a limit of a rectangular or other appropriate finite symmetric distribution, and a standard uncertainty estimated by division of the half-width of the distribution by the appropriate factor.
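Where a systematic pre-treatment effect is bounded by an upper limit, a minimal sketch of the rectangular-distribution conversion described above, with an invented limit, is:

```python
import math

def standard_uncertainty_from_limits(half_width):
    """Rectangular distribution: standard uncertainty = half-width / sqrt(3)."""
    return half_width / math.sqrt(3)

# Invented upper limit of 0.06 units for the effect of in-house pre-treatment.
print(standard_uncertainty_from_limits(0.06))
```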
8.4 Changes in test-item type
The uncertainty arising from changes in type or composition of test items compared to those used in the collaborative study should, where relevant, be investigated. Typically, such effects should either be predicted on the basis of established effects arising from bulk properties (which then lead to uncertainties evaluated using the basic approach in the GUM) or investigated by systematic or random change in test-item type or composition (see Annex B).
8.5 Variation of uncertainty with level of response
8.5.1 Adjusting sR
It is common to find that some or most contributions to uncertainty for a given measurement are dependent on the value of the measurand. ISO 5725-2 considers three simple cases where the reproducibility standard deviation for a particular positive value m is approximately described by one of the models, as shown in Formula (10), Formula (11) and Formula (12):

ŝR = bm   (10)

ŝR = a + bm   (11)

ŝR = c·m^d   (12)

where

ŝR   is the adjusted reproducibility standard deviation calculated from the approximate model;
a, b, c and d   are empirical coefficients derived from a study of five or more different test items with different mean responses m (a, b and c are positive).

Where one of Formula (10) to Formula (12) applies, the standard uncertainty should be based on a reproducibility estimate calculated using the appropriate model.
Where the provisions of 7.3 apply, ŝR should also reflect the changed contribution of the repeatability term sr. For most purposes, a simple proportional change in ŝR should suffice, that is, as given in Formula (13):

s′R = (a + bm)·√[(sL² + sl²)/(sL² + sw²)]   (13)

where s′R has the same meaning as in 7.3.
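As an informal illustration, the following Python sketch fits the linear model of Formula (11) to invented study data and applies the proportional adjustment of Formula (13) as reconstructed above; all numerical values are invented:

```python
import math
import statistics

def fit_linear_s_R(mean_levels, s_R_values):
    """Least-squares fit of the empirical model s_R_hat = a + b*m, Formula (11)."""
    slope, intercept = statistics.linear_regression(mean_levels, s_R_values)
    return intercept, slope          # a and b in Formula (11)

def adjusted_s_R(a, b, m, s_L, s_l, s_w):
    """Level-dependent reproducibility with the proportional adjustment of Formula (13)."""
    return (a + b * m) * math.sqrt((s_L ** 2 + s_l ** 2) / (s_L ** 2 + s_w ** 2))

# Invented reproducibility standard deviations at five response levels.
levels = [2.0, 5.0, 10.0, 20.0, 50.0]
s_R = [0.12, 0.21, 0.38, 0.71, 1.65]
a, b = fit_linear_s_R(levels, s_R)
print(a, b, adjusted_s_R(a, b, 15.0, s_L=0.25, s_l=0.20, s_w=0.18))
```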
8.5.2 Changes in other contributions to uncertainty
In general, where any contribution to uncertainty changes with measured response in a predictable manner, the relevant standard uncertainty in y should be adjusted accordingly.
NOTE Where many contributions to uncertainty are strictly proportional to y, it is often convenient to express all significant effects in terms of multiplicative effects on y and all standard uncertainties in the form of relative standard deviations.
9 Additional factors
Clause 8 considers the main factors that are likely to change between collaborative study and routine testing. It is possible that other effects may operate in particular instances, either because the controlling variables were fortuitously or deliberately constant during the collaborative exercise, or because the full range of conditions attainable in routine practice was not adequately covered within the selection during the collaborative study.
The effects of factors which are held constant or which vary insufficiently during collaborative studies should be estimated separately, either from experimental variation or by prediction from established theory. Where these effects are not negligible, the uncertainty associated with such factors should be estimated, recorded and combined with other contributions in the normal way [i.e. following the summation principle in Formula (3)].
10 General expression for combined standard uncertainty
Formula (3), taking into account the need to use the adjusted estimate ŝ_R² instead of s_R² to allow for factors discussed in Clause 8, leads to the general expression in Formula (14) for the estimation of the combined standard uncertainty u(y) associated with a result y:

u²(y) = ŝ_R² + u²(δ̂) + Σ_{i=1}^{n′} c_i²·u²(x_i)     (14)

where u(δ̂) is calculated as specified in Formula (15) [see also Formula (A.8)]:

u²(δ̂) = s_δ̂² + u²(μ̂) = [s_R² − s_r²·(1 − 1/n)]/p + u²(μ̂)     (15)

where
p is the number of laboratories;
n is the number of replicates in each laboratory;
u(μ̂) is the standard uncertainty associated with the certified value μ̂ used to estimate the bias in the collaborative study.
The variable u(B) does not appear in Formula (14) because s_L, the standard uncertainty associated with B, is already included in ŝ_R². The subscript "i" covers effects identified in Clause 7 and Clause 8 (assuming these have indices running contiguously from 1 to n′). Clearly, where any effects and uncertainties are small compared to s_R, they may, for most practical purposes, be neglected. For example, standard uncertainties less than 0,2 s_R lead to changes of under 0,02 s_R in the overall uncertainty estimate.
NOTE 1 Where all uncertainty contributions are expressed in the form of relative standard deviations or percentages as suggested in the Note to 8.5.2, Formula (14) and Formula (15) can be applied directly to the relative values and the resulting uncertainty u(y) will be obtained in the form of a relative standard deviation or percentage.
NOTE 2 When bias of the measurement method is considered to be negligibly small and the same procedure as an inter-laboratory collaborative study is applied in the measurement of test items, the combined standard uncertainty is u(y) = sR.
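As a non-normative numerical sketch of Formula (14) and Formula (15), with all figures invented for the example:

```python
import math

# Hypothetical inputs (illustrative only)
s_R, s_r = 0.28, 0.22      # collaborative-study reproducibility and repeatability
s_R_hat = 0.30             # reproducibility adjusted for the factors in Clause 8
p, n = 12, 2               # laboratories and replicates per laboratory in the study
u_mu = 0.05                # standard uncertainty of the certified value
extra = [0.06, 0.04]       # additional contributions |c_i|*u(x_i)

# Formula (15): uncertainty associated with the estimated bias
u_delta_sq = (s_R**2 - s_r**2 * (1 - 1/n)) / p + u_mu**2

# Formula (14): combined standard uncertainty
u_y = math.sqrt(s_R_hat**2 + u_delta_sq + sum(c**2 for c in extra))
print(f"u(delta_hat) = {math.sqrt(u_delta_sq):.3f}, u(y) = {u_y:.3f}")
```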
11 Uncertainty budgets based on collaborative study data
This document assumes essentially only one model for the results of a measurement or test: that is given in Formula (3). The evidence required to support continued reliance on the model may come from a variety of sources, but where the uncertainties associated with the tests involved remain negligible, Formula (3) is used. However, there are some different situations for which the form of Formula (3) changes slightly, particularly where the reproducibility or repeatability terms depend on the response. The uncertainty budget for the case where the uncertainty is essentially independent of the response over the range of interest is summarized in Table 1; the case where the uncertainty depends on the response is summarized in Table 2.
Table 1 — Uncertainty contributions independent of response

Effect | Standard uncertainty a associated with y | Comment
δ | u(δ̂) | Only included if the collaborative study incorporates a correction for bias and the uncertainty is non-negligible.
B | s_L | See Table 2.
e | s_r | If an average of n_r complete replicates of the method b is used in practice on a test item, the standard uncertainty associated with e becomes s_r/√n_r.
x_i | |c_i|·u(x_i) | See Clause 8 and Annex B.
a These standard uncertainties have the same units as y. They may also be expressed in relative terms (see Note to Clause 10).
b The method may itself mandate replication; n_r relates to repetition of the whole method including any such replication.
Table 2 — Uncertainty contributions dependent on response

Effect | Standard uncertainty a, b associated with y | Comment
δ | (∂y/∂δ̂)·u(δ̂) | Only included if the collaborative study incorporates a correction for bias and the uncertainty is non-negligible. The differential is included to cover cases where the correction is not a simple addition or subtraction.
B | ŝ_L = a_L + b_L·m | a_L and b_L are the coefficients of a presumed linear relationship between s_L and the mean response m, analogous to Formula (11). This form is applicable only when the dependence of s_L on m has been established. Where it has not, use the combined estimate associated with B and e in Table 1.
e | ŝ_r = a_r + b_r·m | a_r and b_r are the coefficients of a presumed linear relationship between s_r and the mean response m, analogous to Formula (11). If an average of n_r complete replicates of the method c is used in practice on a test item, the standard uncertainty associated with e becomes ŝ_r/√n_r. This form is applicable only when the dependence of s_r on m has been established. Where it has not, use the combined estimate associated with B and e in Table 1.
B, e | ŝ_R = b·m, or ŝ_R = a + b·m, or ŝ_R = c·m^d | a, b, c and d are the coefficients of the appropriate established relationship between s_R and the mean response m, as specified in Formula (10), Formula (11) or Formula (12). This combined estimate should be used instead of the separate estimates associated with B and e (see Table 1) when the separate dependencies of s_L and s_r on m have not been established.
x_i | |c_i|·u(x_i) | See Clause 8 and Annex B.
a These standard uncertainties have the same units as y. They may also be expressed in relative terms (see Note to Clause 10).
b The following assumes a simple linear dependence of the form in Formula (11).
c The method may itself mandate replication; n_r relates to repetition of the whole method, including any such replication.
12 Evaluation of uncertainty for a combined result
12.1 A “combined result” Y is formed from the results yi of a number of different tests, each characterized by collaborative study. For example, a calculation for “meat content” would typically combine a protein content, calculated from a nitrogen determination, with a fat and a moisture content, each determined by different standard methods.
12.2 Standard uncertainties u(yi) for each contributing result yi may be obtained by using the principles specified in this document, or directly by using Formula (A.1) or Formula (A.2), as appropriate. Where, as is often the case, the input values yi are independent, the combined standard uncertainty u(Y) for the result Y = g(y1, y2, ...) is given by Formula (16):
u²(Y) = Σ_i c_i²·u²(y_i)     (16)

Where the results y_i are not independent, due allowance should be made for correlation by reference to the GUM [which uses Formula (A.2)].
13 Expression of uncertainty information
13.1 General expression
Uncertainties may be expressed as combined standard uncertainties u(y) or as expanded uncertainties, U(y) = ku(y), where k is a coverage factor (see 13.2), following the principles of the GUM. It may also be convenient to express uncertainties in relative terms; for example, as a coefficient of variation or an expanded uncertainty expressed as a percentage of the reported result.
13.2 Choice of coverage factor
13.2.1 General
In evaluating expanded uncertainty, the following considerations are relevant in choosing the coverage factor, k.
13.2.2 Level of confidence desired
For most practical purposes, expanded uncertainties should be quoted to correspond approximately to a level of confidence of 95 %. However, the choice of level of confidence is influenced by a range of factors, including the criticality of application, and the consequences of incorrect results. These factors, together with any guidance or legal requirement relating to the application, should be given due consideration when choosing k.
13.2.3 Degrees of freedom associated with the estimate
13.2.3.1 For most practical purposes, when approximately 95 % confidence is required and the degrees of freedom in the dominant contributions to uncertainty are large (>10), the choice of k = 2 provides a sufficiently reliable indication of the likely range of values. However, there are circumstances in which this might lead to significant underestimation, notably where one or more significant term(s) in Formula (14) is/are estimated with fewer than seven degrees of freedom.
13.2.3.2 Where one such term ui(y) with νi degrees of freedom is dominant [an indicative level is ui(y) ≥ 0,7 u(y)], it is normally sufficient to take the effective degrees of freedom νeff associated with u(y) as νi.
13.2.3.3 Where several significant terms are of approximately equal size and all have limited degrees of freedom (i.e. νi << 10), apply the Welch-Satterthwaite equation [Formula (17)] to obtain the effective degrees of freedom νeff.
ν_eff = u⁴(y) / Σ_{i=1}^{N} [u_i⁴(y)/ν_i]     (17)
The value of k is then chosen from νeff by using the appropriate two-tailed value of Student’s t for the level of confidence required and νeff degrees of freedom. It is generally safest to round non-integer values of νeff downward to the next lower integer value.
NOTE In many fields of measurement and testing, the frequency of statistical outliers is sufficiently high compared to the expectation from the normal distribution to warrant extreme caution in extrapolating to high levels of confidence (>95 %) without good knowledge of the distribution concerned.
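A non-normative sketch of the Welch-Satterthwaite calculation in Formula (17) and the subsequent choice of k from Student's t; the uncertainty contributions and degrees of freedom are invented, and scipy is assumed to be available.

```python
import math
from scipy import stats

# Hypothetical standard uncertainty contributions to u(y) and their degrees of freedom
u_i = [0.21, 0.15, 0.09]
nu_i = [8, 5, 4]

u_y = math.sqrt(sum(u**2 for u in u_i))

# Formula (17): Welch-Satterthwaite effective degrees of freedom, rounded down
nu_eff = math.floor(u_y**4 / sum(u**4 / nu for u, nu in zip(u_i, nu_i)))

# Coverage factor for approximately 95 % confidence from two-tailed Student's t
k = stats.t.ppf(1 - 0.05 / 2, df=nu_eff)
print(f"nu_eff = {nu_eff}, k = {k:.2f}, U = {k * u_y:.3f}")
```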
14 Comparison of method performance figures and uncertainty data
14.1 Basic assumptions for comparison
Evaluation of measurement uncertainty in accordance with this document will provide a combined standard uncertainty which, while based primarily on reproducibility or intermediate precision estimates, makes due allowance for factors that do not vary during the study on which these precision estimates are based. In principle, the resulting combined standard uncertainty u(y) should be identical to that formed from a detailed mathematical model of the measurement process. A comparison between the two separate estimates, if available, forms a useful test of the reliability of either estimate. The test procedure in 14.2 is recommended.
Note, however, that the procedure is based on two important assumptions.
— First, however a combined standard uncertainty u(y) with ν_eff effective degrees of freedom is estimated, it follows the usual distribution for a standard deviation s with n − 1 degrees of freedom [i.e. (n − 1)·(s²/σ²) is distributed as χ² with n − 1 degrees of freedom]. This assumption permits the use of an ordinary F-test. However, because combined standard uncertainties may include uncertainties associated with terms from a variety of distributions, and also terms with different variances, the test should be treated as indicative and the level of confidence implied should be viewed with due caution.
— Second, the two combined standard uncertainty estimates to be compared are entirely independent. This is also unlikely in practice, as some factors may be common to both estimates. A more subtle effect is the tendency for judgements about uncertainties to be influenced by known inter-laboratory performance; it is assumed that due care is taken to avoid this effect. Where significant factors are common to two estimates of combined uncertainty, the two estimates will clearly be similar far more often than chance alone would dictate. In such cases, where the following test fails to find a significant difference, the result should not be taken as strong evidence for measurement model reliability.
14.2 Comparison procedure
Compare the two estimates u(y)1 and u(y)2, chosen such that u(y)1 is the larger of the two, with effective degrees of freedom ν1 and ν2, respectively, using a level of confidence α (e.g. for 95 % confidence, α = 0,05), as follows.
a) Calculate F = [u(y)₁/u(y)₂]².
b) Look up, or obtain from software, the one-sided upper critical value Fcrit = F(α/2, ν1, ν2). Where an upper and a lower value are given, take the upper value, which is always greater than 1.
c) If F > Fcrit, u(y)1 should be considered significantly greater than u(y)2.
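The comparison procedure of 14.2 can be sketched as follows; the estimates and degrees of freedom are invented, and scipy is assumed to be available for the critical value.

```python
from scipy import stats

# Hypothetical estimates; u1 is chosen as the larger of the two combined standard uncertainties
u1, nu1 = 0.32, 15          # larger estimate and its effective degrees of freedom
u2, nu2 = 0.24, 20          # smaller estimate and its effective degrees of freedom
alpha = 0.05                # for a level of confidence of 95 %

F = (u1 / u2) ** 2
F_crit = stats.f.ppf(1 - alpha / 2, dfn=nu1, dfd=nu2)   # upper critical value F(alpha/2, nu1, nu2)

if F > F_crit:
    print(f"F = {F:.2f} > F_crit = {F_crit:.2f}: u(y)_1 is significantly greater than u(y)_2")
else:
    print(f"F = {F:.2f} <= F_crit = {F_crit:.2f}: no significant difference detected")
```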
14.3 Reasons for differences
There may be a variety of reasons for a significant difference between combined standard uncertainty estimates. These include the following:
— genuine differences in performance between laboratories;
— failure of a model to include all the significant effects on the measurement;
— overestimation or underestimation of a significant contribution to the combined standard uncertainty.
Annex A
(informative)
Approaches to uncertainty evaluation
A.1 GUM approach
The Guide to the expression of uncertainty in measurement (GUM) provides a methodology for evaluating the measurement uncertainty associated with a result y from a model of the measurement process. The GUM methodology is based on the recommendations of the International Bureau of Weights and Measures (BIPM), see Reference [20]. These recommendations first recognize that contributions to uncertainty may be estimated either by the statistical analysis of a series of observations (“Type A evaluation”) or by any other means (“Type B evaluation”), for example using data such as published reference material or measurement standard uncertainties or, where necessary, professional judgement. Separate contributions, however estimated, are expressed in the form of standard deviations, and, where necessary, combined as such.
The GUM implementation of the BIPM recommendations begins with a measurement model of the form y = f(x1, x2, ..., xN), which relates the measurement result y to the input quantities xi. The GUM then gives the uncertainty u(y) for the case of independent input quantities as specified in Formula (A.1):
u²(y) = Σ_{i=1}^{N} c_i²·u²(x_i)     (A.1)

where
c_i is a sensitivity coefficient determined from c_i = ∂y/∂x_i, the partial derivative of y with respect to x_i;
u(x_i) and u(y) are standard uncertainties (that is, measurement uncertainties expressed in the form of standard deviations) in x_i and y, respectively.
Where the input quantities are not independent, the relationship is more complex, as specified in Formula (A.2):
u²(y) = Σ_{i=1}^{N} c_i²·u²(x_i) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} c_i·c_j·u(x_i, x_j)     (A.2)

where
u(x_i, x_j) is the covariance between x_i and x_j;
c_i and c_j are the sensitivity coefficients as described for Formula (A.1).
In practice, the covariance is often related to the correlation coefficient r_ij as specified in Formula (A.3):

u(x_i, x_j) = u(x_i)·u(x_j)·r_ij     (A.3)

where −1 ≤ r_ij ≤ 1.
In cases involving strong non-linearity in the measurement model, Formula (A.1) is expanded to include higher order terms; this issue is covered in more detail in the GUM.
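As a non-normative illustration of Formula (A.2) and Formula (A.3), the sketch below propagates uncertainty through a simple hypothetical product model y = x1·x2 with correlated inputs; all values are invented.

```python
import math

# Hypothetical model y = x1 * x2 with correlated input quantities
x1, x2 = 2.0, 5.0
u1, u2 = 0.02, 0.10
r12 = 0.3                      # assumed correlation coefficient

c1, c2 = x2, x1                # sensitivity coefficients dy/dx1 and dy/dx2

u12 = u1 * u2 * r12            # Formula (A.3): covariance from the correlation coefficient

# Formula (A.2): the i != j double sum counts each pair twice, hence the factor 2
u_y = math.sqrt(c1**2 * u1**2 + c2**2 * u2**2 + 2 * c1 * c2 * u12)
print(f"y = {x1 * x2:.2f}, u(y) = {u_y:.3f}")
```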
After calculation of the combined standard uncertainty using Formula (A.1) to Formula (A.3), an expanded uncertainty is calculated by multiplying u(y) by a coverage factor k, which may be chosen on the basis of the estimated degrees of freedom for u(y). This is dealt with in detail in Clause 13.
In general, it is implicit in the GUM approach that the input quantities are measured or assigned. Where effects arise that are not readily defined in terms of measurable quantities (such as operator effects), it is convenient either to form additional standard uncertainties u(y) that allow for such effects or to introduce additional variables into the expression f(x1, x2, ..., xN).
Because of the focus on individual input quantities, this approach is sometimes called a “bottom-up” approach to uncertainty evaluation.
The physical interpretation of u(y) is not entirely straightforward, since it may include terms which are estimated by judgement and u(y) may accordingly be best regarded as characterizing a “degree-of-belief” function, which may or may not be observable in practice. However, a more straightforward physical interpretation is provided by noting that the calculation performed to arrive at u(y) actually results in the standard deviation which would be obtained if all input variables were indeed to vary at random in the manner described by their assumed distributions. In principle, this would be observable and measurable under conditions in which all input quantities were allowed to vary at random.
A.2 Collaborative study approach
A.2.1 Basic model
Collaborative study design, organization and statistical treatment are described in detail in the ISO 5725 series. The simplest model underlying the statistical treatment of collaborative study data is given [using the same symbols as ISO 5725 (all parts)] in Formula (A.4):
y = m + B + e (A.4)
where
m
is the expectation for y;
B
is the laboratory component of bias under repeatability conditions, assumed to be normally distributed with standard deviation σL;
e
is the random error under repeatability conditions, assumed to be normally distributed with standard deviation σw.
Additionally, B and e are assumed to be uncorrelated.
The application of Formula (A.1) to this simple model gives Formula (A.5) for a single result y:
u²(y) = u²(B) + u²(e)     (A.5)
Noting that σ_L² and σ_w² are the variances associated with B and e, respectively, and that these are estimated by the between-laboratory variance s_L² and the repeatability variance s_r² obtained in an inter-laboratory study, so that u(B) = s_L and u(e) = s_r, gives Formula (A.6) for the combined standard uncertainty u(y) associated with the result:
u(y) = √(s_L² + s_r²)     (A.6)
By comparison with ISO 5725-2, Formula (A.6) is just the estimated reproducibility standard deviation sR.
Since this approach concentrates on the performance of the complete method, it is sometimes referred to as a “top-down” approach.
Note that each laboratory calculates its estimate of m from an equation y = f(x1, x2, ...) assumed to be the laboratory’s best estimate of the measurand value y. Now, if y = f(x1, x2, ...) is a complete measurement model used to describe the behaviour of the measurement system, it is expected that the variations characterized by sL and sr arise from variation in the quantities x1, ..., xn. If it is assumed that reproducibility conditions provide for random variation in all significant influence quantities, and taking into account the physical interpretation of u(y) above, it follows that u(y) in Formula (A.6) is an estimate of u(y) as described by Formula (A.1) or Formula (A.2).
The first principle on which this document is based is that the reproducibility standard deviation obtained in a collaborative study is a valid basis for measurement uncertainty evaluation.
A.2.2 Incorporating trueness data
Trueness is generally measured as bias with respect to an established reference value. In some collaborative studies, the trueness of the method with respect to a particular measurement system (usually the SI) is examined by study of a certified reference material (CRM) or measurement standard with a certified value μ̂ expressed in that system's units (ISO 5725-4). The resulting statistical model is specified by Formula (A.7):
y = μ̂ + δ + B + e     (A.7)

where
μ̂ is a reference value;
δ is the "method bias".
The collaborative study will lead to a measured bias, δ̂, with associated standard deviation, s_δ̂, calculated as specified in Formula (A.8):

s_δ̂ = √{[s_R² − s_r²·(1 − 1/n)]/p}     (A.8)

where
p is the number of laboratories;
n is the number of replicates in each laboratory.
The standard uncertainty u(δ̂) associated with that bias is given by Formula (A.9):

u²(δ̂) = s_δ̂² + u²(μ̂)     (A.9)

where u(μ̂) is the standard uncertainty associated with the certified value μ̂ used for trueness estimation in the collaborative exercise.
Where the bias estimated during the trial is included in the calculation of results in laboratories, the uncertainty associated with the estimated bias should, if not negligible, be included in the uncertainty budget.
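A short, non-normative sketch of Formula (A.8) and Formula (A.9), using invented collaborative-study figures:

```python
import math

# Hypothetical collaborative-study figures for a trueness experiment
s_R, s_r = 0.28, 0.22      # reproducibility and repeatability standard deviations
p, n = 11, 2               # participating laboratories and replicates per laboratory
u_mu = 0.04                # standard uncertainty of the CRM certified value

# Formula (A.8): standard deviation associated with the measured bias
s_delta = math.sqrt((s_R**2 - s_r**2 * (1 - 1/n)) / p)

# Formula (A.9): standard uncertainty associated with the bias
u_delta = math.sqrt(s_delta**2 + u_mu**2)
print(f"s_delta_hat = {s_delta:.3f}, u(delta_hat) = {u_delta:.3f}")
```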
A.2.3 Other effects — Combined model
In practice, of course, s_R and u(δ̂) do not necessarily include variation in all the effects that influence a measurement result. Some important factors are missing by the nature of the collaborative study, and some may be absent or under-estimated by chance or design. The second principle on which this document is based is that effects not observed within the context of the collaborative study shall be demonstrably negligible or explicitly allowed for.
This is most simply accomplished by considering the effects of deviations x′_i from the nominal values x_i required to provide the estimate of y, and assuming approximate linearity of effects. The combined model is then specified in Formula (A.10):

y = μ̂ + δ + B + Σ c_i·x′_i + e     (A.10)
where the summed term is over all effects other than those represented by B, δ and e.
Examples of such effects might include sampling effects, test item preparation, and variation in composition or type of individual test items. Strictly, this is a linearized form of the most general model; where necessary, it is possible to incorporate higher order terms or correlation terms exactly as described by the GUM.
Noting that centring x′_i has no effect on u(x_i), so that u(x′_i) = u(x_i), it follows that the standard uncertainty associated with y estimated from Formula (A.10) is given by Formula (A.11):

u²(y) = s_L² + s_r² + u²(δ̂) + Σ c_i²·u²(x_i)     (A.11)
where the summation is limited to those effects not covered by other terms.
In the context of method-performance evaluation, it may be noted here that intermediate precision conditions can also be described by Formula (A.10), though the number of terms in the summation would be correspondingly larger because fewer variables would be expected to vary randomly under intermediate conditions than under reproducibility conditions. In general, however, Formula (A.10) applies to any precision conditions subject to suitable incorporation of effects within the summation. In an extreme case, of course, where the conditions are such that the terms sr and sL are zero and uncertainty in overall bias is not determined, Formula (A.11) becomes identical to Formula (A.1).
There are two corollaries.
— First, it is necessary to demonstrate that the quantitative data available from the collaborative study are directly relevant to the test results under consideration.
— Second, even where the collaborative study data are directly relevant, additional studies and allowances may be necessary to establish a valid uncertainty estimate, making due allowance for additional effects [the x_i in Formula (A.10)]. In allowing for additional effects, it is assumed that Formula (A.1) will apply.
Finally, this document, in asserting that a measurement uncertainty evaluation may be reliably obtained from a consideration of repeatability, reproducibility and trueness data obtained from the procedures in ISO 5725 series, makes the same assumptions as ISO 5725 series.
a) Where reproducibility data are used, it is assumed that all laboratories are performing similarly. In particular, their repeatability precision for a given test item is the same, and the laboratory component of bias [represented by the term B in Formula (A.10)] is drawn from the same population as sampled in the collaborative study.
b) The test material(s) distributed in the study is/are homogeneous and stable.
A.3 Relationship between approaches
The foregoing discussion describes two apparently different approaches to the evaluation of uncertainty. The GUM approach, at one extreme, predicts the uncertainty in the form of a variance on the basis of variances associated with inputs to a mathematical model. The other uses the fact that, if those same influences vary representatively during the course of a reproducibility study, the observed variance is a direct estimate of the same uncertainty. In practice, the uncertainty values found by the different approaches are different for a variety of reasons, including the following:
a) incomplete mathematical models (i.e. the presence of unknown effects);
b) incomplete or unrepresentative variation of all influences during reproducibility assessment.
Comparison of the two different estimates is therefore useful as an assessment of the completeness of measurement models. Note, however, that observed repeatability or some other precision estimate is very often taken as a separate contribution to uncertainty, even in the GUM approach. Similarly, individual effects are usually at least checked for significance or quantified prior to assessing reproducibility. Practical uncertainty evaluation therefore often uses some elements of both extremes.
Where an uncertainty estimate is provided with a result to aid interpretation, it is important that the deficiencies in each approach be remedied. The possibility of incomplete models is, in practice, usually addressed by the provision of conservative estimates, the explicit addition of allowances for model uncertainty. In this document, the possibility of inadequate variation of input effects is addressed by the assessment of the additional effects. This amounts to a hybrid approach, combining elements of both “top-down” and “bottom-up” approaches.
Annex B
(informative)
Experimental uncertainty evaluation
B.1 Practical procedure for estimating sensitivity coefficients
Where an input quantity xi may be varied continuously throughout a relevant interval, it is convenient to study the effect of such changes directly. A simple procedure, assuming an approximately linear change of result with xi, is as follows.
a) Select a suitable interval over which to vary variable xi, which should centre on the best estimate (or on the value specified by the method).
b) Carry out the complete measurement procedure (or that part of it affected by xi) at each of five or more levels of xi, with replication if required.
c) Fit a linear model to the results, using xi as abscissa and the measurement result as ordinate.
d) Use the slope of the line so found as the coefficient ci in Formula (A.1) or Formula (14).
This approach may show different sensitivity coefficients for different test items. This may be an advantage in comprehensive studies of a particular item or class of test items. However, where the sensitivity coefficient is to be applied to a large range of different cases, it is important to verify that the different items behave sufficiently similarly.
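A non-normative sketch of the procedure above: a straight line is fitted to results obtained at several levels of an input quantity, and the slope is taken as the sensitivity coefficient; the data and the assumed control limit on x_i are invented.

```python
import numpy as np

# Hypothetical results at five levels of an input quantity x_i (e.g. a temperature
# varied about the value specified by the method)
x_i = np.array([23.0, 24.0, 25.0, 26.0, 27.0])
result = np.array([10.02, 10.05, 10.09, 10.14, 10.16])

# Fit a linear model; the slope is the sensitivity coefficient c_i for Formula (A.1) or (14)
c_i, intercept = np.polyfit(x_i, result, 1)

# If x_i is assumed controlled to within +/- 0.5 (rectangular), its contribution to u(y) is
u_xi = 0.5 / np.sqrt(3)
print(f"c_i = {c_i:.4f} per unit of x_i, |c_i|*u(x_i) = {abs(c_i) * u_xi:.4f}")
```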
B.2 Simple procedure for estimating uncertainty due to a random effect
Where an input quantity xj is discontinuous and/or not readily controllable, an associated uncertainty may be derived from analysis of experiments in which the variable varies at random. For example, the type of soil in environmental analysis may have unpredictable effects on analytical determinations. Where random errors are approximately independent of the level of the quantity of interest, it is possible to examine the dispersion of error arising from such variations, using a series of test items for which a definitive value is available or where a known change has been induced.
The general procedure is then as follows.
a) Carry out the complete measurement on a representative selection of test items, in replicate, under repeatability conditions, using equal numbers of replicates for each item.
b) For each observation, calculate the signed difference from the known value.
c) Analyse the results (classified by the quantity of interest) with ANOVA, using the resulting sums of squares to form estimates of the intra-group component of variance s_w² and the inter-group component of variance s_b². The standard uncertainty u(x_j) arising from variation in x_j is equal to s_b.
NOTE When different test items or classes of test item react differently to the quantity concerned (i.e. the quantity and test item class interact), the interaction will increase the value of sb. A detailed treatment of this situation is beyond the scope of this document.
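An illustrative, non-normative sketch of the grouped analysis described in B.2; the grouping factor, data and replicate counts are invented.

```python
import numpy as np

# Hypothetical signed differences from known values, grouped by a discrete factor x_j
# (for example, soil type), with equal numbers of replicates per group
groups = {
    "clay": np.array([0.12, 0.05, 0.09]),
    "sand": np.array([-0.08, -0.02, -0.06]),
    "loam": np.array([0.01, 0.04, -0.03]),
}
k = len(groups)                       # number of groups
n = len(next(iter(groups.values())))  # replicates per group (equal by design)

group_means = np.array([g.mean() for g in groups.values()])
grand_mean = np.concatenate(list(groups.values())).mean()

# One-way ANOVA mean squares
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (k * (n - 1))
ms_between = n * ((group_means - grand_mean) ** 2).sum() / (k - 1)

# Inter-group variance component; u(x_j) = s_b (truncated at zero if negative)
s_b2 = max((ms_between - ms_within) / n, 0.0)
print(f"s_w = {np.sqrt(ms_within):.3f}, u(x_j) = s_b = {np.sqrt(s_b2):.3f}")
```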
Annex C
(informative)
Examples of uncertainty calculations
C.1 Measurement of carbon monoxide (CO) in automobile emissions
C.1.1 General
Before being put on the market, passenger cars are required to be type-tested to check that the vehicle type complies with regulatory requirements concerning the emission by the motor and the exhaust system of carbon monoxide pollutant gas. The upper limit for approval is specified as 2,2 g/km. The test method is described in Reference [21] where the following specifications appear.
— The driving cycle (Euro 96) is given as a function of the speed (in km/h), the time (in s) and engaged gear. The car to be tested is put on a specified roller bench to perform the cycle.
— The measuring equipment is a specified CO analysis unit.
— The environment is controlled by using a specified pollution-monitoring cell.
— The personnel have undergone specified training.
Such a test of compliance can be performed in the test laboratory of a production unit of a car manufacturer or in an independent test laboratory.
C.1.2 Collaborative study data
Before adopting and routinely using such a test method, it is necessary to evaluate the effects of experimental factors or sources of influence on the results of the test method (and consequently on the uncertainty of the test results). This is done from experiments conducted in different laboratories. In order to control the test method, an inter-laboratory experiment is designed and conducted according to ISO 5725-2. The purpose of this inter-laboratory experiment is to estimate the precision of the test method when applied routinely in a given set of test laboratories. The estimate of precision is made from the data collected with the inter-laboratory experiment, with statistical analysis conducted according to ISO 5725-2. The study is conducted such that every participant undertakes all the processes necessary to carry out the measurement, and all relevant influence factors are accordingly taken into account.
It has been established that the repeatabilities of the laboratories are not significantly different and that the repeatability standard deviation of the test method can be estimated as 0,22 g/km. The reproducibility standard deviation of the test method can be estimated as 0,28 g/km.
C.1.3 Control of bias
The evaluation of trueness (control of bias against a reference) poses methodological and technical questions. There is no “reference car” in the sense of a reference material; trueness shall accordingly be controlled by calibration of the test system. For example, the calibration of a CO analysis unit can be made with reference gas and the calibration of the roller bench can be made for quantities such as time, length, speed and acceleration. From a knowledge of emission rates at various speeds and from similar information, it is confirmed that the uncertainties associated with these calibrations do not lead to significant uncertainty contributions associated with the measurement result (that is, all calculated uncertainties are very much less than the reproducibility standard deviation). Bias is accordingly considered to be under due control.
C.1.4 Precision
Typical duplicated test runs by a laboratory have established that the repeatability is approximately 0,20 g/km. This is within the repeatability range found in the inter-laboratory study; the precision is accordingly considered to be under good control.
C.1.5 Relevance of test items
The scope of the method establishes it as suitable for all vehicles within the scope of “passenger cars”. While most vehicles achieve compliance relatively easily, and the uncertainty tends to be smaller at lower emission levels, the uncertainty is important at levels close to the regulatory limit. It was therefore decided to take the uncertainty estimated near the regulatory limit as a reasonable, and somewhat conservative, estimate of uncertainty for lower levels of CO emission. Note that where a test shows a vehicle to have emitted substantially more than the limit, it might prove necessary to undertake additional uncertainty studies if comparisons are critical. In practice, however, such a vehicle would not in any case be offered for sale without modification.
C.1.6 Uncertainty estimate
Since the prior studies have established due control of bias and precision within the testing laboratory, and no factors arise from operations not conducted during the collaborative study, the reproducibility standard deviation is used for estimating the standard uncertainty, leading to an expanded uncertainty of U = 0,56 g/km, quoted with a coverage factor k = 2 which gives a level of confidence of approximately 95 %.
NOTE The interpretation of results with uncertainties in the field of compliance testing is considered in ISO 10576-1.
C.2 Determination of meat content
C.2.1 General
Meat products are regulated to ensure that the meat content is accurately declared. Meat content is determined as a combination of nitrogen content (converted to total protein) and fat content. The present example shows the principle of combining different contributions to uncertainty, each of which itself arises chiefly from reproducibility estimates, as described in Clause 12.
The examples given in this subclause are from References [23], [24], [25] and [26].
C.2.2 Basic equations
Total meat content wmeat is defined in Formula (C.1):
wmeat = wpro + wfat (C.1)
where
wpro
is the total meat protein, expressed as percentage by mass;
wfat
is the total fat content, expressed as percentage by mass.
Meat protein wpro is calculated from Formula (C.2):
wpro = 100 wmN / fN (C.2)
where
fN
is a nitrogen factor specific to the material;
wmN
is the total meat nitrogen content.
In this instance, wmN is identical to the total nitrogen content, wtN, as determined by Kjeldahl analysis.
C.2.3 Experimental steps in meat-content determination
The experimental steps involved in the determination of the meat content are as follows.
a) Determine the fat content, wfat.
b) Determine the nitrogen content, wmN, using the Kjeldahl method (mean of duplicate measurements).
c) Calculate the total meat protein content, wpro, using fN [Formula (C.2)].
d) Calculate the total meat content, wmeat [Formula (C.1)].
C.2.4 Uncertainty components
The components of uncertainty to consider are those associated with each of the quantities listed in C.2.3. The most significant relate to wpro, which constitutes some 90 % by mass of wmeat. The largest uncertainties associated with wpro arise from the following:
a) uncertainty in the factor fN owing to incomplete knowledge of the material;
b) variations in the reproducibility of the method, both from run to run and in detailed execution over the long term;
c) uncertainty associated with method bias;
d) uncertainty in fat content wfat.
NOTE Uncertainties a), b) and c) are associated with the sample, the laboratory and the method, respectively. It is often convenient to consider each of these three factors when identifying gross uncertainties, as well as any necessary consideration of the individual steps in the procedure.
C.2.5 Estimating uncertainty components
C.2.5.1 Uncertainty associated with fN
The uncertainty associated with fN can be estimated from a published range of values. Reference [22] gives the results of an extensive study of nitrogen factors in beef, which show a clear variation between different sources and cuts of meat. Reference [22] also permits calculation of an observed standard deviation for fN of 0,052 and a relative standard deviation of 0,014 for a large range of sample types.
NOTE The nitrogen factors determined in Reference [22] used the Kjeldahl method and are accordingly directly applicable for the present purpose.
C.2.5.2 Uncertainty associated with wtN
Information in two collaborative trials[23][24] allows an estimate of the uncertainty arising from errors in the reproducibility or the execution of the method. Close examination of the trial conditions shows first that each was conducted over a broad range of sample types and with a good, representative range of competent laboratories and, second, that the reproducibility standard deviation sR correlates well with the level of nitrogen. For both trials, the best-fit line is given by sR = 0,021 wtN. The same study also shows that the repeatability standard deviation is approximately proportional to wtN, with sr = 0,018 wtN, and an inter-laboratory term sL = 0,011 wtN.
The method specifies that each measurement is duplicated and the average taken. The repeatability term, which is an estimate of the repeatability of single results, must accordingly be adjusted to account
for the effect of averaging two results within the laboratory (see the comment relating to sr in Table 1). The standard uncertainty u(wtN) associated with the nitrogen content is as given in Formula (C.3):
u(w_tN) = w_tN·√(s_L² + s_r²/2) = w_tN·√(0,011² + 0,018²/2) = 0,017·w_tN     (C.3)
Formula (C.3) forms the best estimate of the uncertainty in wtN arising from reasonable variations in execution of the method.
The repeatability value is also used as a criterion for accepting the individual laboratory's precision; the method specifies that results should be rejected if the difference falls outside the relevant 95 % confidence interval (approximately equal to 1,96·√2·s_r). This check ensures that the intra-laboratory precision for the laboratory undertaking the test is in accordance with that found in the collaborative study.
NOTE If this check fails more frequently than about 5 % of the time, it is likely that precision is not under sufficient control and action is required to amend the procedure.
Some consideration also needs to be given to uncertainty associated with wtN arising from unknown bias within the method. In the absence of reliable reference materials, comparison with alternative methods operating on substantially different principles is an established means of estimating bias. A comparison of Kjeldahl and combustion methods for total nitrogen across a range of different sample types established a difference of 0,01 wtN. This is well within the ISO Guide 33 criterion of 2σD [Formula (4)], confirming that uncertainties associated with bias are adequately accounted for within the reproducibility figures.
C.2.5.3 Uncertainty associated with wfat
Additional collaborative trial data for fat analysis[25] provide a reproducibility standard deviation estimate of 0,02 wfat. The analysis is again undertaken in duplicate and the results accepted only if the difference is within the appropriate repeatability limit, ensuring that the laboratory precision is under control. Prior verification work on a suitable reference material for fat determination establishes that uncertainties associated with bias are adequately accounted for by the reproducibility figures.
C.2.6 Combined standard uncertainty
Table C.1 shows the individual values and the uncertainties calculated using the above figures.
Table C.1 — Uncertainty budget for meat content

Quantity | Value of x_i, % (mass fraction) | u(x_i) | u(x_i)/x_i
Fat content, w_fat | 5,50 | 0,110 | 0,020
Nitrogen content, w_mN | 3,29 | 0,056 | 0,017
Nitrogen factor, f_N | 3,65 | 0,052 | 0,014
Meat protein, w_pro | 90,1 | 90,1 × 0,022 = 1,98 | √(0,017² + 0,014²) = 0,022
Total meat content, w_meat | 95,6 | √(1,98² + 0,110²) = 1,98 | 0,021
A level of confidence of approximately 95 % is required. This is provided by multiplying the combined standard uncertainty by a coverage factor k of 2, giving (on rounding to two significant figures) an expanded uncertainty U on the meat content of U = 4,0 %; that is, wmeat = 95,6 ± 4,0 %.
NOTE “Meat content” can legitimately exceed 100 % in some products.
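The quadrature combination summarized in Table C.1 can be reproduced in a few lines; the sketch below simply repeats the arithmetic of C.2.6 and is not part of the standard.

```python
import math

# Values and relative standard uncertainties from Table C.1
w_fat, u_fat_rel = 5.50, 0.020
w_mN, u_mN_rel = 3.29, 0.017
f_N, u_fN_rel = 3.65, 0.014

# Meat protein: w_pro = 100 * w_mN / f_N; relative uncertainties combine in quadrature
w_pro = 100 * w_mN / f_N
u_pro = w_pro * math.sqrt(u_mN_rel**2 + u_fN_rel**2)

# Total meat content: w_meat = w_pro + w_fat; absolute uncertainties combine in quadrature
w_meat = w_pro + w_fat
u_meat = math.sqrt(u_pro**2 + (w_fat * u_fat_rel)**2)

U = 2 * u_meat   # expanded uncertainty, coverage factor k = 2
print(f"w_meat = {w_meat:.1f} %, u(w_meat) = {u_meat:.2f} %, U = {U:.1f} %")
```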
C.3 Uncertainty for measurements obtained by AOAC method 990.12: Aerobic plate count
C.3.1 General
The method is a microbiological method for monitoring microbial activity in foodstuffs.[27] The method uses bacterial culture plates of dry medium and water-soluble gel. Samples are added to culture plates at a rate of 1,0 ml per plate and spread over a growth area of approximately 20 cm2. Plates are incubated and colonies counted. The measurand is the number of colony-forming units found. For nonzero counts, the conventional reporting units are log10(count), that is, the logarithm to the base 10 of the number of colony-forming units (CFU) found. Uncertainty evaluations are desired for three food groups: shellfish, flour and vegetables.
The example here is based on published data from Reference [28] used by kind permission of the American Association for Laboratory Accreditation. See also Reference [27].
C.3.2 Collaborative study data
The method was validated by a collaborative study that used twelve laboratories, six foods with different levels of contamination, two samples per food, and two replicates per sample. The data analysis was consistent with ISO 5725-2, and the validation study included all steps in the testing process, except for a step involving choice of an exact sub-sample size (measured samples were provided in the collaborative study). Table C.2 shows the reported estimates of repeatability and reproducibility relative standard deviation for the three foods relevant to the uncertainty evaluation requirement, given as percentages.
Table C.2 — Selected collaborative study data for aerobic plate count

Food | Reproducibility relative standard deviation, % | Repeatability relative standard deviation, %
Shrimp | 11,1 | 9,8
Vegetables | 9,2 | 6,3
Flour | 5,8 | 5,3
Note that the repeatability and reproducibility data are all expressed as relative standard deviations, relative to the mean observed value for log10(count). This is convenient for this particular method, which tends to show dispersion approximately proportional to level and approximately consistent relative standard deviation.
C.3.3 Control of bias
To establish whether laboratory bias is within that expected, the laboratory carries out a comparison study with a reference laboratory. Results for vegetables and shrimp are always within 10 % (corresponding to Δl < 0,1x, x being the mean of the relevant observations). A comparison with a flour sample shows results 5 % apart (corresponding to Δl ≤ 0,05x). These deviations are clearly consistent with the reproducibility standard deviations; bias is therefore judged to be acceptable.
C.3.4 Control of precision
To establish whether within-laboratory precision is within that expected, the laboratory generates estimates of repeatability standard deviation with a series of 10 replicates. The repeatability relative standard deviation for all foods is 5 % or less (sl < 0,05x). It is decided, therefore, that repeatability is not only acceptable, but that a lower adjusted reproducibility can be calculated, as described in 7.3.2. The revised reproducibility relative standard deviations are shown in Table C.3.
Table C.3 — Adjusted reproducibility relative standard deviation

Food | Reproducibility relative standard deviation, % | Between-laboratory relative standard deviation, % | Repeatability relative standard deviation, % | Adjusted reproducibility relative standard deviation, %
Shrimp | 11,1 | 5,2 | 5,0 | 7,2
Vegetables | 9,2 | 6,7 | 5,0 | 8,4
Flour | 5,8 | 2,4 | 5,0 | 5,5
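The adjusted values in Table C.3 follow from combining the between-laboratory component of the collaborative study with the laboratory's own repeatability (see 7.3.2); the short, non-normative sketch below repeats that arithmetic using the figures from Table C.2 and the laboratory repeatability of 5,0 %.

```python
import math

# Collaborative-study relative standard deviations (%) from Table C.2
study = {"Shrimp": (11.1, 9.8), "Vegetables": (9.2, 6.3), "Flour": (5.8, 5.3)}
s_r_lab = 5.0   # laboratory's own repeatability relative standard deviation, %

for food, (s_R, s_r) in study.items():
    s_L = math.sqrt(s_R**2 - s_r**2)          # between-laboratory component
    s_R_adj = math.sqrt(s_L**2 + s_r_lab**2)  # adjusted reproducibility (7.3.2)
    print(f"{food}: s_L = {s_L:.1f} %, adjusted s_R = {s_R_adj:.1f} %")
```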
C.3.5 Establishing relevance to the test item
C.3.5.1 Sample preparation and pre-treatment
The collaborative study excluded a sampling stage. In consideration of this additional component, sample preparation (sub-sampling, weighing) has been estimated to contribute a further 3,0 % to the combined standard uncertainty (based on expert opinion). This contribution is included in Table C.4.
C.3.5.2 Variation of uncertainty with level of response
The reproducibility, repeatability and contribution of the additional sample preparation steps are all believed to be approximately proportional to the aerobic plate count. This suggests a basic model of the form of Formula (10), in which the coefficient b is set equal to the adjusted relative reproducibility standard deviation and the additional contribution from sampling is included as a proportional contribution. This is exactly equivalent to the simple approach, used above, of expressing all of the contributions to uncertainty in relative terms.
C.3.6 Combined standard uncertainty
The combined standard uncertainty (expressed as a relative standard deviation) is calculated for each food type as shown in Table C.4.
Table C.4 — Combined standard uncertainty for aerobic plate count

Food | Between-laboratory relative standard deviation, % | Repeatability relative standard deviation, % | Further contribution to standard uncertainty from sample preparation, % | Combined standard uncertainty u(y) (expressed as relative standard deviation), %
Shrimp | 5,2 | 5,0 | 3,0 | 7,8
Vegetables | 6,7 | 5,0 | 3,0 | 8,9
Flour | 2,4 | 5,0 | 3,0 | 6,4
C.3.7 Expanded uncertainty
Expanded uncertainties are calculated using a coverage factor of 2, which gives a level of confidence of approximately 95 %, to give expanded uncertainties of 15,6 %, 17,8 % and 12,8 % [as a percentage of observed log10(count) for shrimp, vegetable and flour materials, respectively].
C.3.8 Additional considerations
Results for aerobic plate count are conventionally summarized as log10(count). However, for a single test item, it is often more useful to report an expanded uncertainty interval in units of CFU count. For quantities with uncertainties in the log10 domain, this is best done by calculating the expanded uncertainty in the log10 domain as in C.3.7 and transforming to CFU count afterwards. This can be
illustrated by calculation of expanded uncertainty intervals for test materials at 150 CFU. The relevant calculations are summarized in Table C.5.
Table C.5 — Expanded uncertainty intervals for test materials at 150 CFU

Food | Standard uncertainty (as relative standard deviation), % | Expanded uncertainty (U) as percentage of log10(count), % | Log10 of 150 CFU | Expanded uncertainty in log10 | Uncertainty interval in log10(CFU count) | Final uncertainty interval in CFU count
Shrimp | 7,8 | 15,6 | 2,176 1 | 0,339 5 | 1,836 6 to 2,515 6 | 68 to 328
Vegetables | 8,9 | 17,8 | 2,176 1 | 0,387 3 | 1,788 8 to 2,563 4 | 61 to 366
Flour | 6,4 | 12,8 | 2,176 1 | 0,278 5 | 1,897 6 to 2,454 6 | 79 to 285
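The back-transformation illustrated in Table C.5 (expanded uncertainty applied in the log10 domain, then converted to CFU counts) can be sketched as follows; it simply reproduces the table's arithmetic, with interval limits widened to whole counts.

```python
import math

count = 150                              # CFU level used for the illustration
log_count = math.log10(count)            # approximately 2.176 1

for food, u_rel in [("Shrimp", 0.078), ("Vegetables", 0.089), ("Flour", 0.064)]:
    U_log = 2 * u_rel * log_count        # expanded uncertainty (k = 2) in log10 units
    lower = math.floor(10 ** (log_count - U_log))   # widen limits to whole CFU counts
    upper = math.ceil(10 ** (log_count + U_log))
    print(f"{food}: U = {U_log:.4f} in log10, interval {lower} to {upper} CFU")
```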
C.4 Uncertainty for crude fibre determination
C.4.1 General
The method is used for the determination of crude fibre in animal feeding stuffs. Crude fibre is defined as the amount of fat-free organic substances which are insoluble in acid and alkaline media. The fibre content of feeding stuffs is typically in the interval 2 % to 12 %, expressed as mass fraction.
C.4.2 Calculation of fibre concentration
The fibre content, Cfibre, as a percentage of the sample by mass (that is, mass fraction expressed as a percentage, denoted simply “%” for this example), is calculated from Formula (C.4):
C_fibre = [(m_sd − m_sa) − (m_bd − m_ba)]/m_s × 100     (C.4)
where
ms
is the mass of the sample (approximately 1 g of sample is taken for analysis), in grams;
msd
is the mass of the crucible and sample after drying to constant mass, in grams;
msa
is the mass of the crucible and sample after ashing, in grams;
mbd
is the mass of the crucible in the blank test after drying to constant mass, in grams;
mba
is the mass of the crucible in the blank test after ashing, in grams.
NOTE The blank test involves taking an empty crucible through all stages of the method.
A flow diagram illustrating the main stages in the method is presented in Figure C.1.
C.4.3 Collaborative study data
The method has been the subject of a collaborative trial run according to ISO 5725-2. Five different feeding stuffs representing typical fibre and fat concentrations were analysed in the trial. Participants in the trial carried out all stages of the method, including grinding of the samples. The repeatability and reproducibility estimates obtained from the trial are presented in Table C.6.
Table C.6 — Collaborative study data for crude fibre

Test material | Mean fibre content, % | Reproducibility standard deviation (s_R), % | Reproducibility relative standard deviation | Repeatability standard deviation (s_r), %
A | 2,3 | 0,293 | 0,127 | 0,198
B | 12,1 | 0,563 | 0,046 5 | 0,358
C | 5,4 | 0,390 | 0,072 2 | 0,264
D | 3,4 | 0,347 | 0,102 | 0,232
E | 10,1 | 0,575 | 0,056 9 | 0,391
C.4.4 Control of bias
To establish whether laboratory bias is within that expected, the laboratory carries out a comparison study with a reference material certified by the method in question (this is essential, as the measurand is defined by reference to the specific method of analysis). The certified value is 93 g/kg ± 14 g/kg (9,3 %). The laboratory obtains a value of 9,16 %, corresponding to a laboratory bias Δl = −0,14 %. This is well within the interval that might be expected from the reproducibility standard deviation at a level near 9 %. The standard uncertainty in the certified value is approximately 7 g/kg (0,7 % as mass fraction); this is also small compared to the reproducibility standard deviation at similar fibre levels in Table C.6. The bias is therefore judged to be acceptable.
C.4.5 Control of precision
As part of the laboratory’s verification of the method, experiments were carried out to estimate the repeatability (within batch precision) for feeding stuffs with fibre concentrations similar to some of the samples analysed in the collaborative trial. The results are summarized in Table C.7. Comparison with Table C.6 shows that the laboratory is obtaining precision very similar to that found in the collaborative study.
Table C.7 — Repeatability data for crude fibre test materials

Test material | Mean fibre content found, % | Repeatability standard deviation (s_r), %
F | 3,0 | 0,198
G | 5,5 | 0,264
H | 12,0 | 0,358
C.4.6 Variation of uncertainty with level of response
The repeatability and reproducibility standard deviations in Table C.6 clearly increase with the level of crude fibre. However, there is also some evidence of a trend in reproducibility relative standard deviation, making a simple proportional model inappropriate. Instead, therefore, the laboratory chooses to base the uncertainty at different observed levels of fibre on the reproducibility found at similar levels in the collaborative study; for example, for fibre levels at or below 2,5 % (mass fraction), a reproducibility standard deviation of 0,29 % (mass fraction) is chosen from Table C.6.
C.4.7 Additional factors
The laboratory has undertaken experimental and other studies of the effects of the different influence quantities on the result for typical test materials. The resulting estimates of uncertainty are shown in Table C.8. None of the contributions is significant except the effect of drying to constant mass. The uncertainty associated with this part of the process was obtained from the specification of constant mass set by the laboratory; “constant mass” is not defined in the standard method and the laboratory
chose to use a fixed-time method of drying shown to result in a final mass within 0,002 g of the mass obtained by extended drying. Dividing this maximum estimated deviation by √3 led to the estimated uncertainty of 0,115 % (mass fraction) fibre, assuming 1 g of sample is taken for analysis.
Table C.8 — Effects of influence quantities on crude fibre determination

Source of uncertainty | Value | Standard uncertainty | Associated uncertainty expressed as repeatability standard deviation | Source of information
Mass of sample | 1,0 g | 0,000 20 g | 0,000 20 | Calibration certificate
Acid concentration | | | 0,000 30 | Published data on change in fibre content with acid concentration
Alkali concentration | | | 0,000 48 | Published data on change in fibre content with alkali concentration
Acid digestion time | | | 0,009 0 | Published data on change in fibre content with digestion time
Alkali digestion time | | | 0,007 2 | Published data on change in fibre content with digestion time
Drying to constant mass | | 0,001 15 g | | Laboratory specification of constant mass
Ashing temperature and time | | Negligible | | Published data — no significant change in fibre content when ashing temperature and time varied
Loss of mass after ashing during the blank test | | Negligible | | Experimental studies
C.4.8 Combined standard uncertainty
Because the uncertainty associated with drying to constant mass is not proportional to crude fibre level, it is not possible to adopt a simple proportional model for uncertainty estimation. Instead, it is convenient to estimate the uncertainty associated with typical levels of crude fibre. The estimated uncertainties at representative levels are shown in Table C.9.
Table C.9 — Combined standard uncertainty at representative crude fibre levels

Fibre content, % | Reproducibility standard deviation (s_R), % | Additional contribution from drying, % | Combined standard uncertainty u(y), %
≤ 2,5 | 0,293 | 0,115 | 0,31
2,5 to 5 | 0,390 | 0,115 | 0,41
5 to 10 | 0,575 | 0,115 | 0,59
C.4.9 Expanded uncertainty
Expanded uncertainties are calculated using a coverage factor of 2, which gives a level of confidence of approximately 95 %, to give expanded uncertainties of 0,6 %, 0,8 % and 1,2 %, respectively, for the different fibre content ranges in Table C.9.
Figure C.1 — Operations in estimating crude fibre
Bibliography
[1] ISO 3534-1, Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability
[2] ISO 3534-2, Statistics — Vocabulary and symbols — Part 2: Applied statistics
[3] ISO 3534-3, Statistics — Vocabulary and symbols — Part 3: Design of experiments
[4] ISO 5725-1, Accuracy (trueness and precision) of measurement methods and results — Part 1: General principles and definitions
[5] ISO 5725-2, Accuracy (trueness and precision) of measurement methods and results — Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method
[6] ISO 5725-3, Accuracy (trueness and precision) of measurement methods and results — Part 3: Intermediate measures of the precision of a standard measurement method
[7] ISO 5725-4, Accuracy (trueness and precision) of measurement methods and results — Part 4: Basic methods for the determination of the trueness of a standard measurement method
[8] ISO 5725-5, Accuracy (trueness and precision) of measurement methods and results — Part 5: Alternative methods for the determination of the precision of a standard measurement method
[9] ISO 5725-6, Accuracy (trueness and precision) of measurement methods and results — Part 6: Use in practice of accuracy values
[10] ISO 7870-4, Control charts — Part 4: Cumulative sum charts
[11] ISO 7870-2, Control charts — Part 2: Shewhart control charts
[12] ISO 10576-1, Statistical methods — Guidelines for the evaluation of conformity with specified requirements — Part 1: General principles
[13] ISO 11648 (all parts), Statistical aspects of sampling from bulk materials
[14] ISO Guide 33, Reference materials — Good practice in using reference materials
[15] ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories
[16] ISO/IEC Guide 98-3, Uncertainty of measurement — Part 3: Guide to the expression of uncertainty in measurement (GUM: 1995)
[17] ISO/IEC Guide 99:2007, International vocabulary of metrology — Basic and general concepts and associated terms (VIM)
[18] ISO/IEC 17043, Conformity assessment — General requirements for proficiency testing
[19] AFNOR FD X07-021 (October 1999), Normes fondamentales — Métrologie et applications de la statistique — Aide à la démarche pour l’estimation et l’utilisation de l’incertitude des mesures et des résultats d’essais
[20] Recommendation INC-1 (1980), BIPM
[21] European Directive 70/220, Measures to be taken against air pollution by emissions from motor vehicles
[22] Kaarls R. Procès-verbaux du Comité International des Poids et Mesures, 49, BIPM, 1981, pp. A.1-A.12
[23] Analytical Methods Committee. Analyst (Lond.). 1993, 118, p. 1217
[24] Shure B., Corrao P.A., Glover A., Malinowski A.J. AOAC Int. 1982, 65, p. 1339
[25] King-Brink M., & Sebranek J.G. J. AOAC Int. 1993, 76, p. 787
[26] Breese Jones D. US Department of Agriculture Circular No. 183 (August 1931)
[27] Official Methods of Analysis. AOAC Int. Gaithersburg, MD, Twentieth Edition, 2016
[28] A2LA Guidance Document G108 — Guidelines for Estimating Uncertainty for Microbiological Counting Methods. American Association for Laboratory Accreditation, 2014