Solid mineral fuels - Evaluation of the measurement performance of on-line analysers

ISO 15239:2005 sets out practices for the evaluation of the measurement performance of all types of on-line analysers for solid mineral fuel. It presents information on the different types of analyser currently available and describes procedures for the evaluation of various aspects of measurement performance, appropriate methods of test and techniques for the statistical assessment of the data collected.

Combustibles minéraux solides — Évaluation de la performance de mesure des analyseurs en ligne

Trdna mineralna goriva – Vrednotenje natančnosti meritev on-line analizatorjev

General Information

Status
Published
Publication Date
11-Apr-2005
Current Stage
90.93 - International Standard confirmed
Start Date
10-Mar-2024
Completion Date
13-Dec-2025

Overview

ISO 15239:2005, Solid mineral fuels - Evaluation of the measurement performance of on-line analysers, provides standardized practices for assessing the measurement performance of all types of on-line analysers used for solid mineral fuels (e.g., coal, coke, lignite). The standard explains analyser types and installation configurations, defines key terms (bias, precision, interrogation zone, etc.), and prescribes procedures for testing, calibration confirmation and statistical assessment of analyser data.

Key topics and requirements

  • Scope and purpose: Establishes practices to evaluate the stability, calibration validity and operational precision of on-line analysers.
  • Analyser types: Describes major measurement principles - absorption/scattering, excitation, natural radiation emission, and property-change methods - and their typical interrogation geometries (transmission, backscatter).
  • Installation configurations: Differentiates mainstream and sub‑stream analyser installations and addresses presentation of bulk solids, slurries and conditioned samples.
  • Performance evaluation:
    • Instrument stability: Assessed by static replicate measurements at operational intervals.
    • Calibration validity: Confirmed by simultaneous comparative measurements against a reference test method across the operational range.
    • Operational precision: Evaluated by comparing analyser values with reference values from separate reference procedures.
  • Testing and statistics: Contains procedures and annexed material on comparative test methods, statistical assessment procedures (precision, bias, regression, repeatability) and specimen calculations.
  • Supporting guidance: Informative annexes on on-line techniques, sources of measurement variance and reference standards.

Practical applications

ISO 15239 is used to:

  • Prepare acceptance tests and commissioning protocols for new analyser installations.
  • Specify performance criteria (precision, bias, stability) in procurement contracts and service agreements.
  • Guide routine performance monitoring, calibration checks and troubleshooting of on-line coal analysers.
  • Provide a consistent basis for data quality assurance, regulatory compliance and inter‑laboratory/comparative evaluations.

Who should use this standard

  • Coal and coke producers, power stations and bulk fuel handlers implementing on-line quality monitoring.
  • Instrument manufacturers, system integrators and calibration engineers designing or installing analysers.
  • Quality control, process control and metrology teams responsible for analyser validation and performance audits.
  • Standards bodies and technical committees developing testing protocols or procurement specifications.

Related standards (normative references)

ISO 15239 references core documents on terminology, sampling and statistics such as ISO 1213-2, ISO 13909 (sampling) and ISO 3534-1 (statistics), which are relevant when applying the evaluation methods in practice.

Keywords: ISO 15239, on-line analysers, solid mineral fuels, coal analyser performance, calibration, precision, stability, statistical assessment.


Standards Content (Sample)


SLOVENSKI STANDARD
1 October 2006
Trdna mineralna goriva – Vrednotenje natančnosti meritev on-line analizatorjev
Solid mineral fuels -- Evaluation of the measurement performance of on-line analysers
Combustibles minéraux solides -- Évaluation de la performance de mesure des analyseurs en ligne
This Slovenian standard is identical to: ISO 15239:2005
ICS:
75.160.10 Solid fuels
© Slovenski inštitut za standardizacijo (Slovenian Institute for Standardization). Reproduction of this standard, in whole or in part, is not permitted.

INTERNATIONAL ISO
STANDARD 15239
First edition
2005-04-01
Solid mineral fuels — Evaluation of the
measurement performance of on-line
analysers
Combustibles minéraux solides — Évaluation de la performance de
mesure des analyseurs en ligne

Reference number: ISO 15239:2005(E)
© ISO 2005
©  ISO 2005
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying and microfilm, without permission in writing from either ISO at the address below or
ISO's member body in the country of the requester.
ISO copyright office
Case postale 56 • CH-1211 Geneva 20
Tel. + 41 22 749 01 11
Fax + 41 22 749 09 47
E-mail copyright@iso.org
Web www.iso.org
Published in Switzerland

Contents
Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Symbols and abbreviations
5 Principle
6 Analyser installations
7 Evaluation techniques
8 Instrument stability
9 Calibration
10 Operational measurement performance
11 Application
Annex A (informative) On-line analysis techniques for solid mineral fuels
Annex B (informative) Sources of measurement variance
Annex C (normative) Comparative test methods
Annex D (normative) Statistical assessment procedures
Annex E (informative) Reference standards
Annex F (informative) Specimen calculations
Bibliography

Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies
(ISO member bodies). The work of preparing International Standards is normally carried out through ISO
technical committees. Each member body interested in a subject for which a technical committee has been
established has the right to be represented on that committee. International organizations, governmental and
non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the
International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
International Standards are drafted in accordance with the rules given in the ISO/IEC Directives, Part 2.
The main task of technical committees is to prepare International Standards. Draft International Standards
adopted by the technical committees are circulated to the member bodies for voting. Publication as an
International Standard requires approval by at least 75 % of the member bodies casting a vote.
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent
rights. ISO shall not be held responsible for identifying any or all such patent rights.
ISO 15239 was prepared by Technical Committee ISO/TC 27, Solid mineral fuels, Subcommittee SC 5,
Methods of analysis.

Introduction
There are now many instruments in use which have been developed to enable the rapid on-line measurement
of solid mineral fuels for a range of parameters that indicate coal quality. The principles on which they are
based differ from those currently in use for sampling and analysis and, in effect, constitute a completely
different approach to the measurement of solid mineral fuel quality.
This standard has been developed to specify methods by which the measurement performance of such
analysers can be evaluated.
INTERNATIONAL STANDARD ISO 15239:2005(E)

Solid mineral fuels — Evaluation of the measurement
performance of on-line analysers
1 Scope
This International Standard sets out practices for the evaluation of the measurement performance of all types
of on-line analysers for solid mineral fuel.
It presents information on the different types of analyser currently available and describes procedures for the
evaluation of various aspects of measurement performance, appropriate methods of test and techniques for
the statistical assessment of the data collected.
2 Normative references
The following referenced documents are indispensable for the application of this document. For dated
references, only the edition cited applies. For undated references, the latest edition of the referenced
document (including any amendments) applies.
ISO 1213-2, Solid mineral fuels — Vocabulary — Part 2: Terms relating to sampling, testing and analysis
ISO 1988, Hard coals — Sampling
ISO 2309, Coke — Sampling
ISO 3534-1, Statistics — Vocabulary and symbols — Part 1: Probability and general statistical terms
ISO 5069 (all parts), Brown coals and lignites — Principles of sampling
ISO 13909 (all parts):2001, Hard coal and coke — Mechanical sampling
3 Terms and definitions
For the purposes of this document, the definitions given in ISO 1213-2, ISO 3534-1 and ISO 13909-1 and the
following apply.
3.1
accuracy
closeness of agreement between an observation and the “true” value
[ISO 1213-2:1992]
3.2
analyser dynamic precision
closeness of agreement between analyser values, obtained from solid mineral fuel interrogated by the
analyser under dynamic conditions and determined by a comparative test method which eliminates random
errors attributable to the reference test method
3.3
analyser test method
method of analysis which gives, for a solid mineral fuel process stream, values arising from the operation of
the on-line analyser, which are estimates of the true values for specified measurands
3.4
analyser value
value of a specified measurand in a test unit that is obtained from a test carried out by an analyser test
method
3.5
backscatter geometry
arrangement of an interrogation process in which a source of incident energy and a detector system are on
the same, or adjacent, sides of the solid mineral fuel passing through the interrogation zone
3.6
bias
systematic error which leads to the average value of a series of results being persistently higher or
persistently lower than those which are obtained using a reference test method
[ISO 13909-1]
3.7
bias of scale
bias that varies as a function of the range of values measured
3.8
bias of location
bias that is constant and independent of the range of values measured
3.9
comparative dynamic precision
closeness of agreement between analyser values obtained from solid mineral fuel interrogated by the analyser
under dynamic conditions and those determined by a comparative test method, which includes random errors
attributable to the reference test method
3.10
comparative test method
method of testing in which analyser values are compared with corresponding reference values
3.11
comparison period
period of time, during which a test unit is interrogated by an analyser to give an analyser value and is sampled
by a reference test method to obtain a reference value, for a measurand
NOTE The period can be based on the typical time to produce a particular mass of solid mineral fuel, e.g. a trainload,
or on a period which coincides with operations, e.g. a shift, or some other period that is convenient to, or preferred for, a
specific evaluation procedure.
3.12
interrogation process
procedure which elicits from the solid mineral fuel process stream a measurable response related, specifically
or by inference, to the quantity of the measurand
3.13
interrogation volume
volume of the solid mineral fuel process stream in which the detected response to the interrogation process
originates
3.14
interrogation zone
part of the analyser installation through which the solid mineral fuel process stream passes and in which it is
subjected to the interrogation process
3.15
mainstream configuration
configuration in which the whole of the process stream to be analysed is presented to, although not
necessarily analysed by, an on-line analyser
3.16
on-line analyser
instrument for the measurement, continuously, of one or more quality indicators of solid mineral fuel while it is
undergoing processing or handling, to give data rapidly and automatically
3.17
precision
closeness of agreement between independent results obtained under stipulated conditions
[ISO 3534-1:1993]
NOTE For the purposes of this International Standard, the index of precision used is ± ts, where t is the value of
Student's t (95 % confidence level, two-sided) and s is the standard deviation of the observations about the mean value.
3.18
reference test method
method of sampling, sample preparation and analysis which is expected to give, for a solid mineral fuel
process stream, values which are unbiased estimates of the true values for specified measurands
3.19
reference value
value of a specified measurand in a test unit that is obtained from a test carried out by a reference test method
and which serves as a reference for comparison with an analyser value
NOTE For the purposes of this International Standard, reference values are considered to be conventional true
values.
3.20
sample
quantity of fuel, representative of a larger mass, for which the quality is to be determined
[ISO 13909-1]
3.21
static repeatability
closeness of agreement between replicate analyser values obtained from a reference standard in the
interrogation zone of the analyser
3.22
sub-stream configuration
configuration in which a part of the process stream to be analysed is diverted by means of a suitable sampling
system for presentation to an on-line analyser
3.23
test unit
quantity of solid mineral fuel chosen for the determination of analyser and reference values
3.24
transmission geometry
arrangement of an interrogation process in which a source of incident energy and a detector system are on
opposite sides of the solid mineral fuel passing through the interrogation zone.
4 Symbols and abbreviations
4.1 Mathematical
4.1.1 Primary
- β  regression coefficient (slope)
- C  Cochran's criterion
- d  difference between pairs of values (other than duplicates)
- D_1  duplicate 1 reference test method value
- D_2  duplicate 2 reference test method value
- D̄  mean of duplicate reference test method values
- δ  test statistic (see D.16)
- EIV  errors in variables
- E(ρ)  expected number of runs
- F  F-distribution
- f_SDR  static/dynamic response factor
- L_C  confidence level
- n  number of values in a set
- P  precision
- Q  test statistic (see D.16)
- R  reference test method value
- R_1  reference test method 1 value
- R_2  reference test method 2 value
- r  linear correlation coefficient
- ρ  run
- S_1  reference standard 1 value
- S_2  reference standard 2 value
- s  standard deviation
- s_g  the expected (guaranteed) value of precision of the analyser at one standard deviation
- s(ρ)  standard error of the number of runs
- σ  population standard deviation
- t  Student's t-distribution
- V  variance
- ν  degrees of freedom
- X_A  analyser test method value
- x  any value in a set
- x_dup  difference between pairs of duplicate values
- χ²  chi-squared distribution
- Z  test statistic (see D.16)
- z  normal deviate
4.1.2 Subscripts
- A  set of analyser test method values
- c  critical value
- d  set of differences
- dup  set of duplicate differences
- Dy  set of dynamic calibration values
- D1  set of duplicate reference 1 test method values
- D2  set of duplicate reference 2 test method values
- D  set of means of duplicate reference test method values
- g  guaranteed value
- i  ith value
- max  maximum value
- 0  time zero
- R1  set of reference test method 1 values
- R2  set of reference test method 2 values
- St  set of static calibration values
- S1  set of reference standard 1 values
- S2  set of reference standard 2 values
- τ  time
- 1  set 1
- 2  set 2
4.2 Other abbreviations
 GHz gigahertz
 keV kilo-electron volt
 MeV mega-electron volt
 RF radiofrequency
5 Principle
The performance of an on-line analyser, which has been set up and calibrated, is evaluated by procedures
that address three main aspects of analyser operation. These are the stability of the instrumentation, the
validity of the calibration and the precision of measurement under operational conditions. Instrument stability
is assessed by static measurements made, in replicate, at operationally significant intervals of time. The
installed calibration is confirmed by making simultaneous comparative measurements with the analyser and a
reference method of analysis over a range of measurand values which encompasses at least the spread of
values encountered in normal operations. Operational performance is evaluated by comparison of analyser
values with reference values obtained from separate reference procedures.
6 Analyser installations
6.1 General
There are many types of analyser, based on a variety of measurement principles and possible installation
configurations, which have been designed to measure one or more indicators of quality in a range of products
that occur in solid mineral fuel process streams.
The measurement principles on which analysers are based may be divided into four classes, as outlined in 6.2.
6.2 Analyser types
6.2.1 Absorption/scattering processes
The majority of on-line analysers for solid mineral fuel depend upon the existence of a quantitative relationship
between the measurand and the degree of absorption and/or scattering of a beam of electromagnetic
radiation or neutrons incident upon the solid mineral fuel flowing through the interrogation zone of the analyser.
Incident electromagnetic radiation, in the X-, gamma, microwave or optical energy regions, or neutron
radiation may be used; source, sample and detector may be arranged in transmission or backscatter
geometry.
6.2.2 Excitation processes
A second group depends on a quantitative relationship between the measurand and the emission of specific
electromagnetic radiation, (X- or gamma rays) arising as a result of excitation by an outside source of X-,
gamma or neutron radiation.
6.2.3 Natural radiation emission
In this class, the gamma radiation emitted by naturally occurring radioisotopes, present in the measurand in
relatively constant proportions, is measured.
6.2.4 Property changes
A few analysers depend upon an effect of the measurand on a selected electrical or physical property that is
measurable on line.
NOTE Annex A gives information on techniques for on-line analysis.
6.3 Methods of presentation
The solid mineral fuel to be analysed may be transported through or past the analyser on a conveyor belt or
other supporting platform, or within the confines of a container, chute or pipe. In most designs, the analyser
detection system is physically non-invasive and non-contacting with the solid mineral fuel.
The condition of the solid mineral fuel presented to the analyser varies, among the methods of analysis, from
material as it occurs in the process stream, to crushed, mixed and possibly dried material which has been
carefully profiled.
The solid mineral fuel may be presented to the analyser as a bulk solid or as a fuel-water slurry.
Two basic installation configurations for on-line analysers are possible (see Figure 1). The choice between the
two for any particular application depends on the type of analyser appropriate to the measurand and certain
parameters of the product and the plant, such as particle size and flow rate.
6.4 Installation configurations
6.4.1 Mainstream
A mainstream configuration is a system in which the whole of the process stream for which the analytical
information is required is presented to the analyser. The system can contain conditioning steps, such as
mixing and profiling, prior to interrogation by the analyser.
6.4.2 Sub-stream
A sub-stream configuration is a system in which a portion of the process stream is diverted to the analyser by
means of a suitable sampling process. The diverted portion of the stream may be subsequently subjected to
sample preparation procedures, such as crushing, dividing and conditioning before presentation to the
analyser. After interrogation the sub-stream is normally returned to the main process stream.
Figure 1 — Analyser configurations
7 Evaluation techniques
The procedures described in this International Standard are designed to allow the evaluation of analyser
performance in a range of situations and conditions of operation.
They are intended to be applied to an analyser after it has been set up and calibrated as recommended by the
manufacturer, with all instrumental parameters at their normal operational values for the particular installation.
In order to make a full evaluation of on-line analyser performance, it is necessary to address three
interdependent aspects of analyser operation:
 instrument stability;
 calibration confirmation;
 operational measurement performance.
Since some of the measurement errors that are attributable to the analyser occur only as a result of
operations on, or interactions with, the moving process stream, it is essential for a full evaluation of
measurement performance to carry out tests under dynamic conditions.
Nevertheless, information from static tests, although more limited in its nature, is useful for monitoring some
aspects of analyser performance on a routine basis.
An understanding of the sources of variance that contribute to the errors of measurement of the analyser and
of any reference system with which it is compared, is necessary for the proper design of tests and the
evaluation of the results. Sources of variance are discussed in Annex B.
The procedures used vary with the situation but have many features in common. General considerations for
the design and operation of comparative tests are given in Annex C and techniques for the statistical analysis
of the data in Annex D.
The principal steps involved in an evaluation are as follows:
 decide which aspect of analyser operation is to be evaluated (see Note);
 choose an appropriate method of test and design a scheme of operation;
 carry out the test procedure;
— apply appropriate statistical treatment to the data obtained from the test.
NOTE Frequently a situation will require more than one aspect to be considered (see Clause 11).
8 Instrument stability
8.1 General
It is a pre-requisite to accurate measurement by an on-line analyser that the instrumentation be stable and
contribute as little as possible to the total error of measurement. Errors arising from the instrumentation may
be random or systematic.
An estimate of random variations attributable to the instrumentation is obtained by determining the static
repeatability. A significant increase in this value with time is an indicator of changes in instrumental
characteristics that may need investigation and could lead to a worsening of the measurement performance of
the analyser. Static repeatability is also an indicator of the limiting value of accuracy achievable (base-line
performance).
Systematic instrumentation changes, which could affect the calibration if they are sufficiently large, are
indicated by changes in the level of response from reference standards. These changes can provide the
information needed to compensate for systematic instrumentation errors. In some analysers this process is
carried out automatically at intervals and a correction applied.
Random and systematic variations originating in the instrumentation can be measured simultaneously by a
relatively simple procedure that is amenable to routine use.
8.2 Objectives
The test methods and methods of data analysis described in 8.3, 8.4 and 8.5 are designed to achieve three
objectives:
 to establish benchmarks against which subsequent tests for variations due to instrument instability may
be compared;
 to determine and monitor the contribution made to the overall measurement performance of the analyser
by random variations in response originating in the instrumentation;
 to monitor systematic changes in response, originating in the instrumentation, which may affect
calibration.
8.3 Test conditions
Test conditions require
 a measurement period that will return an adequate precision to allow the testing of the significance of any
changes from previous measurements,
 a minimum of 10 periods (see note).
NOTE A larger number of periods will increase the precision, but with diminishing returns; a practical maximum is
probably about 20.
8.4 Test procedure
Make an initial set of consecutive replicate measurements under the chosen test conditions with a reference
standard (see Notes 1 and 2) in the interrogation zone of the analyser. If it is required to detect systematic
changes, repeat with a second reference standard that offers a different level of response from the first one.
Designate these as measurements made at time 0. The levels of response from the standards chosen for the
detection of systematic changes should be representative of those which are obtained from values of
measurands close to each end of the calibration range. This will ensure that any shift of the calibration line
and/or change in slope is signalled.
After an interval, repeat the above procedure. Designate these as measurements made at time τ. A suitable
interval depends upon the usual operating programme of the analyser. The repeat test should normally be
part of a regular check routine at intervals of a shift or a day, for example. A special check should be
undertaken after any system changes (see Clause 11).
NOTE 1 Information on suitable reference standards is given in Annex E.
NOTE 2 Evaluation of instrument stability might not be possible with some designs of analysers that cannot accept a
reference standard. In others, a suitable response might be available from the empty interrogation zone.
8.5 Data analysis
8.5.1 General
Consider measurements made at time 0 to be benchmarks against which subsequent measurements made at
time τ are compared.
If, at time τ, the instrumental parameters in use when the benchmark performance was established have
changed, normalize the data obtained for the test at time τ with respect to those changes.
8.5.2 Measurement precision
8.5.2.1 For each set of observations, calculate the following values:
- variance, V_S1 and V_S2 (see D.2);
- standard deviation, s_S1 and s_S2 (see D.3);
- precision, P_S1 and P_S2 (see D.4).
8.5.2.2 Test the following for significance:
- ratio of the variances, V_S1,0 and V_S1,τ (see D.5);
- ratio of the variances, V_S2,0 and V_S2,τ (see D.5).
8.5.3 Changes in response level
8.5.3.1 For each set of observations, calculate the following:
- mean values, S̄_1 and S̄_2 (see D.6);
- differences of the means, S̄_1,0 and S̄_1,τ, and S̄_2,0 and S̄_2,τ.
8.5.3.2 Test the following for significance:
- difference of the means, S̄_1,0 and S̄_1,τ (see D.7);
- difference of the means, S̄_2,0 and S̄_2,τ (see D.7).
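The calculations in 8.5.2 and 8.5.3 amount to a sample variance, a precision index of the form ts, a significance test on a ratio of variances and a significance test on a difference of means. The following sketch shows one way these can be implemented; it assumes that D.2 to D.7 take the familiar textbook forms (sample variance, two-sided F-test, two-sample t-test), which may differ in detail from the formulas actually given in Annex D.

```python
# Illustrative sketch only; Annex D of ISO 15239 defines the exact procedures.
import numpy as np
from scipy import stats

def variance_sd_precision(values, confidence=0.95):
    """Variance, standard deviation and precision index t*s for one set of
    replicate static measurements (cf. 8.5.2.1; assumed forms of D.2 to D.4)."""
    v = np.var(values, ddof=1)
    s = np.sqrt(v)
    t = stats.t.ppf(0.5 + confidence / 2, df=len(values) - 1)  # two-sided
    return v, s, t * s

def variance_ratio_significant(set_0, set_tau, alpha=0.05):
    """Two-sided F-test of the variances at time 0 and time tau
    (cf. 8.5.2.2; assumed form of D.5). True means a significant change."""
    v0, vt = np.var(set_0, ddof=1), np.var(set_tau, ddof=1)
    if v0 >= vt:
        f, dfn, dfd = v0 / vt, len(set_0) - 1, len(set_tau) - 1
    else:
        f, dfn, dfd = vt / v0, len(set_tau) - 1, len(set_0) - 1
    return f > stats.f.ppf(1 - alpha / 2, dfn, dfd)

def mean_shift_significant(set_0, set_tau, alpha=0.05):
    """t-test on the difference of mean responses from a reference standard
    (cf. 8.5.3; assumed form of D.7). True means the response level shifted."""
    t_stat, p_value = stats.ttest_ind(set_0, set_tau, equal_var=False)
    return p_value < alpha
```

With the minimum of 10 measurement periods required in 8.3, these functions give the benchmark values at time 0 and the repeat comparisons that are interpreted in 8.6.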
8.6 Results and interpretation
Record the following information:
 the date of the test;
 the identity of the reference standard(s) used;
 relevant instrument parameters;
 the conditions of the test;
 the values of the individual observations;
 the values calculated in 8.5;
 the results of the significance tests.
The calculated value of P indicates the base-line performance of the analyser, i.e. a lower limiting value for
operational accuracy over a similar measurement period. In practice, this value cannot be achieved since
other sources of variation, outside the analyser instrumentation, also contribute to operational accuracy.
When an acceptable initial value for variance has been determined (see 11.2) or re-established (see 11.4), designate it V_S1,0. If a subsequent determination of variance, V_S1,τ, is not significantly different from V_S1,0, then the instrumentation may be considered to be stable with respect to its contribution to random errors in the measurement of operational accuracy. If this is not the case, examine the instrumentation for the cause of the change.
When acceptable initial values for variance have been determined or re-established, designate the mean values of the observations as S̄_1,0 and S̄_2,0. If, in a subsequent determination at time τ, there is no significant difference between S̄_1,0 and S̄_1,τ, or between S̄_2,0 and S̄_2,τ, then it can be concluded that the instrumentation response has not changed in a way that will affect the calibration. If one or both differences are significant, then there has been a change that could affect the calibration. Whether it will translate to a detectable change in the calibration depends on the size of other sources of error outside the instrumentation and the precision with which the calibration is defined. Consider confirmation of the calibration (see Clause 9).
9 Calibration
9.1 General
Since on-line analysers can only make measurements by reference to values obtained by some other method,
the establishment and maintenance of a sound calibration is vital to accurate measurements.
The procedures described under 8.3 and 8.4 provide information that can be used to make corrections to the
calibration line for errors due to instrumentation variations. However they offer no help in detecting systematic
changes due to other sources, such as changes in the quality of the fuel, in its presentation or in the reference
sampling and analysis scheme. This can be achieved only by undertaking a test described in this section.
A particular problem may occur with a well-controlled system that maintains product quality within a narrow
range of values. It can be very inconvenient or even difficult to obtain, under dynamic conditions, a range of
values that is wide enough to define a calibration with adequate precision. In that case, this International
Standard describes a procedure (the static calibration), that is based on samples obtained other than under
dynamic conditions.
9.2 Objective
To confirm that a previously installed calibration remains valid and appropriate to the current operating
conditions.
9.3 Dynamic calibration
9.3.1 Test conditions
Test conditions include
 a two-instrument test scheme set up in accordance with the principles outlined in C.2,
 a comparison period that is of sufficient duration to allow enough reference sample increments to be
taken to achieve a high degree of reference sample precision (see C.5 and C.6),
 a set of reference samples that adequately covers the full range of values to be expected under operating
conditions,
 a minimum of 15 comparison periods (see C.5).
It might not always be possible to obtain a full range of expected values within a continuous test period. If this
is the case, data obtained from further test periods may be added to the initial data provided that stability of
the instrumentation has been shown to be satisfactory (see Clause 8) and the new data are consistent with it.
(See ISO 13909-8:2001, 11.5, for tests of homogeneity of data).
9.3.2 Test procedure
For each comparison period defined in the conditions of test, collect duplicate reference samples as described in ISO 13909-7:2001, Clause 7, and log the corresponding data, X_Ai, from the analyser. Ensure that the collection of samples and data is properly synchronised and that the sets of data all relate properly to their respective comparison periods. Prepare and analyse the duplicate test samples to give reference duplicate values (D_1i and D_2i).
9.4 Static calibration
9.4.1 General
The response of an analyser to solid mineral fuel presented under static conditions is unlikely to be the same
as that presented dynamically. In order to check the calibration using static samples, it is necessary first to
compare the analyser response under static and dynamic conditions and determine a correction factor
designated the static/dynamic response. This correction can then be applied to estimate the relationship
between analyser and reference values under dynamic conditions.
9.4.2 Test conditions
Test conditions include
 a two-instrument test scheme set up in accordance with the principles outlined in C.2,
 a measurement period that will minimize variations due to instrumentation,
 a minimum of 10 reference samples for static/dynamic response,
 a minimum of 15 samples of the product that adequately cover the full range of values to be expected
under operating conditions for static test.
9.4.3 Test procedure
9.4.3.1 Static/dynamic response
Collect reference samples under normal operating conditions. Log the corresponding analyser readings, X_ADy,i. Prepare a static calibration sample from each reference sample. The quantity and condition of the static calibration samples will depend on the design of the on-line analyser. Typically, for an on-belt ash analyser, about 5 kg of air-dried coal crushed to a top size of 1 mm is suitable. It is important that all samples in a set be prepared in the same manner using the same equipment. Present the static calibration samples to the analyser and log the analyser readings, X_Ai. Determine the static/dynamic response factor, f_SDR (see D.8).
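The definitive formula for f_SDR is given in D.8. Because 9.4.3.2 applies the factor as an additive correction to each static analyser value, the sketch below assumes the simplest additive form, the mean difference between the dynamic readings and the static readings of the corresponding calibration samples; confirm the actual form against D.8 before use.

```python
# Assumed additive form of the static/dynamic response factor; D.8 is definitive.
import numpy as np

def static_dynamic_response_factor(x_dynamic, x_static):
    """Mean offset between the dynamic readings X_ADy,i and the static
    readings X_Ai of the corresponding static calibration samples."""
    return float(np.mean(np.asarray(x_dynamic, float) - np.asarray(x_static, float)))

def correct_static_values(x_static, f_sdr):
    """Apply the correction used in 9.4.3.2: add f_SDR to each static value."""
    return np.asarray(x_static, float) + f_sdr
```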
9.4.3.2 Static test
Collect a set of samples, not necessarily under dynamic conditions. From each sample prepare a static
calibration sample and duplicate analysis test samples.
Present each static calibration sample to the analyser, under static conditions, for the chosen measurement period. Log the data, X_Ai, and analyse each duplicate test sample, D_1i and D_2i. Correct the results by adding f_SDR (see 9.4.3.1) to each static analyser value.
9.5 Data analysis
9.5.1 General
Calculate the following:
- mean duplicate reference values, D̄_i;
- differences, d_i, between the analyser values, X_Ai (corrected in the case of the static test), and the mean duplicate reference values, D̄_i.
9.5.2 Visual assessment
Plot a graph of the analyser values, X_Ai, against the mean duplicate reference sample values, D̄_i, and assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that other graphical procedures outlined in D.9 also be carried out before a full statistical
analysis of the data is undertaken.
9.5.3 Outliers
Check for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider possible outliers
identified in this way for removal in accordance with the criteria defined in D.10.3. Remove those values that
meet the criteria.
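Cochran's criterion flags the duplicate pair whose within-pair variance is an outlying fraction of the total. A minimal sketch is given below; it computes the statistic C from the duplicate differences and leaves the acceptance limit to be taken from D.10.2 or from published tables, since the critical value depends on the number of pairs.

```python
# Sketch of Cochran's outlier screen on duplicate reference pairs (cf. D.10.2).
import numpy as np

def cochran_statistic(d1, d2):
    """Return (C, index of the suspect pair). The variance of each pair is
    estimated from its duplicate difference as (D_1i - D_2i)**2 / 2."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    pair_var = (d1 - d2) ** 2 / 2.0
    return float(pair_var.max() / pair_var.sum()), int(pair_var.argmax())
```

Compare C with the critical value for the number of pairs and two replicates, and remove the flagged pair only if it also meets the criteria of D.10.3.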
9.5.4 Independence of differences
Test the list of differences for independence between observations (see D.11).
If the test is satisfactory, continue with the test for bias. Otherwise, discard the data and examine the scheme
for possible causes. Modify the test scheme if appropriate and repeat the test.
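The symbols ρ, E(ρ) and s(ρ) in Clause 4 indicate that D.11 is a runs test on the differences. The sketch below implements the common Wald-Wolfowitz form on the signs of the differences about zero; the variant specified in D.11 may differ, for example by taking runs about the median.

```python
# Assumed runs test for independence of the differences d_i (cf. D.11).
import numpy as np
from scipy import stats

def runs_test_independent(differences, alpha=0.05):
    """True if there is no evidence against independence of the differences."""
    signs = np.sign(np.asarray(differences, float))
    signs = signs[signs != 0]                              # drop exact zeros
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))        # observed rho
    n_pos, n_neg = int((signs > 0).sum()), int((signs < 0).sum())
    n = n_pos + n_neg
    expected = 2.0 * n_pos * n_neg / n + 1.0               # E(rho)
    std_err = np.sqrt(2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)
                      / (n ** 2 * (n - 1)))                # s(rho)
    z = (runs - expected) / std_err
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return p_value >= alpha
```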
9.5.5 Tests for bias
Calculate the slope of the linear regression of the analyser values, X_Ai, on the mean duplicate reference values, D̄_i, using an "errors-in-variables" (EIV) method (see D.12.1).
Calculate the variance of this slope (see D.12.2).
Test for the existence of significant bias of scale (see D.13).
If there is no significant bias of scale, test for bias of location (see D.14).
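Annex D.12 specifies the errors-in-variables method to be used. As an illustration of the technique named above, the sketch below uses Deming regression, a common EIV estimator in which delta is the assumed ratio of the analyser error variance to the reference error variance; it is not a substitute for the D.12 procedure.

```python
# Deming regression as an illustrative errors-in-variables (EIV) fit (cf. D.12).
import numpy as np

def deming_fit(reference_means, analyser_values, error_variance_ratio=1.0):
    """Slope and intercept of the EIV regression of the analyser values X_Ai
    on the mean duplicate reference values D_i. error_variance_ratio (delta)
    is Var(analyser error) / Var(reference error); delta = 1 gives orthogonal
    regression."""
    x = np.asarray(reference_means, float)   # reference means (abscissa)
    y = np.asarray(analyser_values, float)   # analyser values (ordinate)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    delta = error_variance_ratio
    slope = ((syy - delta * sxx)
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

Broadly, against the current calibration X_Ai = D̄_i, a slope that differs significantly from 1 points to bias of scale and a non-zero intercept to bias of location (see D.13 and D.14).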
9.6 Results and interpretation
Record the following information:
 date of the test;
 identity of the product;
 relevant instrument parameters;
 conditions of the test;
 values of the individual observations;
 results of the tests in 9.5.
Tests for bias of scale and total bias are made with reference to the current calibration that is described in this test by the relationship X_Ai = D̄_i. In the case of commissioning tests (see 11.2), the current calibration is the preliminary calibration installed in accordance with the manufacturer's instructions. Otherwise it is a calibration installed following a previous test of this kind.
If there is a significant change in bias of scale or of bias of location, which cannot be attributed to systematic
instrumentation variations (see 8.6), examine the reference method scheme for possible sources of variation
which have arisen since the previous confirmation of the calibration. Changes in solid mineral fuel quality are
also potential sources of systematic error to be considered. If the additional source of error cannot be
identified and eliminated, install a new calibration based on the relationship found between analyser and
reference values in the current test.
10 Operational measurement performance
10.1 General
When the analyser calibration has been established and the absence of significant bias demonstrated, the
precision of the measurement under operational (dynamic) conditions becomes the manifestation of the
accuracy of measurement of the analyser.
Depending on the particular application, one or both of two performance indicators may be considered for its
evaluation:
 analyser dynamic precision (initial performance testing);
 comparative dynamic precision (maintenance of measurement performance).
10.2 Determination of analyser dynamic precision
10.2.1 Test methods
The analyser dynamic precision is the definitive measure of analyser performance and is determined by either
of two comparative test methods (see C.2) that eliminate the errors due to the reference test method from the
result.
 The two-instrument test: this is the less complex test but it is susceptible to error if there is any calibration
bias present. Such errors will report to the analyser precision.
 The three-instrument test: this test requires a second, independent reference sampling system and
eliminates bias errors. However, care must be taken to ensure that all the relevant constraints on the test
procedure are fully met.
NOTE Provided that one of the reference test methods used is that which will also be used in subsequent routine
operations, it will also allow the measurement of comparative dynamic precision (see 10.3).
10.2.2 Objectives
The test methods described in 10.2.1 are designed to achieve two objectives:
 determination of the definitive measurement performance of the analyser;
 its comparison with a guaranteed value.
10.2.3 Test conditions
Test conditions include the following:
 two- or three-instrument test scheme set up in accordance with the principles outlined in C.2;
 comparison period chosen with reference to the conditions of the specified guaranteed performance.
(Note that if this application is required and if the performance guarantee is to be checked, the period
should be chosen with reference to the conditions of the specified performance. However, due regard
should be paid to the need to constrain the precision of the reference test method to be similar to the
expected value for the analyser; see C.5 and C.6);
 minimum of 15 periods for a two-instrument test and a minimum of 40 periods for a three-instrument test
(see C.5).
10.2.4 Test procedures
10.2.4.1 Two-instrument test
Collect duplicate reference samples as described in 9.3.2.
10.2.4.2 Three-instrument test
Collect reference samples, from both reference test methods, for each comparison period defined in the conditions of test, and log the corresponding data from the analyser, X_Ai. Ensure that the collection of samples and data is synchronised and that the sets of data all relate properly to their respective comparison periods. From each reference sample, prepare and analyse a single test sample (R_1i and R_2i).
10.2.5 Data analysis
10.2.5.1 Two-instrument test
10.2.5.1.1 Visual assessment
Plot a graph of analyser values, X_Ai, against the mean duplicate reference values, D̄_i.
Assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that the other graphical procedures outlined in D.9 also be carried out before a full
statistical analysis of the data is undertaken.
10.2.5.1.2 Outliers
Check each set of values for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider
possible outliers identified in this way for removal in accordance with the criteria defined in D.10.3. Remove
those values that meet the criteria.
10.2.5.1.3 Precision
Calculate the following:
- differences, x_dup,i, between the duplicate reference values D_1i and D_2i;
- variance of these differences, V_dup (see D.15);
- means of the duplicate reference values, D̄_i;
- differences, d_i, between the analyser values, X_Ai, and the mean duplicate reference values, D̄_i;
- variance of these differences, V_d (see D.2);
- variance due to the analyser, V_A, calculated as V_A = V_d − V_dup;
- analyser standard deviation, s_A.
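A sketch of the calculation above follows. The definitive variance formulas are those of D.2 and D.15; here V_dup is taken as the variance of the duplicate reference means (one common convention for duplicate data), so that subtracting it from V_d isolates the variance due to the analyser.

```python
# Sketch of the analyser dynamic precision from a two-instrument test
# (cf. 10.2.5.1.3); D.2 and D.15 give the definitive variance formulas.
import numpy as np
from scipy import stats

def analyser_dynamic_precision(x_analyser, d1, d2, confidence=0.95):
    """Return (V_A, s_A, precision t*s_A). V_dup is estimated here as the
    variance of the duplicate means, an assumed form of D.15."""
    x_a, d1, d2 = (np.asarray(v, float) for v in (x_analyser, d1, d2))
    x_dup = d1 - d2                                    # duplicate differences
    d_mean = (d1 + d2) / 2.0                           # mean duplicate values
    v_dup = np.sum(x_dup ** 2) / (4.0 * len(x_dup))    # variance of the means
    d = x_a - d_mean                                   # analyser minus reference
    v_d = np.var(d, ddof=1)
    v_a = v_d - v_dup                                  # variance due to analyser
    s_a = np.sqrt(max(v_a, 0.0))
    t = stats.t.ppf(0.5 + confidence / 2, df=len(d) - 1)
    return v_a, s_a, t * s_a
```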
10.2.5.1.4 Performance guarantee
Test for significance the difference between the performance indicator of the analyser, s_A, and the manufacturer's declared guarantee, s_g (see D.16).
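The test statistic for comparing s_A with the guarantee s_g is defined in D.16 (the symbols δ, Q and Z point to a dedicated procedure). Purely as an illustration of the idea, the sketch below uses a one-sided chi-squared test of the observed variance against the guaranteed variance; it is not the D.16 statistic.

```python
# Illustrative one-sided chi-squared comparison of s_A with the guarantee s_g;
# the test statistic actually required is specified in D.16.
from scipy import stats

def exceeds_guarantee(s_a, s_g, degrees_of_freedom, alpha=0.05):
    """True if s_A is significantly larger (worse) than the guaranteed s_g."""
    chi2 = degrees_of_freedom * (s_a / s_g) ** 2
    return chi2 > stats.chi2.ppf(1 - alpha, degrees_of_freedom)
```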
10.2.5.2 Three-instrument test
10.2.5.2.1 Visual assessment
Plot graphs of
- analyser values, X_Ai, against reference 1 values, R_1i;
- analyser values, X_Ai, against reference 2 values, R_2i;
- reference 1 values, R_1i, against reference 2 values, R_2i.
Assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that the other graphical procedures outlined in D.9 also be carried out before a full
statistical analysis of the data is undertaken.
10.2.5.2.2 Outliers
Check for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider possible outliers
identified in this way for removal in accordance with the criteria defined in D.10.3. Remove those values that
meet the criteria.
10.2.5.2.3 Precision
Using the technique of Grubbs' estimators, determine the following for the analyser and the two reference methods:
- the variance, V_A, V_R1, V_R2 (see D.17);
- the standard deviation, s_A, s_R1, s_R2 (see D.17);
- the precision, P_A, P_R1, P_R2 (see D.17).
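Grubbs' estimators recover the individual error variances of three instruments that measure the same test units from the covariances of their pairwise differences. A minimal sketch of that standard formulation follows; D.17 defines the procedure actually required, including the degrees of freedom to use for the precision index.

```python
# Sketch of Grubbs' estimators for the three-instrument test (cf. 10.2.5.2.3, D.17).
import numpy as np
from scipy import stats

def grubbs_variances(x_a, r1, r2):
    """Error-variance estimates for the analyser and the two reference methods.
    For instrument A, V_A = Cov(A - R1, A - R2); similarly for R1 and R2."""
    x_a, r1, r2 = (np.asarray(v, float) for v in (x_a, r1, r2))
    cov = lambda u, w: np.cov(u, w, ddof=1)[0, 1]
    v_a = cov(x_a - r1, x_a - r2)
    v_r1 = cov(r1 - x_a, r1 - r2)
    v_r2 = cov(r2 - x_a, r2 - r1)
    return v_a, v_r1, v_r2

def precision_index(variance, n, confidence=0.95):
    """Precision as t*s (see the Note to 3.17); negative variance estimates,
    which Grubbs' method can produce, are truncated at zero here."""
    s = np.sqrt(max(variance, 0.0))
    return stats.t.ppf(0.5 + confidence / 2, df=n - 1) * s
```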
10.2.5.2.4 Performance guarantee
Test for significance the difference between the performance indicator of the analyser, s_A, and the manufacturer's declared guarantee, s_g (see D.16).
10.2.6 Results and interpretation
Record the following information:
 date of the test;
 identity of the product;
...


INTERNATIONAL ISO
STANDARD 15239
First edition
2005-04-01
Solid mineral fuels — Evaluation of the
measurement performance of on-line
analysers
Combustibles minéraux solides — Évaluation de la performance de
mesure des analyseurs en ligne

Reference number
©
ISO 2005
PDF disclaimer
This PDF file may contain embedded typefaces. In accordance with Adobe's licensing policy, this file may be printed or viewed but
shall not be edited unless the typefaces which are embedded are licensed to and installed on the computer performing the editing. In
downloading this file, parties accept therein the responsibility of not infringing Adobe's licensing policy. The ISO Central Secretariat
accepts no liability in this area.
Adobe is a trademark of Adobe Systems Incorporated.
Details of the software products used to create this PDF file can be found in the General Info relative to the file; the PDF-creation
parameters were optimized for printing. Every care has been taken to ensure that the file is suitable for use by ISO member bodies. In
the unlikely event that a problem relating to it is found, please inform the Central Secretariat at the address given below.

©  ISO 2005
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying and microfilm, without permission in writing from either ISO at the address below or
ISO's member body in the country of the requester.
ISO copyright office
Case postale 56 • CH-1211 Geneva 20
Tel. + 41 22 749 01 11
Fax + 41 22 749 09 47
E-mail copyright@iso.org
Web www.iso.org
Published in Switzerland
ii © ISO 2005 – All rights reserved

Contents Page
Foreword. iv
Introduction . v
1 Scope. 1
2 Normative references . 1
3 Terms and definitions. 1
4 Symbols and abbreviations . 4
5 Principle . 6
6 Analyser installations . 6
7 Evaluation techniques. 8
8 Instrument stability. 9
9 Calibration. 12
10 Operational measurement performance. 15
11 Application. 20
Annex A (informative) On-line analysis techniques for solid mineral fuels . 22
Annex B (informative) Sources of measurement variance. 26
Annex C (normative) Comparative test methods. 28
Annex D (normative) Statistical assessment procedures. 36
Annex E (informative) Reference standards . 47
Annex F (informative) Specimen calculations. 48
Bibliography . 61

Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies
(ISO member bodies). The work of preparing International Standards is normally carried out through ISO
technical committees. Each member body interested in a subject for which a technical committee has been
established has the right to be represented on that committee. International organizations, governmental and
non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the
International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
International Standards are drafted in accordance with the rules given in the ISO/IEC Directives, Part 2.
The main task of technical committees is to prepare International Standards. Draft International Standards
adopted by the technical committees are circulated to the member bodies for voting. Publication as an
International Standard requires approval by at least 75 % of the member bodies casting a vote.
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent
rights. ISO shall not be held responsible for identifying any or all such patent rights.
ISO 15239 was prepared by Technical Committee ISO/TC 27, Solid mineral fuels, Subcommittee SC 5,
Methods of analysis.
iv © ISO 2005 – All rights reserved

Introduction
There are now many instruments in use which have been developed to enable the rapid on-line measurement
of solid mineral fuels for a range of parameters that indicate coal quality. The principles on which they are
based differ from those currently in use for sampling and analysis and, in effect, constitute a completely
different approach to the measurement of solid mineral fuel quality.
This standard has been developed to specify methods by which the measurement performance of such
analysers can be evaluated.
INTERNATIONAL STANDARD ISO 15239:2005(E)

Solid mineral fuels — Evaluation of the measurement
performance of on-line analysers
1 Scope
This International Standard sets out practices for the evaluation of the measurement performance of all types
of on-line analysers for solid mineral fuel.
It presents information on the different types of analyser currently available and describes procedures for the
evaluation of various aspects of measurement performance, appropriate methods of test and techniques for
the statistical assessment of the data collected.
2 Normative references
The following referenced documents are indispensable for the application of this document. For dated
references, only the edition cited applies. For undated references, the latest edition of the referenced
document (including any amendments) applies.
ISO 1213-2, Solid mineral fuels — Vocabulary — Part 2: Terms relating to sampling, testing and analysis
ISO 1988, Hard coals — Sampling
ISO 2309, Coke — Sampling
ISO 3534-1, Statistics — Vocabulary and symbols — Part 1: Probability and general statistical terms
ISO 5069 (all parts), Brown coals and lignites — Principles of sampling
ISO 13909 (all parts):2001, Hard coal and coke — Mechanical sampling
3 Terms and definitions
For the purposes of this document, the definitions given in ISO 1213-2, ISO 3534-1 and ISO 13909-1 and the
following apply.
3.1
accuracy
closeness of agreement between an observation and the “true” value
[ISO 1213-2:1992]
3.2
analyser dynamic precision
closeness of agreement between analyser values, obtained from solid mineral fuel interrogated by the
analyser under dynamic conditions and determined by a comparative test method which eliminates random
errors attributable to the reference test method
3.3
analyser test method
method of analysis which gives, for a solid mineral fuel process stream, values arising from the operation of
the on-line analyser, which are estimates of the true values for specified measurands
3.4
analyser value
value of a specified measurand in a test unit that is obtained from a test carried out by an analyser test
method
3.5
backscatter geometry
arrangement of an interrogation process in which a source of incident energy and a detector system are on
the same, or adjacent, sides of the solid mineral fuel passing through the interrogation zone
3.6
bias
systematic error which leads to the average value of a series of results being persistently higher or
persistently lower than those which are obtained using a reference test method
[ISO 13909-1]
3.7
bias of scale
bias that varies as a function of the range of values measured
3.8
bias of location
bias that is constant and independent of the range of values measured
3.9
comparative dynamic precision
closeness of agreement between analyser values obtained from solid mineral fuel interrogated by the analyser
under dynamic conditions and those determined by a comparative test method, which includes random errors
attributable to the reference test method
3.10
comparative test method
method of testing in which analyser values are compared with corresponding reference values
3.11
comparison period
period of time, during which a test unit is interrogated by an analyser to give an analyser value and is sampled
by a reference test method to obtain a reference value, for a measurand
NOTE The period can be based on the typical time to produce a particular mass of solid mineral fuel, e.g. a trainload,
or on a period which coincides with operations, e.g. a shift, or some other period that is convenient to, or preferred for, a
specific evaluation procedure.
3.12
interrogation process
procedure which elicits from the solid mineral fuel process stream a measurable response related, specifically
or by inference, to the quantity of the measurand
3.13
interrogation volume
volume of the solid mineral fuel process stream in which the detected response to the interrogation process
originates
2 © ISO 2005 – All rights reserved

3.14
interrogation zone
part of the analyser installation through which the solid mineral fuel process stream passes and in which it is
subjected to the interrogation process
3.15
mainstream configuration
configuration in which the whole of the process stream to be analysed is presented to, although not
necessarily analysed by, an on-line analyser
3.16
on-line analyser
instrument for the measurement, continuously, of one or more quality indicators of solid mineral fuel while it is
undergoing processing or handling, to give data rapidly and automatically
3.17
precision
closeness of agreement between independent results obtained under stipulated conditions
[ISO 3534-1:1993]
NOTE For the purposes of this International Standard, the index of precision used is ± ts, where t is the value of
Student's t (95 % confidence level, two-sided) and s is the standard deviation of the observations about the mean value.
3.18
reference test method
method of sampling, sample preparation and analysis which is expected to give, for a solid mineral fuel
process stream, values which are unbiased estimates of the true values for specified measurands
3.19
reference value
value of a specified measurand in a test unit that is obtained from a test carried out by a reference test method
and which serves as a reference for comparison with an analyser value
NOTE For the purposes of this International Standard, reference values are considered to be conventional true
values.
3.20
sample
quantity of fuel, representative of a larger mass, for which the quality is to be determined
[ISO 13909-1]
3.21
static repeatability
closeness of agreement between replicate analyser values obtained from a reference standard in the
interrogation zone of the analyser
3.22
sub-stream configuration
configuration in which a part of the process stream to be analysed is diverted by means of a suitable sampling
system for presentation to an on-line analyser
3.23
test unit
quantity of solid mineral fuel chosen for the determination of analyser and reference values
3.24
transmission geometry
arrangement of an interrogation process in which a source of incident energy and a detector system are on
opposite sides of the solid mineral fuel passing through the interrogation zone.
4 Symbols and abbreviations
4.1 Mathematical
4.1.1 Primary
 β regression coefficient (slope)
 C Cochran's criterion
 d difference between pairs of values (other than duplicates)
 D duplicate 1 reference test method value
 D duplicate 2 reference test method value
 D mean of duplicate reference test method values
 δ test statistic (see D.16)
 EIV errors in variables
 E(ρ) expected number of runs
 F F-distribution
 f static/dynamic response factor
SDR
 L confidence level
C
 n number of values in a set
 P precision
 Q test statistic (see D.16)
 R reference test method value
 R reference test method 1 value
 R reference test method 2 value
 r linear correlation coefficient
 ρ run
 S reference standard 1 value
 S reference standard 2 value
4 © ISO 2005 – All rights reserved

 s standard deviation
 s the expected (guaranteed) value of precision of the analyser at one standard deviation
g
 s(ρ) standard error of number of runs
 σ population standard deviation
 t Student's t-distribution
 V variance
 ν degrees of freedom
 X analyser test method value
A
 x any value in a set
 x difference between pairs of duplicate values
dup
 χ chi-squared distribution
 Z test statistic (see D.16)
 z normal deviate
4.1.2 Subscripts
 A set of analyser test method values
 c critical value
 d set of differences
 dup set of duplicate differences
 Dy set of dynamic calibration values
 D1 set of duplicate reference 1 test method values
 D2 set of duplicate reference 2 test method values
 D set of means of duplicate reference test method values
 g guaranteed value
 i ith value
 max maximum value
 0 time zero
 R1 set of reference test method 1 values
 R2 set of reference test method 2 values
 St set of static calibration values
 S1 set of reference standard 1 values
 S2 set of reference standard 2 values
 τ time
 1 set 1
 2 set 2
4.2 Other abbreviations
 GHz gigahertz
 keV kilo-electron volt
 MeV mega-electron volt
 RF radiofrequency
5 Principle
The performance of an on-line analyser, which has been set up and calibrated, is evaluated by procedures
that address three main aspects of analyser operation. These are the stability of the instrumentation, the
validity of the calibration and the precision of measurement under operational conditions. Instrument stability
is assessed by static measurements made, in replicate, at operationally significant intervals of time. The
installed calibration is confirmed by making simultaneous comparative measurements with the analyser and a
reference method of analysis over a range of measurand values which encompasses at least the spread of
values encountered in normal operations. Operational performance is evaluated by comparison of analyser
values with reference values obtained from separate reference procedures.
6 Analyser installations
6.1 General
There are many types of analyser, based on a variety of measurement principles and possible installation
configurations, which have been designed to measure one or more indicators of quality in a range of products
that occur in solid mineral fuel process streams.
The measurement principles on which analysers are based may be divided into four classes, as outlined in 6.2.
6.2 Analyser types
6.2.1 Absorption/scattering processes
The majority of on-line analysers for solid mineral fuel depend upon the existence of a quantitative relationship
between the measurand and the degree of absorption and/or scattering of a beam of electromagnetic
radiation or neutrons incident upon the solid mineral fuel flowing through the interrogation zone of the analyser.
Incident electromagnetic radiation, in the X-, gamma, microwave or optical energy regions, or neutron
radiation may be used; source, sample and detector may be arranged in transmission or backscatter
geometry.
6.2.2 Excitation processes
A second group depends on a quantitative relationship between the measurand and the emission of specific electromagnetic radiation (X- or gamma rays) arising as a result of excitation by an outside source of X-, gamma or neutron radiation.
6.2.3 Natural radiation emission
In this class, the gamma radiation emitted by naturally occurring radioisotopes, present in the measurand in
relatively constant proportions, is measured.
6.2.4 Property changes
A few analysers depend upon an effect of the measurand on a selected electrical or physical property that is
measurable on line.
NOTE Annex A gives information on techniques for on-line analysis.
6.3 Methods of presentation
The solid mineral fuel to be analysed may be transported through or past the analyser on a conveyor belt or
other supporting platform, or within the confines of a container, chute or pipe. In most designs, the analyser
detection system is physically non-invasive and non-contacting with the solid mineral fuel.
The condition of the solid mineral fuel presented to the analyser varies, among the methods of analysis, from
material as it occurs in the process stream, to crushed, mixed and possibly dried material which has been
carefully profiled.
The solid mineral fuel may be presented to the analyser as a bulk solid or as a fuel-water slurry.
Two basic installation configurations for on-line analysers are possible (see Figure 1). The choice between the
two for any particular application depends on the type of analyser appropriate to the measurand and certain
parameters of the product and the plant, such as particle size and flow rate.
6.4 Installation configurations
6.4.1 Mainstream
A mainstream configuration is a system in which the whole of the process stream for which the analytical
information is required is presented to the analyser. The system can contain conditioning steps, such as
mixing and profiling, prior to interrogation by the analyser.
6.4.2 Sub-stream
A sub-stream configuration is a system in which a portion of the process stream is diverted to the analyser by
means of a suitable sampling process. The diverted portion of the stream may be subsequently subjected to
sample preparation procedures, such as crushing, dividing and conditioning before presentation to the
analyser. After interrogation the sub-stream is normally returned to the main process stream.
Figure 1 — Analyser configurations
7 Evaluation techniques
The procedures described in this International Standard are designed to allow the evaluation of analyser
performance in a range of situations and conditions of operation.
They are intended to be applied to an analyser after it has been set up and calibrated as recommended by the
manufacturer, with all instrumental parameters at their normal operational values for the particular installation.
In order to make a full evaluation of on-line analyser performance, it is necessary to address three
interdependent aspects of analyser operation:
 instrument stability;
 calibration confirmation;
 operational measurement performance.
Since some of the measurement errors that are attributable to the analyser occur only as a result of
operations on, or interactions with, the moving process stream, it is essential for a full evaluation of
measurement performance to carry out tests under dynamic conditions.
Nevertheless, information from static tests, although more limited in its nature, is useful for monitoring some
aspects of analyser performance on a routine basis.
An understanding of the sources of variance that contribute to the errors of measurement of the analyser and
of any reference system with which it is compared, is necessary for the proper design of tests and the
evaluation of the results. Sources of variance are discussed in Annex B.
The procedures used vary with the situation but have many features in common. General considerations for
the design and operation of comparative tests are given in Annex C and techniques for the statistical analysis
of the data in Annex D.
The principal steps involved in an evaluation are as follows:
 decide which aspect of analyser operation is to be evaluated (see Note);
 choose an appropriate method of test and design a scheme of operation;
 carry out the test procedure;
— apply appropriate statistical treatment to the data obtained from the test.
NOTE Frequently a situation will require more than one aspect to be considered (see Clause 11).
8 Instrument stability
8.1 General
It is a pre-requisite to accurate measurement by an on-line analyser that the instrumentation be stable and
contribute as little as possible to the total error of measurement. Errors arising from the instrumentation may
be random or systematic.
An estimate of random variations attributable to the instrumentation is obtained by determining the static
repeatability. A significant increase in this value with time is an indicator of changes in instrumental
characteristics that may need investigation and could lead to a worsening of the measurement performance of
the analyser. Static repeatability is also an indicator of the limiting value of accuracy achievable (base-line
performance).
Systematic instrumentation changes, which could affect the calibration if they are sufficiently large, are
indicated by changes in the level of response from reference standards. These changes can provide the
information needed to compensate for systematic instrumentation errors. In some analysers this process is
carried out automatically at intervals and a correction applied.
Random and systematic variations originating in the instrumentation can be measured simultaneously by a
relatively simple procedure that is amenable to routine use.
8.2 Objectives
The test methods and methods of data analysis described in 8.3, 8.4 and 8.5 are designed to achieve three
objectives:
 to establish benchmarks against which subsequent tests for variations due to instrument instability may
be compared;
 to determine and monitor the contribution made to the overall measurement performance of the analyser
by random variations in response originating in the instrumentation;
 to monitor systematic changes in response, originating in the instrumentation, which may affect
calibration.
8.3 Test conditions
Test conditions require
 a measurement period that will return an adequate precision to allow the testing of the significance of any
changes from previous measurements,
 a minimum of 10 periods (see note).
NOTE A larger number of periods will increase the precision, but with diminishing returns; a practical maximum is
probably about 20.
8.4 Test procedure
Make an initial set of consecutive replicate measurements under the chosen test conditions with a reference
standard (see Notes 1 and 2) in the interrogation zone of the analyser. If it is required to detect systematic
changes, repeat with a second reference standard that offers a different level of response from the first one.
Designate these as measurements made at time 0. The levels of response from the standards chosen for the
detection of systematic changes should be representative of those which are obtained from values of
measurands close to each end of the calibration range. This will ensure that any shift of the calibration line
and/or change in slope is signalled.
After an interval, repeat the above procedure. Designate these as measurements made at time τ. A suitable
interval depends upon the usual operating programme of the analyser. The repeat test should normally be
part of a regular check routine at intervals of a shift or a day, for example. A special check should be
undertaken after any system changes (see Clause 11).
NOTE 1 Information on suitable reference standards is given in Annex E.
NOTE 2 Evaluation of instrument stability might not be possible with some designs of analysers that cannot accept a
reference standard. In others, a suitable response might be available from the empty interrogation zone.
8.5 Data analysis
8.5.1 General
Consider measurements made at time 0 to be benchmarks against which subsequent measurements made at
time τ are compared.
If, at time τ, the instrumental parameters in use when the benchmark performance was established have
changed, normalize the data obtained for the test at time τ with respect to those changes.
8.5.2 Measurement precision
8.5.2.1 For each set of observations calculate the following values:
— variance, V_S1 and V_S2 (see D.2);
— standard deviation, s_S1 and s_S2 (see D.3);
— precision, P_S1 and P_S2 (see D.4).
8.5.2.2 Test the following for significance:
— ratio of the variances, V_S1,0 and V_S1,τ (see D.5);
— ratio of the variances, V_S2,0 and V_S2,τ (see D.5).
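The normative variance-ratio test is defined in D.5 and is not reproduced here. Purely as an illustration of the kind of comparison involved, the following sketch applies a two-sided F-test to the replicate static measurements made at time 0 and time τ; the function name, the 95 % confidence level and the example data are assumptions introduced for the example only.

```python
import numpy as np
from scipy import stats

def variance_ratio_test(x0, xt, alpha=0.05):
    """Two-sided F-test comparing the variances of two sets of replicate
    static measurements (time 0 versus time tau). Illustrative sketch only;
    the normative procedure is that of D.5."""
    x0, xt = np.asarray(x0, float), np.asarray(xt, float)
    v0, vt = x0.var(ddof=1), xt.var(ddof=1)            # V_S,0 and V_S,tau
    f = max(v0, vt) / min(v0, vt)                      # larger over smaller variance
    df_num = len(x0) - 1 if v0 >= vt else len(xt) - 1
    df_den = len(xt) - 1 if v0 >= vt else len(x0) - 1
    f_crit = stats.f.ppf(1 - alpha / 2, df_num, df_den)
    return {"V_0": v0, "V_tau": vt, "F": f, "F_crit": f_crit,
            "significant": f > f_crit}

# Ten illustrative replicate readings of reference standard S1 at each time
s1_time0 = [10.12, 10.08, 10.15, 10.11, 10.09, 10.14, 10.10, 10.13, 10.07, 10.12]
s1_tau   = [10.20, 10.05, 10.18, 10.02, 10.22, 10.09, 10.16, 10.04, 10.21, 10.08]
print(variance_ratio_test(s1_time0, s1_tau))
```

The same comparison is made for the second reference standard (V_S2,0 against V_S2,τ).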
8.5.3 Changes in response level
8.5.3.1 For each set of observations, calculate the following:
— mean values, S̄_1 and S̄_2 (see D.6);
— differences of the means, S̄_1,0 and S̄_1,τ, and S̄_2,0 and S̄_2,τ.
8.5.3.2 Test the following for significance:
— difference of the means, S̄_1,0 and S̄_1,τ (see D.7);
— difference of the means, S̄_2,0 and S̄_2,τ (see D.7).
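The significance test for a change in mean response is defined in D.7. The sketch below shows one common form of such a comparison, a two-sample Student's t-test on the replicate readings at time 0 and time τ; it is an illustration under that assumption, not the normative calculation.

```python
import numpy as np
from scipy import stats

def mean_shift_test(x0, xt, alpha=0.05):
    """Two-sample t-test for a change in the mean response of a reference
    standard between time 0 and time tau. Illustrative sketch only; the
    normative significance test is given in D.7."""
    x0, xt = np.asarray(x0, float), np.asarray(xt, float)
    t_stat, p_value = stats.ttest_ind(x0, xt, equal_var=True)
    return {"mean_0": float(x0.mean()), "mean_tau": float(xt.mean()),
            "t": float(t_stat), "p": float(p_value),
            "significant": p_value < alpha}
```

Applied separately to the sets for reference standards S_1 and S_2, a significant shift at either response level flags a possible effect on the calibration (see 8.6).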
8.6 Results and interpretation
Record the following information:
 the date of the test;
 the identity of the reference standard(s) used;
 relevant instrument parameters;
 the conditions of the test;
 the values of the individual observations;
 the values calculated in 8.5;
 the results of the significance tests.
The calculated value of P indicates the base-line performance of the analyser, i.e. a lower limiting value for
operational accuracy over a similar measurement period. In practice, this value cannot be achieved since
other sources of variation, outside the analyser instrumentation, also contribute to operational accuracy.
When an acceptable initial value for variance has been determined (see 11.2) or re-established (see 11.4), designate it V_S1,0. If a subsequent determination of variance, V_S1,τ, is not significantly different from V_S1,0, then the instrumentation may be considered to be stable with respect to its contribution to random errors in the measurement of operational accuracy. If this is not the case, examine the instrumentation for the cause of the change.
When acceptable initial values for variance have been determined or re-established, designate the mean values of the observations as S̄_1,0 and S̄_2,0. If, in a subsequent determination at time τ, there is no significant difference between S̄_1,0 and S̄_1,τ, or between S̄_2,0 and S̄_2,τ, then it can be concluded that the instrumentation response has not changed in a way that will affect the calibration. If one or both differences are significant, then there has been a change that could affect the calibration. Whether it will translate to a detectable change in the calibration depends on the size of other sources of error outside the instrumentation and the precision with which the calibration is defined. Consider confirmation of the calibration (see Clause 9).
9 Calibration
9.1 General
Since on-line analysers can only make measurements by reference to values obtained by some other method,
the establishment and maintenance of a sound calibration is vital to accurate measurements.
The procedures described under 8.3 and 8.4 provide information that can be used to make corrections to the
calibration line for errors due to instrumentation variations. However, they offer no help in detecting systematic
changes due to other sources, such as changes in the quality of the fuel, in its presentation or in the reference
sampling and analysis scheme. This can be achieved only by undertaking a test described in this section.
A particular problem may occur with a well-controlled system that maintains product quality within a narrow
range of values. It can be very inconvenient or even difficult to obtain, under dynamic conditions, a range of
values that is wide enough to define a calibration with adequate precision. In that case, this International
Standard describes a procedure (the static calibration), that is based on samples obtained other than under
dynamic conditions.
9.2 Objective
To confirm that a previously installed calibration remains valid and appropriate to the current operating
conditions.
9.3 Dynamic calibration
9.3.1 Test conditions
Test conditions include
 a two-instrument test scheme set up in accordance with the principles outlined in C.2,
 a comparison period that is of sufficient duration to allow enough reference sample increments to be
taken to achieve a high degree of reference sample precision (see C.5 and C.6),
 a set of reference samples that adequately covers the full range of values to be expected under operating
conditions,
 a minimum of 15 comparison periods (see C.5).
It might not always be possible to obtain a full range of expected values within a continuous test period. If this
is the case, data obtained from further test periods may be added to the initial data provided that stability of
the instrumentation has been shown to be satisfactory (see Clause 8) and the new data are consistent with it.
(See ISO 13909-8:2001, 11.5, for tests of homogeneity of data).
9.3.2 Test procedure
For each comparison period defined in the conditions of test, collect duplicate reference samples as described in ISO 13909-7:2001, Clause 7, and log the corresponding data (X_Ai) from the analyser. Ensure that the collection of samples and data are properly synchronised and that sets of data all relate properly to their respective comparison periods. Prepare and analyse the duplicate test samples to give reference duplicate values (D_1i and D_2i).
9.4 Static calibration
9.4.1 General
The response of an analyser to solid mineral fuel presented under static conditions is unlikely to be the same
as that presented dynamically. In order to check the calibration using static samples, it is necessary first to
compare the analyser response under static and dynamic conditions and determine a correction factor
designated the static/dynamic response. This correction can then be applied to estimate the relationship
between analyser and reference values under dynamic conditions.
9.4.2 Test conditions
Test conditions include
 a two-instrument test scheme set up in accordance with the principles outlined in C.2,
 a measurement period that will minimize variations due to instrumentation,
 a minimum of 10 reference samples for static/dynamic response,
 a minimum of 15 samples of the product that adequately cover the full range of values to be expected
under operating conditions for static test.
9.4.3 Test procedure
9.4.3.1 Static/dynamic response
Collect reference samples under normal operating conditions. Log the corresponding analyser readings, X_ADy,i.
Prepare a static calibration sample from each reference sample. The quantity and condition of the static calibration samples will depend on the design of the on-line analyser. Typically, for an on-belt ash analyser, about 5 kg of air-dried coal crushed to a top size of 1 mm is suitable. It is important that all samples in a set be prepared in the same manner using the same equipment. Present the static calibration samples to the analyser and log the analyser readings, X_Ai. Determine the static/dynamic response factor (f_SDR; see D.8).
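D.8 gives the normative definition of the static/dynamic response factor. Because 9.4.3.2 corrects static readings by adding f_SDR, the sketch below assumes an additive factor equal to the mean difference between the paired dynamic and static readings; that additive form, the function name and the example figures are assumptions made for illustration only.

```python
import numpy as np

def static_dynamic_response_factor(x_dynamic, x_static):
    """Estimate f_SDR as the mean difference between analyser readings taken
    dynamically (X_ADy,i) and statically (X_Ai) on paired samples. The
    additive form is an assumption consistent with 9.4.3.2; D.8 gives the
    normative definition."""
    x_dynamic = np.asarray(x_dynamic, float)
    x_static = np.asarray(x_static, float)
    if x_dynamic.shape != x_static.shape:
        raise ValueError("readings must be paired sample by sample")
    return float(np.mean(x_dynamic - x_static))

# Illustrative paired ash readings (% ash) for ten reference samples
f_sdr = static_dynamic_response_factor(
    [15.2, 14.8, 16.1, 15.5, 14.9, 15.8, 15.0, 15.4, 16.0, 15.1],
    [14.9, 14.5, 15.7, 15.2, 14.6, 15.4, 14.7, 15.1, 15.6, 14.8])
```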
9.4.3.2 Static test
Collect a set of samples, not necessarily under dynamic conditions. From each sample prepare a static
calibration sample and duplicate analysis test samples.
Present each static calibration sample to the analyser, under static conditions, for the chosen measurement period. Log the data, X_Ai, and analyse each duplicate test sample, D_1i and D_2i. Correct the results by adding f_SDR (see 9.4.3.1) to each static analyser value.
9.5 Data analysis
9.5.1 General
Calculate the following:
— mean duplicate reference values, D̄_i;
— differences, d_i, between the analyser values, X_Ai (corrected in the case of static test), and the mean duplicate reference values, D̄_i.
9.5.2 Visual assessment
Plot a graph of the analyser values, X_Ai, against the mean duplicate reference sample values, D̄_i, and assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that other graphical procedures outlined in D.9 also be carried out before a full statistical
analysis of the data is undertaken.
9.5.3 Outliers
Check for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider possible outliers
identified in this way for removal in accordance with the criteria defined in D.10.3. Remove those values that
meet the criteria.
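By way of illustration of the Cochran check, the sketch below computes Cochran's criterion from the duplicate reference determinations, each pair contributing the variance estimate (D_1i − D_2i)²/2. The closed-form critical value used here is a commonly quoted approximation and is an assumption; the normative criterion and removal rules are those of D.10.2 and D.10.3.

```python
import numpy as np
from scipy import stats

def cochran_criterion(duplicate_pairs, alpha=0.05):
    """Cochran's criterion for an outlying pair of duplicate determinations.
    Illustrative sketch; the normative procedure and outlier-removal rules
    are given in D.10.2 and D.10.3."""
    pairs = np.asarray(duplicate_pairs, float)          # shape (k, 2)
    s2 = (pairs[:, 0] - pairs[:, 1]) ** 2 / 2.0         # per-pair variance estimates
    c = s2.max() / s2.sum()                             # Cochran's C
    k, n = len(s2), 2                                   # k pairs of n = 2 values
    # Commonly used closed form for the upper critical value of Cochran's C
    # (assumption: consistent with the tables referenced in D.10.2).
    f_crit = stats.f.ppf(1 - alpha / k, n - 1, (k - 1) * (n - 1))
    c_crit = 1.0 / (1.0 + (k - 1) / f_crit)
    return {"C": float(c), "C_crit": float(c_crit),
            "suspect_pair": int(s2.argmax()), "possible_outlier": c > c_crit}
```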
9.5.4 Independence of differences
Test the list of differences for independence between observations (see D.11).
If the test is satisfactory, continue with the test for bias. Otherwise, discard the data and examine the scheme
for possible causes. Modify the test scheme if appropriate and repeat the test.
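The normative independence test is given in D.11. The symbols E(ρ) and s(ρ) in Clause 4 suggest a runs-based test; on that assumption, the sketch below applies a Wald-Wolfowitz runs test to the signs of the differences about their median. The function name and the 95 % confidence level are illustrative.

```python
import numpy as np
from scipy import stats

def runs_test(differences, alpha=0.05):
    """Wald-Wolfowitz runs test for independence of the differences d_i,
    based on their signs about the median. Assumed to be of the same family
    as the procedure in D.11; treat as an illustrative sketch."""
    d = np.asarray(differences, float)
    signs = np.sign(d - np.median(d))
    signs = signs[signs != 0]                        # drop values equal to the median
    n_pos = int(np.sum(signs > 0))
    n_neg = int(np.sum(signs < 0))
    n = n_pos + n_neg
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # observed number of runs, rho
    e_runs = 2.0 * n_pos * n_neg / n + 1.0                               # E(rho)
    s_runs = np.sqrt(2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)
                     / (n ** 2 * (n - 1)))                               # s(rho)
    z = (runs - e_runs) / s_runs
    return {"runs": runs, "E_runs": e_runs, "s_runs": float(s_runs),
            "z": float(z), "independent": abs(z) <= stats.norm.ppf(1 - alpha / 2)}
```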
9.5.5 Tests for bias
Calculate the slope of the linear regression of the analyser values, X_Ai, on the mean duplicate reference values, D̄_i, using an "errors-in-variables" (EIV) method (see D.12.1).
Calculate the variance of this slope (see D.12.2).
Test for existence of significant bias of scale (see D.13).
If there is no significant bias of scale, test for bias of location (see D.14).
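D.12.1 prescribes the errors-in-variables estimator to be used. As an illustration of the general EIV approach only, the sketch below implements Deming regression of the analyser values X_Ai on the duplicate means D̄_i; the ratio of error variances passed in and the function name are assumptions made for this example.

```python
import numpy as np

def deming_regression(d_bar, x_a, var_ratio=1.0):
    """Errors-in-variables (Deming) regression of analyser values X_Ai on the
    mean duplicate reference values D_bar_i. var_ratio is the assumed ratio
    of the error variance of the analyser values to that of the reference
    means. Illustrative only; D.12.1 defines the normative EIV estimator.
    Returns (slope beta, intercept)."""
    x = np.asarray(d_bar, float)
    y = np.asarray(x_a, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    beta = ((syy - var_ratio * sxx)
            + np.sqrt((syy - var_ratio * sxx) ** 2
                      + 4.0 * var_ratio * sxy ** 2)) / (2.0 * sxy)
    return float(beta), float(y.mean() - beta * x.mean())
```

Bias of scale is then tested against a slope of 1 (see D.13); if that test passes, bias of location is tested (see D.14).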
9.6 Results and interpretation
Record the following information:
 date of the test;
 identity of the product;
 relevant instrument parameters;
 conditions of the test;
 values of the individual observations;
 results of the tests in 9.5.
Tests for bias of scale and total bias are made with reference to the current calibration that is described in this test by the relationship X_Ai = D̄_i. In the case of commissioning tests (see 11.2), the current calibration is the preliminary calibration installed in accordance with the manufacturer's instructions. Otherwise it is a calibration installed following a previous test of this kind.
If there is a significant change in bias of scale or in bias of location, which cannot be attributed to systematic
instrumentation variations (see 8.6), examine the reference method scheme for possible sources of variation
which have arisen since the previous confirmation of the calibration. Changes in solid mineral fuel quality are
also potential sources of systematic error to be considered. If the additional source of error cannot be
identified and eliminated, install a new calibration based on the relationship found between analyser and
reference values in the current test.
10 Operational measurement performance
10.1 General
When the analyser calibration has been established and the absence of significant bias demonstrated, the
precision of the measurement under operational (dynamic) conditions becomes the manifestation of the
accuracy of measurement of the analyser.
Depending on the particular application, one or both of two performance indicators may be considered for its
evaluation:
 analyser dynamic precision (initial performance testing);
 comparative dynamic precision (maintenance of measurement performance).
10.2 Determination of analyser dynamic precision
10.2.1 Test methods
The analyser dynamic precision is the definitive measure of analyser performance and is determined by either
of two comparative test methods (see C.2) that eliminate the errors due to the reference test method from the
result.
 The two-instrument test: this is the less complex test but it is susceptible to error if there is any calibration
bias present. Such errors will report to the analyser precision.
 The three-instrument test: this test requires a second, independent reference sampling system and
eliminates bias errors. However, care must be taken to ensure that all the relevant constraints on the test
procedure are fully met.
NOTE Provided that one of the reference test methods used is that which will also be used in subsequent routine
operations, it will also allow the measurement of comparative dynamic precision (see 10.3).
10.2.2 Objectives
The test methods described in 10.2.1 are designed to achieve two objectives:
 determination of the definitive measurement performance of the analyser;
 its comparison with a guaranteed value.
10.2.3 Test conditions
Test conditions include the following:
 two- or three-instrument test scheme set up in accordance with the principles outlined in C.2;
 comparison period chosen with reference to the conditions of the specified guaranteed performance.
(Note that if this application is required and if the performance guarantee is to be checked, the period
should be chosen with reference to the conditions of the specified performance. However, due regard
should be paid to the need to constrain the precision of the reference test method to be similar to the
expected value for the analyser; see C.5 and C.6);
 minimum of 15 periods for a two-instrument test and a minimum of 40 periods for a three-instrument test
(see C.5).
10.2.4 Test procedures
10.2.4.1 Two-instrument test
Collect duplicate reference samples as described in 9.3.2.
10.2.4.2 Three-instrument test
Collect reference samples, from both reference test methods, for each comparison period defined in the conditions of test and log the corresponding data from the analyser, X_Ai. Ensure that the collection of samples and data are synchronised and that the sets of data all relate properly to their respective comparison periods. From each reference sample, prepare and analyse a single test sample (R_1i and R_2i).
10.2.5 Data analysis
10.2.5.1 Two-instrument test
10.2.5.1.1 Visual assessment
Plot a graph of analyser values, X_Ai, against the mean duplicate reference values, D̄_i.
Assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that the other graphical procedures outlined in D.9 also be carried out before a full
statistical analysis of the data is undertaken.
10.2.5.1.2 Outliers
Check each set of values for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider
possible outliers identified in this way for removal in accordance with the criteria defined in D.10.3. Remove
those values that meet the criteria.
10.2.5.1.3 Precision
Calculate the following:
— differences, x_dup,i, between the duplicate reference values D_1i and D_2i;
— variance of these differences, V_dup (see D.15);
— means of the duplicate reference values, D̄_i;
— differences, d_i, between the analyser values, X_Ai, and the mean duplicate reference values, D̄_i;
— variance of these differences, V_d (see D.2);
— variance due to the analyser, V_A, calculated as V_A = V_d − V_dup;
— analyser standard deviation, s_A.
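The calculation chain above can be sketched as follows; the scaling used to obtain V_dup from the duplicate differences is an assumption (D.15 gives the normative form), and negative estimates of V_A can occur when the analyser contribution is small.

```python
import numpy as np

def two_instrument_precision(x_a, d1, d2):
    """Analyser variance from a two-instrument test via V_A = V_d - V_dup.
    V_d is the variance of the differences between analyser values and the
    duplicate means; the estimate of V_dup used here (sum of x_dup**2 / 4n,
    i.e. the variance contribution of the duplicate means) is an assumption,
    the normative form being given in D.15. Returns (V_A, s_A)."""
    x_a, d1, d2 = (np.asarray(v, float) for v in (x_a, d1, d2))
    n = len(x_a)
    x_dup = d1 - d2                                    # duplicate differences
    d_bar = (d1 + d2) / 2.0                            # duplicate means, D_bar_i
    v_d = np.var(x_a - d_bar, ddof=1)                  # V_d (see D.2)
    v_dup = np.sum(x_dup ** 2) / (4.0 * n)             # assumed form of V_dup
    v_a = v_d - v_dup                                  # V_A = V_d - V_dup
    s_a = float(np.sqrt(v_a)) if v_a > 0 else float("nan")
    return float(v_a), s_a
```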
10.2.5.1.4 Performance guarantee
Test for significance the difference between the performance indicator of the analyser, s_A, and the manufacturer's declared guarantee, s_g (see D.16).
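D.16 defines the test statistics (δ, Q and Z) used for the guarantee comparison and is not reproduced here. Solely to illustrate the general form of a variance-against-guarantee comparison, the sketch below applies a one-sided chi-square test of the observed analyser variance against s_g²; it is not the D.16 procedure.

```python
from scipy import stats

def guarantee_check(s_a, s_g, dof, alpha=0.05):
    """One-sided chi-square comparison of the observed analyser standard
    deviation s_A with the guaranteed value s_g. Illustration only; the
    normative test statistics (delta, Q, Z) are defined in D.16."""
    chi2 = dof * (s_a ** 2) / (s_g ** 2)
    chi2_crit = stats.chi2.ppf(1 - alpha, dof)
    return {"chi2": chi2, "chi2_crit": chi2_crit,
            "worse_than_guarantee": chi2 > chi2_crit}
```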
10.2.5.2 Three-instrument test
10.2.5.2.1 Visual assessment
Plot graphs of
— analyser values, X_Ai, against reference 1 values, R_1i;
— analyser values, X_Ai, against reference 2 values, R_2i;
— reference 1 values, R_1i, against reference 2 values, R_2i.
Assess visually for the presence of possible outliers and other problems (see D.9.2).
It is recommended that the other graphical procedures outlined in D.9 also be carried out before a full
statistical analysis of the data is undertaken.
10.2.5.2.2 Outliers
Check for outliers by the statistical procedure of Cochran’s criterion (see D.10.2). Consider possible outliers
identified in this way for removal in accordance with the criteria defined in D.10.3. Remove those values that
meet the criteria.
10.2.5.2.3 Precision
Using the technique of Grubbs' estimators, determine the following for the analyser and the two reference methods:
— the variance, V_A, V_R1, V_R2 (see D.17);
— the standard deviation, s_A, s_R1, s_R2 (see D.17);
— the precision, P_A, P_R1, P_R2 (see D.17).
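The classical Grubbs estimators obtain each instrument's error variance from the covariance of the two series of differences that involve that instrument; the sketch below follows that classical form, while D.17 gives the normative formulas. Reporting the precision as twice the standard deviation is an assumed convention introduced for the example.

```python
import numpy as np

def grubbs_estimators(x_a, r1, r2):
    """Classical Grubbs estimators for a three-instrument test: the error
    variance of each instrument is the sample covariance of the two
    difference series involving that instrument. Sketch only; D.17 gives
    the normative formulas, and P = 2*s is an assumed convention."""
    x_a, r1, r2 = (np.asarray(v, float) for v in (x_a, r1, r2))
    d_a1, d_a2, d_12 = x_a - r1, x_a - r2, r1 - r2

    def cov(u, v):
        return np.cov(u, v, ddof=1)[0, 1]

    estimates = {
        "A":  cov(d_a1, d_a2),      # V_A : analyser error variance
        "R1": cov(-d_a1, d_12),     # V_R1: reference method 1 error variance
        "R2": cov(-d_a2, -d_12),    # V_R2: reference method 2 error variance
    }
    results = {}
    for name, v in estimates.items():
        s = float(np.sqrt(v)) if v > 0 else float("nan")  # negative estimates can occur
        results[name] = {"V": float(v), "s": s, "P": 2.0 * s}
    return results
```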
10.2.5.2.4 Performance guarantee
Test for significance the difference between the performance indicator of the analyser, s_A, and the manufacturer's declared guarantee, s_g (see D.16).
10.2.6 Results and interpretation
Record the following information:
 date of the test;
 identity of the product;
 relevant instrument parameters;
 conditions of the test;
 values of the individual observations;
 values calculated in 10.2.5;
 results of the performance guarantee tests.
In the absence of bias, the precision, P_A, is the measure of the accuracy of the analyser for the test units interrogated during the chosen comparison period. P_R1 and P_R2 are the corresponding values of precision for the two reference test methods.
10.3 Determination of comparative dynamic precision
10.3.1 General
The comparative dynamic precision of the analyser is a less rigorous measure of analyser performance and is
determined by a comparative test method that includes the errors due to the reference test method. This
involves a simpler procedure than for analyser dynamic precision; it is more readily applied to regular
monitoring of measurement performance.
10.3.2 Objectives
The test method and methods of data analysis described in 10.3 are designed to achieve two objectives:
 determination of an indicator of measurement performance;
 testing it for significant change from a previous value.
10.3.3 Test conditio
...
