Analyser systems - Maintenance management

IEC TR 62010:2016(E) provides an understanding of analyser maintenance principles and approaches. It is designed as a reference source for individuals closely involved with maintenance of analytical instrumentation, and provides guidance on performance target setting, strategies to improve reliability, methods to measure effective performance, and the organisations, resources and systems that need to be in place to allow this to occur.

General Information

Status: Published
Publication Date: 14-Dec-2016
Current Stage: PPUB - Publication issued
Start Date: 15-Dec-2016
Completion Date: 28-Feb-2017

Relations

Technical report: IEC TR 62010:2016 - Analyser systems - Maintenance management
Released: 12/15/2016
ISBN: 9782832236840
English language, 69 pages

Standards Content (Sample)


IEC TR 62010 ®
Edition 2.0 2016-12
TECHNICAL REPORT
colour inside
Analyser systems – Maintenance management

All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form
or by any means, electronic or mechanical, including photocopying and microfilm, without permission in writing from
either IEC or IEC's member National Committee in the country of the requester. If you have any questions about IEC
copyright or have an enquiry about obtaining additional rights to this publication, please contact the address below or
your local IEC member National Committee for further information.

IEC Central Office
3, rue de Varembé
CH-1211 Geneva 20
Switzerland
Tel.: +41 22 919 02 11
Fax: +41 22 919 03 00
info@iec.ch
www.iec.ch
About the IEC
The International Electrotechnical Commission (IEC) is the leading global organization that prepares and publishes
International Standards for all electrical, electronic and related technologies.

About IEC publications
The technical content of IEC publications is kept under constant review by the IEC. Please make sure that you have the
latest edition; a corrigendum or an amendment might have been published.

IEC Catalogue - webstore.iec.ch/catalogue
The stand-alone application for consulting the entire bibliographical information on IEC International Standards, Technical Specifications, Technical Reports and other documents. Available for PC, Mac OS, Android Tablets and iPad.

IEC publications search - www.iec.ch/searchpub
The advanced search enables to find IEC publications by a variety of criteria (reference number, text, technical committee,…). It also gives information on projects, replaced and withdrawn publications.

IEC Just Published - webstore.iec.ch/justpublished
Stay up to date on all new IEC publications. Just Published details all new publications released. Available online and also once a month by email.

Electropedia - www.electropedia.org
The world's leading online dictionary of electronic and electrical terms containing 20 000 terms and definitions in English and French, with equivalent terms in 15 additional languages. Also known as the International Electrotechnical Vocabulary (IEV) online.

IEC Glossary - std.iec.ch/glossary
65 000 electrotechnical terminology entries in English and French extracted from the Terms and Definitions clause of IEC publications issued since 2002. Some entries have been collected from earlier publications of IEC TC 37, 77, 86 and CISPR.

IEC Customer Service Centre - webstore.iec.ch/csc
If you wish to give us your feedback on this publication or need further assistance, please contact the Customer Service Centre: csc@iec.ch.

INTERNATIONAL ELECTROTECHNICAL COMMISSION
ICS 71.040.01 ISBN 978-2-8322-3684-0

– 2 – IEC TR 62010:2016 © IEC 2016
CONTENTS
FOREWORD . 5
INTRODUCTION . 7
1 Scope . 9
1.1 Purpose . 9
1.2 Questions to be addressed . 9
2 Normative references . 10
3 Terms and definitions . 10
4 Classifying analysers using a risk based approach . 15
4.1 General . 15
4.2 Safety protection . 17
4.3 Environmental protection . 17
4.4 Asset protection . 19
4.5 Profit maximisation . 19
4.6 Performance target . 20
4.7 Maintenance priority . 21
4.8 Support priority . 21
5 Maintenance strategies . 21
5.1 General . 21
5.2 Reliability centred maintenance (RCM) . 21
5.2.1 General . 21
5.2.2 Reactive maintenance . 22
5.2.3 Preventative or planned maintenance (PM) . 22
5.2.4 Condition based strategy . 23
5.2.5 Proactive maintenance . 23
5.2.6 Optimising maintenance strategy . 23
5.3 Management systems/organisation . 24
5.4 Training/competency . 26
5.4.1 General . 26
5.4.2 Training needs . 26
5.4.3 Selecting trainees . 26
5.4.4 Types of training . 26
5.4.5 Vendor training . 27
5.4.6 Classroom training . 27
5.4.7 Technical societies . 27
5.4.8 User training . 27
5.4.9 Retraining . 28
5.5 Optimal resourcing . 28
5.5.1 General . 28
5.5.2 Equivalent analyser per technician (EQAT) calculation method . 29
5.5.3 Ideal number of technicians . 29
5.5.4 In-house or contracted out maintenance . 30
5.5.5 Off-site technical support requirement . 31
5.6 Best practice benchmarking . 31
5.7 Annual analyser key performance indicator (KPI) review . 31
6 Analyser performance monitoring . 32
6.1 General . 32

6.2 Recording failures – reason/history codes . 33
6.2.1 General . 33
6.2.2 Typical failure pattern . 33
6.3 SPC/proof checking . 35
6.3.1 Analyser control charting . 35
6.3.2 Control chart uncertainty limits . 37
6.4 Analyser performance indicators . 38
6.4.1 Key performance indicators (KPI) . 38
6.4.2 Additional analyser performance indicators . 39
6.4.3 Points to consider in measurement of analyser availability . 40
6.4.4 Points to consider in measurement of operator utilisation . 42
6.4.5 Points to consider in measurement of analyser benefit value . 43
6.4.6 Deriving availability, utilisation and benefit measurement . 43
6.4.7 Optimising analyser performance targets . 44
6.4.8 Analyser maintenance cost against benefit . 48
6.5 Analyser performance reporting . 48
Annex A (informative) Equivalent analyser per technician (EQAT) . 50
A.1 Part 1 – Calculated technician number worksheet . 50
A.2 Part 2 – Equivalent analyser inventory worksheet calculation methodology . 50
A.3 Part 3 – Equivalent analyser inventory worksheet . 52
Annex B (informative) Example interpretation of control chart readings . 57
Annex C (informative) Determination of control chart limits by measuring standard
deviations of differences . 59
Annex D (informative) Adopting a maintenance strategy . 61
Annex E (informative) Examples of analyser cost against benefit and analyser
performance monitoring reports . 62
Annex F (informative) Typical reports for analyser performance monitoring. 67
Bibliography . 69

Figure 1 – Flow path detailing interrelationships of subject matter in IEC TR 62010 . 7
Figure 2 – Generalized risk graph . 16
Figure 3 – Failure mode pattern . 24
Figure 4 – Organisation of analyser functions . 25
Figure 5 – Relative maintenance costs . 30
Figure 6 – Life cycle diagram . 34
Figure 7 – Reliability centred maintenance failure patterns . 35
Figure 8 – Control charting diagram . 36
Figure 9 – Examples of analyser results . 37
Figure 10 – Example of control charting with linear interpretation. 41
Figure 11 – Deriving availability, utilisation and benefit measurement . 43
Figure B.1 – Example of accurately distributed control chart reading . 57
Figure B.2 – Example of biased control chart reading . 57
Figure B.3 – Example of drifting control chart reading . 58
Figure B.4 – Example of control chart reading, value outside warning limit . 58
Figure C.1 – Example determination of control chart limits by measuring standard
deviations . 60

Figure D.1 – Determining appropriate maintenance strategy . 61
Figure E.1 – Achievable availability against manning . 66
Figure E.2 – Achievable benefit against manning . 66
Figure F.1 – Uptime in Plant "A". 67

Table 1 – Typical application of elements in the risk graph. 17
Table 2 – Best practice availability targets . 20
Table 3 – Example agenda for a KPI review meeting . 32
Table C.1 – Example distillation analyser data for determining control chart limits . 59
Table E.1 – Analyser costs versus benefits . 62
Table E.2 – Analyser technician resources. 64
Table E.3 – Technician skill and experience data . 64
Table E.4 – Variation of availability with manning levels and overtime . 64
Table E.5 – Sitewide average analyser data . 65
Table F.1 – Results of analyser performance in Plant "A". 68

INTERNATIONAL ELECTROTECHNICAL COMMISSION
____________
ANALYSER SYSTEMS – MAINTENANCE MANAGEMENT

FOREWORD
1) The International Electrotechnical Commission (IEC) is a worldwide organization for standardization comprising
all national electrotechnical committees (IEC National Committees). The object of IEC is to promote
international co-operation on all questions concerning standardization in the electrical and electronic fields. To
this end and in addition to other activities, IEC publishes International Standards, Technical Specifications,
Technical Reports, Publicly Available Specifications (PAS) and Guides (hereafter referred to as “IEC
Publication(s)”). Their preparation is entrusted to technical committees; any IEC National Committee interested
in the subject dealt with may participate in this preparatory work. International, governmental and non-
governmental organizations liaising with the IEC also participate in this preparation. IEC collaborates closely
with the International Organization for Standardization (ISO) in accordance with conditions determined by
agreement between the two organizations.
2) The formal decisions or agreements of IEC on technical matters express, as nearly as possible, an international
consensus of opinion on the relevant subjects since each technical committee has representation from all
interested IEC National Committees.
3) IEC Publications have the form of recommendations for international use and are accepted by IEC National
Committees in that sense. While all reasonable efforts are made to ensure that the technical content of IEC
Publications is accurate, IEC cannot be held responsible for the way in which they are used or for any
misinterpretation by any end user.
4) In order to promote international uniformity, IEC National Committees undertake to apply IEC Publications
transparently to the maximum extent possible in their national and regional publications. Any divergence
between any IEC Publication and the corresponding national or regional publication shall be clearly indicated in
the latter.
5) IEC itself does not provide any attestation of conformity. Independent certification bodies provide conformity
assessment services and, in some areas, access to IEC marks of conformity. IEC is not responsible for any
services carried out by independent certification bodies.
6) All users should ensure that they have the latest edition of this publication.
7) No liability shall attach to IEC or its directors, employees, servants or agents including individual experts and
members of its technical committees and IEC National Committees for any personal injury, property damage or
other damage of any nature whatsoever, whether direct or indirect, or for costs (including legal fees) and
expenses arising out of the publication, use of, or reliance upon, this IEC Publication or any other IEC
Publications.
8) Attention is drawn to the Normative references cited in this publication. Use of the referenced publications is
indispensable for the correct application of this publication.
9) Attention is drawn to the possibility that some of the elements of this IEC Publication may be the subject of
patent rights. IEC shall not be held responsible for identifying any or all such patent rights.
The main task of IEC technical committees is to prepare International Standards. However, a
technical committee may propose the publication of a Technical Report when it has collected
data of a different kind from that which is normally published as an International Standard, for
example "state of the art".
IEC TR 62010, which is a Technical Report, has been prepared by subcommittee 65B:
Measurement and control devices, of IEC technical committee 65: Industrial-process
measurement, control and automation.
This second edition cancels and replaces the first edition published in 2005. This edition
constitutes a technical revision.
This edition includes the following significant technical changes with respect to the previous
edition:
a) addition of data, examples and clarifications.
EEMUA Publication 187: 2013 – Analyser systems: A guide to maintenance management, has
served as a basis for the elaboration of this Technical Report, with the permission of the
Engineering and Equipment Users Association.

The text of this Technical Report is based on the following documents:
Enquiry draft: 65B/990/DTR
Report on voting: 65B/1063/RVC
Full information on the voting for the approval of this Technical Report can be found in the
report on voting indicated in the above table.
This document has been drafted in accordance with the ISO/IEC Directives, Part 2.
The committee has decided that the contents of this document will remain unchanged until the
stability date indicated on the IEC website under "http://webstore.iec.ch" in the data related to
the specific document. At this date, the document will be
• reconfirmed,
• withdrawn,
• replaced by a revised edition, or
• amended.
A bilingual version of this publication may be issued at a later date.

IMPORTANT – The 'colour inside' logo on the cover page of this publication indicates
that it contains colours which are considered to be useful for the correct
understanding of its contents. Users should therefore print this document using a
colour printer.
INTRODUCTION
This document covers best practices for the maintenance of on-line analysers. Analysers are
used in industry to measure variables which significantly contribute to safety, environmental,
asset protection and profit maximisation.
Maintenance organisation, prioritising of maintenance effort, maintenance methods, correct
resourcing, performance monitoring and reporting all play an important role in successful
application of on-line analysers.
The ultimate effectiveness of the contribution of on-line analysers is measured by the ability to
perform their functional requirements upon demand. This document gives guidance on
performance target setting, strategies to improve reliability, methods to measure effective
performance, and the organisations, resources and systems that need to be in place to allow
this to occur.
The various subjects covered in this document are discrete items and can appear unrelated in
the overall scheme of analyser maintenance procedures and strategies. The following flow
path in Figure 1 ties the clauses together in a logical sequence of approach.
[Figure 1 shows the flow path: establish analyser criticality (Clause 4); define the
maintenance strategy (Clause 5); review criticality and maintenance strategy with
operations/customers (Subclauses 5.5, 5.6 and 5.7); monitor analyser performance using
defined measurement parameters (Clause 6).]
Figure 1 – Flow path detailing interrelationships
of subject matter in IEC TR 62010
This document provides a mechanism by which the criticality of an analyser can be
determined by means of a risk assessment. The risk assessment is based on consideration of
the consequence of the loss of the analysis to the operation of a process unit, or group of
process units, personnel/plant safety and the environment.
Determination of a criticality rating for the analyser allows target values for reliability to be set
for each criticality classification and prioritisation for maintenance and support. Such
approaches are covered in Clause 4.

A number of strategies designed to allow the target reliabilities calculated by the risk
assessments to be met are defined in Clause 5.
Finally, mechanisms for tracking analyser performance and quantifying the performance as
meaningful measures are presented in Clause 6.

ANALYSER SYSTEMS – MAINTENANCE MANAGEMENT

1 Scope
1.1 Purpose
This document is written with the intention of providing an understanding of analyser
maintenance principles and approaches. It is designed as a reference source for individuals
closely involved with maintenance of analytical instrumentation, and provides guidance on
performance target setting, strategies to improve reliability, methods to measure effective
performance, and the organisations, resources and systems that need to be in place to allow
this to occur.
Effective management of on-line analysers is only possible when key criteria have been
identified and tools for measuring these criteria established.
On-line analysers are used in industry for the following reasons:
• Safety and environmental. One category of on-line analysers comprises those used to
control and monitor safety and environmental systems. The key measured parameter for this
category of analyser is on-line time. This is essentially simpler to measure than an
analyser's contribution to profits, but as with process analysers applied for profit
maximisation, the contribution will be dependent upon the analyser's ability to perform its
functional requirements on demand.
• Asset protection and profit maximisation. On-line analysers falling into this category
are normally those impacting directly on process control. They can impact directly on
protection of assets (e.g. corrosion, catalyst contamination) or product quality, or can be
used to optimise the operation of the process (e.g. energy efficiency). For this category of
analysers, the key measured parameter is either the cost of damage to plant or the direct
effect on overall profit of the process unit. Justification as to whether an analyser is
installed on the process can be sought by quantifying the payback time of the analyser,
the pass/fail target typically being 18 months. The contribution of the analyser to reduction
in extent of damage to, or the profit of, the process unit, is difficult to measure. However,
this contribution will be dependent upon the analyser’s ability to perform its functional
requirements upon demand.
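The payback-time justification mentioned above (a typical pass/fail target of 18 months) can be sketched as a simple calculation. This is an illustrative sketch only: the report cites the target but does not prescribe a calculation method, and the cost and benefit figures below are hypothetical.

```python
def payback_months(installed_cost: float, annual_benefit: float) -> float:
    """Simple payback time in months: installed cost recovered by annual benefit."""
    if annual_benefit <= 0:
        return float("inf")  # analyser never pays back
    return 12.0 * installed_cost / annual_benefit

def justified(installed_cost: float, annual_benefit: float,
              target_months: float = 18.0) -> bool:
    """Pass/fail against the typical 18-month payback target cited in 1.1."""
    return payback_months(installed_cost, annual_benefit) <= target_months

# Hypothetical analyser: 120 000 installed cost, 90 000/year benefit
print(payback_months(120_000, 90_000))  # 16.0 (months)
print(justified(120_000, 90_000))       # True
```

In practice the annual benefit is the hard part to quantify, as the surrounding text notes; the calculation itself is trivial once that figure is agreed with operations.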
This document focuses on the cost/benefits associated with traditional analyser maintenance
organisations. Due to the complexity of modern analysers, support can be required from
laboratory or product quality specialists, for example for chemometric models, who can work
for other parts of the organisation. Inclusion of their costs in the overall maintenance cost is
therefore important.
1.2 Questions to be addressed
When considering on-line analyser systems and their maintenance, the following key points
list is useful in helping decide where gaps exist in the maintenance strategy.
• What is the uptime of each critical analyser? Do you measure uptime and maintain
records? Do you know the value provided by each analyser and therefore which ones are
critical? Do you meet regularly with operations (‘the customer’) to review priorities?
• What is the value delivered by each analyser in terms of process performance
improvement (i.e. improved yield values, improved quality, improved manufacturing
cycle time and/or process cycle time, process safety (e.g. interlocks), environmental
importance)? Is this information readily available and agreed to in meetings with
operations? Is the value updated periodically?

• What is the utilisation of each critical analyser? That is, if the analyser is used in a
control loop, what percentage of the time is the loop on manual due to questions about the
analyser data? Do you keep records on the amount of time that analyser loops are in
automatic? Do you meet regularly with operations to review the operator’s views about the
plausibility of the analyser data?
• Do you have a regular preventive maintenance programme set up for each analyser
which includes regular calibrations? Does the calibration/validation procedure include
statistical process control (SPC) concepts – upper/lower limits and measurement of
analyser variability (or noise)? Is the procedure well documented? Do you conduct it
regularly, even when things are running well?
• Do you have trained personnel (capable of performing all required procedures and
troubleshooting the majority of analyser problems) who are assigned responsibility
for the analysers? Do the trained personnel understand the process? Do they understand
any lab measurements which relate to the analyser results?
• Do the trained maintenance personnel have access to higher level technical support
as necessary for difficult analyser and/or process problems? Do they have ready
access to the individual who developed the application? Do they have ready access to the
vendor? Can higher level support personnel connect remotely to the analyser to observe
and troubleshoot?
• Do you have a maintenance record keeping system which documents all activity
involving the analysers, including all calibration/validation records and all repairs
and/or adjustments?
• Do you use the record keeping system to identify repetitive failure modes and to
determine the root cause of failures? Do you track the average time-to-repair analyser
problems? Do you track average time-between-failures for each analyser?
• Do you periodically review the analysers with higher level technical resources to
identify opportunities to significantly improve performance by upgrading the
analyser system with improved technology or a simpler/more reliable approach?
• Do you meet regularly with operations personnel to review analyser performance,
update priorities, and understand production goals?
• Do you have a management framework that understands the value of the analysers
and is committed to and supportive of reliable analysers?
• Do you know how much the maintenance programme costs each year and is there a
solid justification for it?
Consideration of the above questions will help to identify opportunities for continuously
improving the reliability of installed process analysers. Once the opportunities are identified
the following clauses are intended to give guidance in achieving the solutions with the aim of:
• maximising performance and benefit of installed analysers;
• achieving full operator confidence in the use of on-line analysers;
• analyser output data becoming reliable enough to be used by operators, control systems
and other users, in order to improve plant operation against world-class manufacturing
metrics and become the best process analysers possible.
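The record-keeping questions above ask for average time-to-repair and time-between-failures per analyser. A minimal sketch of that bookkeeping, computed from timestamped failure/restoration records, could look as follows; the record format and field layout are assumptions for illustration, not a structure defined in this report.

```python
from datetime import datetime

def analyser_kpis(events, period_hours):
    """Compute (MTTR, MTBF, availability) for one reporting period.

    events: list of (failed_at, restored_at) datetime pairs within the period.
    period_hours: length of the reporting period in hours.
    """
    # downtime of each failure, in hours
    downtimes = [(r - f).total_seconds() / 3600.0 for f, r in events]
    total_down = sum(downtimes)
    n = len(events)
    mttr = total_down / n if n else 0.0          # mean time to restoration
    uptime = period_hours - total_down
    mtbf = uptime / n if n else float("inf")     # mean up-time between failures
    availability = uptime / period_hours         # fraction of period on-line
    return mttr, mtbf, availability

# Hypothetical month (720 h) with two failures totalling 12 h of downtime
events = [(datetime(2016, 1, 5, 8), datetime(2016, 1, 5, 16)),
          (datetime(2016, 1, 20, 10), datetime(2016, 1, 20, 14))]
mttr, mtbf, avail = analyser_kpis(events, 720.0)
print(mttr, mtbf, round(avail, 3))  # 6.0 354.0 0.983
```

Feeding such figures into the KPI review meetings described in 5.7 and 6.4 is what turns raw maintenance records into the repetitive-failure and root-cause analysis the questions above call for.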
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.

ISO and IEC maintain terminological databases for use in standardization at the following
addresses:
• IEC Electropedia: available at http://www.electropedia.org/
• ISO Online browsing platform: available at http://www.iso.org/obp
3.1
availability
ability of an item to be in a state to perform a required function under given conditions at a
given instant of time or over a given time interval, assuming that the required external
resources are provided
3.2
catastrophic failure
failure of a component, equipment or system in which its particular performance characteristic
moves completely to one or the other of the extreme limits outside the normal specification
range
3.3
consequence
measure of the expected effects of an incident outcome case
3.4
control system
system which responds to input signals from the process and/or from an operator and
generates signals causing the equipment under control (EUC) to operate in the desired
manner
3.5
diversity
performance of the same overall function by a number of independent and different means
3.6
error
discrepancy between a computed, observed or measured value or condition and the true,
specified or theoretically correct value or condition
[SOURCE: IEC 60050-192:2015, 192-03-02, modified — the notes have been deleted]
3.7
failure
termination of the ability of an item to perform a required function
[SOURCE: IEC 60050-603:1986, 603-05-06]
3.8
fault
state of an item characterized by the inability to perform a required function, excluding the
inability during preventive maintenance or other planned actions, or due to lack of external
resources
3.9
design fault
fault in the design caused by a mistake in the design phase of a system
Note 1 to entry: A design fault causes an error, remaining undetected in a part of the system until specific
conditions affecting that part of the system are such that the produced result does not conform to the intended
function. This results in a failure of that part of the system. If the conditions appear again, the same results will be
produced.
3.10
undetected fault
fault which is not detected by a diagnostic check
3.11
mistake
human error
human action that produces an unintended result
3.12
failed state
condition of a component, equipment or system during the time it is subject to a failure
3.13
fault tree analysis
analysis to determine which fault modes of the sub-items or external events, or combinations
thereof, may result in a stated fault mode of the item, presented in the form of a fault tree
3.14
functional safety
ability of a safety related system to carry out the actions necessary to achieve a safe state for
the EUC or to maintain the safe state for the EUC
3.15
hazard
physical situation with a potential for human injury
3.16
maintainability
ability of an item under given conditions of use to be retained in or restored to a state in which
it can perform a required function, when maintenance is performed under given conditions
and using stated procedures and resources
[SOURCE: IEC 60050-192:2015, 192-01-27, modified]
3.17
mean time between failures
MTBF
expectation of the duration of the operating time between failures
[SOURCE: IEC 60050-192:2015, 192-05-13, modified — "operating" is omitted from the
definition and the note has been deleted]
3.18
mean time to failure
MTTF
expectation of the operating time to failure
[SOURCE: IEC 60050-192:2015, 192-05-11, modified — "operating" is omitted from the
definition and the notes have been deleted]
3.19
mean time to restoration
MTTR
expectation of the time to restoration
[SOURCE: IEC 60050-192:2015, 192-07-23, modified — the note has been deleted]

3.20
proof testing
method of ensuring that a component, equipment or system possesses all the required
performance characteristics and is capable of responding in the manner desired
3.21
random hardware failure
failure occurring at a random time, which results from one or more of the possible degradation
mechanisms in the hardware
Note 1 to entry: There are many degradation mechanisms occurring at different rates in different components,
and, since manufacturing tolerances cause components to fail due to these mechanisms after different times in
operation, failures of equipment comprising many components occur at predictable rates but at unpredictable (i.e.
random) times.
Note 2 to entry: A major distinguishing feature between random hardware failures and systematic failures, is that
system failure rates (or other appropriate measures), arising from random hardware failures, can be predicted with
reasonable accuracy but systematic failures, by their very nature cannot be predicted. That is, system failure
arising from random hardware failure rates can be quantified with reasonable accuracy but those arising from
systematic failure cannot be accurately quantified because events leading to them cannot easily be predicted.
3.22
redundancy
in an item, the existence of more than one means for performing a required function
[SOURCE: IEC 60050-351:2013, 351-42-28, modified — the notes have been deleted]
3.23
reliability
ability of an item to perform a required function under given conditions for a given time
interval
[SOURCE: IEC 60050-395:2014, 395-07-131, modified — the notes have been deleted]
3.24
risk
probable rate of occurrence of a hazard causing harm and the degree of severity of harm
Note 1 to entry: The concept of risk always has two elements: the frequency or probability at which a hazard
occurs and the consequences of the hazard event.
3.25
safety
freedom from unacceptable risk of harm
3.26
safety integrity
SI
probability of a safety related system satisfactorily performing the required safety functions
under all the stated conditions within a stated period of time
3.27
safety integrity level
SIL
one of four possible discrete levels for specifying the safety integrity requirements of the
safety functions to be allocated to the safety related systems
Note 1 to entry: SIL 4 has the highest level of safety integrity; SIL 1 has the lowest.

3.28
safety-related system
system that:
• implements the required safety functions to achieve a safe state for the EUC or to
maintain a safe state for the EUC;
• is intended to achieve, on its own, or with other safety-related systems, the necessary
level of integrity for the implementation of the required safety functions
3.29
safety-related control system
system which carries out active control of the EUC and which has the potential, if not in
accordance with its design intent, to enter an unsafe state
3.30
safety-related protection system
SRPS
system designed to respond to conditions on the EUC, which may also be hazardous, or if no
action was taken, could give rise to hazardous events, and to generate the correct outputs to
mitigate the hazardous consequences or prevent the hazardous events
3.31
safety requirements specification
specification that contains all the requirements of the safety functions that have to be
performed by the safety-related systems
Note 1 to entry: The specification is divided into:
• safety functions requirement specification;
• safety integrity requirement specification.
3.32
software
intellectual creation comprising the programmes, procedures, rules and any associated
documentation pertaining to the operation of a data processing system
3.33
system
set of components which interact according to a design
Note 1 to entry: A component may be another system (a subsystem). Such components (subsystems) may be,
depending on the level:
• a controlling or controller system,
• hardware, software, human interaction.
3.34
systematic failure
failure related in a deterministic way to a certain cause, which can only be eliminated by a
modification of the design or of the manufacturing process, operational procedures,
documentation or other relevant factors
[SOURCE: IEC 60050-395:2014, 395-07-133]
3.35
system life cycle
activities occurring during a period of time that starts when a system is conceived and ends
when the system is no longer available

3.36
top event
unwanted event or incident at the ’top’ of a fault tree that is traced downward to more basic
failures using logic gates to determine its causes and likelihood
3.37
validation
confirmation by examination and provision of objective evidence that the particular
requirements for a specific intended use are fulfilled
3.38
verification
confirmation by examination and provision of objective evidence that the specified
requirements have been fulfilled
4 Classifying analysers using a risk based approach
4.1 General
Assigning on-line analysers to the functional categories of safety, environmental, asset
protection or profit maximisation requires a means of determining, for each instrument, the
priority for performance target setting and maintenance effort according to its designed
functional category. This can be achieved using a risk graph, whereby the target category
rating of an analyser is derived from the required risk factor. Both the hazard rate of the event
the analyser is designed to protect against (the so-called top event) and the consequence of
that event should be known.
The method follows the principle and general format of the risk graph approach of
IEC 61508-5 [2]. However, as this document is aimed at analyser maintenance priorities, it
should be noted that:
• where analysers are part of a safety system it is not an alternative approach to
determining safety integrity levels (SILs) and where SILs demand certain proof checking
periods, duplication of analysers etc., these will take precedence;
• the ranking system adopted is in line with accepted analyser maintenance practice, i.e.
highest priority is ’1’ and lowest priority is ‘3’.
_________
Numbers in square brackets refer to the Bibliography.

– 16 – IEC TR 62010:2016 © IEC 2016
[Figure 2 is the generalized risk graph: from the starting point for priority assessment,
successive branches on the risk parameter (R), the exposure parameter (E) and the
intervention parameter (I) select a row whose outcome is read under the process demand
columns PD1, PD2 and PD3.]
R = risk parameter
E = exposure parameter
I = intervention parameter
PD = process demand parameter
1, 2, 3 = priority level
a = special maintenance consideration
b = no particular priority requirement
Figure 2 – Generalized risk graph
Using the generalized risk graph shown in Figure 2, each design functional category is
considered in turn. The risk graph for each analyser function should be ’calibrated’. This is
best achieved by defining the consequences for failures, then evaluating a number of
scenarios. The exercise will establish if the outcome in terms of risk reduction is appropriate
to the applications.
At the starting point of working through the risk graph towards priority setting it is necessary
to establish the initial element which is the risk parameter (R), i.e. the main area of impact
associated with analyser failure, for example plant damage, loss of profit, environmental
damage, and serious injury/loss of life. The second element is applied on judgements of
importance of the analyser in keeping the plant running and is termed the exposure
parameter (E), for example high risk of immediate/short term damage, plant control scheme
ability to function, environmental consent limitations, area sensitivity, or frequency of
exposure of personnel to hazard. The third element is the intervention parameter (I), which is
an assessment of whether operator intervention can mitigate the impact of the failure or not.
The graph then leads to the prioritisation box which gives priority choice based on the process
demand parameter (PD), i.e. the likelihood of the process requiring the measurement when a
failure occurs. The following Table 1 summarises a typical application of elements in the risk
graph and explanatory notes are given in 4.1, 4.2, 4.3 and 4.4.
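The walk through the risk graph amounts to a table lookup. The sketch below illustrates the idea only; the parameter names and branch outcomes are invented for the example, and real outcomes are read from the site-calibrated risk graph of Figure 2.

```python
# Illustrative only: the branch outcomes below are invented for this sketch;
# real outcomes come from the site-calibrated risk graph (Figure 2).
RISK_GRAPH = {
    # (R, E, I) -> outcome per process demand column PD1..PD3, where the
    # outcome is a priority level "1".."3", "a" = special maintenance
    # consideration, or "b" = no particular priority requirement.
    ("serious",  "frequent", "unlikely"): {"PD1": "a", "PD2": "a", "PD3": "1"},
    ("serious",  "rare",     "possible"): {"PD1": "1", "PD2": "1", "PD3": "2"},
    ("moderate", "frequent", "unlikely"): {"PD1": "1", "PD2": "2", "PD3": "2"},
    ("moderate", "rare",     "possible"): {"PD1": "2", "PD2": "2", "PD3": "3"},
    ("minor",    "rare",     "possible"): {"PD1": "2", "PD2": "3", "PD3": "b"},
}

def maintenance_priority(risk, exposure, intervention, demand):
    """R, E and I select a row of the graph; PD selects the column."""
    return RISK_GRAPH[(risk, exposure, intervention)][demand]

print(maintenance_priority("serious", "frequent", "unlikely", "PD3"))  # -> 1
```

Calibrating the graph for a site then amounts to agreeing the row definitions and the outcome entries with operations before such a lookup is used.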

Table 1 – Typical application of elements in the risk graph
R – Safety: multiple fatalities on or off site. Environmental: release causing permanent
damage or major clean-up costs. Asset protection: damage with major replacement costs.
Profit maximisation: production profit margins high.
R – Safety: fatality on or off site, injury (resulting in hospitalisation of a member of the
public or staff). Environmental: release causing temporary damage requiring significant
clean-up. Asset protection: damage with moderate replacement costs. Profit maximisation:
production profit margins medium.
R – Safety: minor injury with lost time impact. Environmental: release with minor damage
which should be recorded, or failure to record critical data. Asset protection: damage with
minor replacement costs or no damage. Profit maximisation: production profit margins low.
E – Safety: frequency of exposure to the hazard is more frequent to permanent.
Environmental: consent restrictions and/or sensitive area. Asset protection: high risk of
immediate/short term damage. Profit maximisation: control scheme cannot function.
E – Safety: frequency of exposure to hazard is rare to more often. Environmental: no consent
restrictions and/or non-sensitive area. Asset protection: low risk of immediate/short term
damage. Profit maximisation: control scheme can function in short term.
I – All categories: unlikely that operator action will prevent or mitigate circumstances.
I – All categories: possible for operator to take acti…


IEC TR 62010 ®
Edition 2.0 2016-12
TECHNICAL
REPORT
colour
inside
Analyser systems – Maintenance management

INTERNATIONAL
ELECTROTECHNICAL
COMMISSION
ICS 71.040.01 ISBN 978-2-8322-3684-0

CONTENTS
FOREWORD . 5
INTRODUCTION . 7
1 Scope . 9
1.1 Purpose . 9
1.2 Questions to be addressed . 9
2 Normative references . 10
3 Terms and definitions . 10
4 Classifying analysers using a risk based approach . 15
4.1 General . 15
4.2 Safety protection . 17
4.3 Environmental protection . 17
4.4 Asset protection . 19
4.5 Profit maximisation . 19
4.6 Performance target . 20
4.7 Maintenance priority . 21
4.8 Support priority . 21
5 Maintenance strategies . 21
5.1 General . 21
5.2 Reliability centred maintenance (RCM) . 21
5.2.1 General . 21
5.2.2 Reactive maintenance . 22
5.2.3 Preventative or planned maintenance (PM) . 22
5.2.4 Condition based strategy . 23
5.2.5 Proactive maintenance . 23
5.2.6 Optimising maintenance strategy . 23
5.3 Management systems/organisation . 24
5.4 Training/competency . 26
5.4.1 General . 26
5.4.2 Training needs . 26
5.4.3 Selecting trainees . 26
5.4.4 Types of training . 26
5.4.5 Vendor training . 27
5.4.6 Classroom training . 27
5.4.7 Technical societies . 27
5.4.8 User training . 27
5.4.9 Retraining . 28
5.5 Optimal resourcing . 28
5.5.1 General . 28
5.5.2 Equivalent analyser per technician (EQAT) calculation method . 29
5.5.3 Ideal number of technicians . 29
5.5.4 In-house or contracted out maintenance . 30
5.5.5 Off-site technical support requirement . 31
5.6 Best practice benchmarking . 31
5.7 Annual analyser key performance indicator (KPI) review . 31
6 Analyser performance monitoring . 32
6.1 General . 32

6.2 Recording failures – reason/history codes . 33
6.2.1 General . 33
6.2.2 Typical failure pattern . 33
6.3 SPC/proof checking . 35
6.3.1 Analyser control charting . 35
6.3.2 Control chart uncertainty limits . 37
6.4 Analyser performance indicators . 38
6.4.1 Key performance indicators (KPI) . 38
6.4.2 Additional analyser performance indicators . 39
6.4.3 Points to consider in measurement of analyser availability . 40
6.4.4 Points to consider in measurement of operator utilisation . 42
6.4.5 Points to consider in measurement of analyser benefit value . 43
6.4.6 Deriving availability, utilisation and benefit measurement . 43
6.4.7 Optimising analyser performance targets . 44
6.4.8 Analyser maintenance cost against benefit . 48
6.5 Analyser performance reporting . 48
Annex A (informative) Equivalent analyser per technician (EQAT) . 50
A.1 Part 1 – Calculated technician number worksheet . 50
A.2 Part 2 – Equivalent analyser inventory worksheet calculation methodology . 50
A.3 Part 3 – Equivalent analyser inventory worksheet . 52
Annex B (informative) Example interpretation of control chart readings . 57
Annex C (informative) Determination of control chart limits by measuring standard
deviations of differences . 59
Annex D (informative) Adopting a maintenance strategy . 61
Annex E (informative) Examples of analyser cost against benefit and analyser
performance monitoring reports . 62
Annex F (informative) Typical reports for analyser performance monitoring. 67
Bibliography . 69

Figure 1 – Flow path detailing interrelationships of subject matter in IEC TR 62010 . 7
Figure 2 – Generalized risk graph . 16
Figure 3 – Failure mode pattern . 24
Figure 4 – Organisation of analyser functions . 25
Figure 5 – Relative maintenance costs . 30
Figure 6 – Life cycle diagram . 34
Figure 7 – Reliability centred maintenance failure patterns . 35
Figure 8 – Control charting diagram . 36
Figure 9 – Examples of analyser results . 37
Figure 10 – Example of control charting with linear interpretation. 41
Figure 11 – Deriving availability, utilisation and benefit measurement . 43
Figure B.1 – Example of accurately distributed control chart reading . 57
Figure B.2 – Example of biased control chart reading . 57
Figure B.3 – Example of drifting control chart reading . 58
Figure B.4 – Example of control chart reading, value outside warning limit . 58
Figure C.1 – Example determination of control chart limits by measuring standard
deviations . 60

Figure D.1 – Determining appropriate maintenance strategy . 61
Figure E.1 – Achievable availability against manning . 66
Figure E.2 – Achievable benefit against manning . 66
Figure F.1 – Uptime in Plant "A". 67

Table 1 – Typical application of elements in the risk graph. 17
Table 2 – Best practice availability targets . 20
Table 3 – Example agenda for a KPI review meeting . 32
Table C.1 – Example distillation analyser data for determining control chart limits . 59
Table E.1 – Analyser costs versus benefits . 62
Table E.2 – Analyser technician resources. 64
Table E.3 – Technician skill and experience data . 64
Table E.4 – Variation of availability with manning levels and overtime . 64
Table E.5 – Sitewide average analyser data . 65
Table F.1 – Results of analyser performance in Plant "A". 68

INTERNATIONAL ELECTROTECHNICAL COMMISSION
____________
ANALYSER SYSTEMS – MAINTENANCE MANAGEMENT

FOREWORD
1) The International Electrotechnical Commission (IEC) is a worldwide organization for standardization comprising
all national electrotechnical committees (IEC National Committees). The object of IEC is to promote
international co-operation on all questions concerning standardization in the electrical and electronic fields. To
this end and in addition to other activities, IEC publishes International Standards, Technical Specifications,
Technical Reports, Publicly Available Specifications (PAS) and Guides (hereafter referred to as “IEC
Publication(s)”). Their preparation is entrusted to technical committees; any IEC National Committee interested
in the subject dealt with may participate in this preparatory work. International, governmental and non-
governmental organizations liaising with the IEC also participate in this preparation. IEC collaborates closely
with the International Organization for Standardization (ISO) in accordance with conditions determined by
agreement between the two organizations.
2) The formal decisions or agreements of IEC on technical matters express, as nearly as possible, an international
consensus of opinion on the relevant subjects since each technical committee has representation from all
interested IEC National Committees.
3) IEC Publications have the form of recommendations for international use and are accepted by IEC National
Committees in that sense. While all reasonable efforts are made to ensure that the technical content of IEC
Publications is accurate, IEC cannot be held responsible for the way in which they are used or for any
misinterpretation by any end user.
4) In order to promote international uniformity, IEC National Committees undertake to apply IEC Publications
transparently to the maximum extent possible in their national and regional publications. Any divergence
between any IEC Publication and the corresponding national or regional publication shall be clearly indicated in
the latter.
5) IEC itself does not provide any attestation of conformity. Independent certification bodies provide conformity
assessment services and, in some areas, access to IEC marks of conformity. IEC is not responsible for any
services carried out by independent certification bodies.
6) All users should ensure that they have the latest edition of this publication.
7) No liability shall attach to IEC or its directors, employees, servants or agents including individual experts and
members of its technical committees and IEC National Committees for any personal injury, property damage or
other damage of any nature whatsoever, whether direct or indirect, or for costs (including legal fees) and
expenses arising out of the publication, use of, or reliance upon, this IEC Publication or any other IEC
Publications.
8) Attention is drawn to the Normative references cited in this publication. Use of the referenced publications is
indispensable for the correct application of this publication.
9) Attention is drawn to the possibility that some of the elements of this IEC Publication may be the subject of
patent rights. IEC shall not be held responsible for identifying any or all such patent rights.
The main task of IEC technical committees is to prepare International Standards. However, a
technical committee may propose the publication of a Technical Report when it has collected
data of a different kind from that which is normally published as an International Standard, for
example "state of the art".
IEC TR 62010, which is a Technical Report, has been prepared by subcommittee 65B:
Measurement and control devices, of IEC technical committee 65: Industrial-process
measurement, control and automation.
This second edition cancels and replaces the first edition published in 2005. This edition
constitutes a technical revision.
This edition includes the following significant technical changes with respect to the previous
edition:
a) addition of data, examples and clarifications.
EEMUA Publication 187:2013, Analyser systems: A guide to maintenance management, has
served as a basis for the elaboration of this Technical Report, with the permission of the
Engineering Equipment and Materials Users Association.

The text of this Technical Report is based on the following documents:
Enquiry draft: 65B/990/DTR
Report on voting: 65B/1063/RVC
Full information on the voting for the approval of this Technical Report can be found in the
report on voting indicated in the above table.
This document has been drafted in accordance with the ISO/IEC Directives, Part 2.
The committee has decided that the contents of this document will remain unchanged until the
stability date indicated on the IEC website under "http://webstore.iec.ch" in the data related to
the specific document. At this date, the document will be
• reconfirmed,
• withdrawn,
• replaced by a revised edition, or
• amended.
A bilingual version of this publication may be issued at a later date.

IMPORTANT – The 'colour inside' logo on the cover page of this publication indicates
that it contains colours which are considered to be useful for the correct
understanding of its contents. Users should therefore print this document using a
colour printer.
INTRODUCTION
This document covers best practices for the maintenance of on-line analysers. Analysers are
used in industry to measure variables which significantly contribute to safety, environmental,
asset protection and profit maximisation.
Maintenance organisation, prioritising of maintenance effort, maintenance methods, correct
resourcing, performance monitoring and reporting all play an important role in successful
application of on-line analysers.
The ultimate effectiveness of the contribution of on-line analysers is measured by the ability to
perform their functional requirements upon demand. This document gives guidance on
performance target setting, strategies to improve reliability, methods to measure effective
performance, and the organisations, resources and systems that need to be in place to allow
this to occur.
The various subjects covered in this document are discrete items and can appear unrelated in
the overall scheme of analyser maintenance procedures and strategies. The following flow
path in Figure 1 ties the clauses together in a logical sequence of approach.
[Figure 1 is a flow diagram: establishing analyser criticality (Clause 4) feeds into defining the
maintenance strategy (Clause 5), which in turn feeds into monitoring of analyser performance
using defined measurement parameters (Clause 6); criticality and the maintenance strategy
are reviewed with operations/customers (Subclauses 5.5, 5.6 and 5.7).]
Figure 1 – Flow path detailing interrelationships
of subject matter in IEC TR 62010
This document provides a mechanism by which the criticality of an analyser can be
determined by means of a risk assessment. The risk assessment is based on consideration of
the consequence of the loss of the analysis to the operation of a process unit, or group of
process units, personnel/plant safety and the environment.
Determination of a criticality rating for the analyser allows target values for reliability to be set
for each criticality classification and prioritisation for maintenance and support. Such
approaches are covered in Clause 4.

A number of strategies designed to allow the target reliabilities calculated by the risk
assessments to be met are defined in Clause 5.
Finally, mechanisms for tracking analyser performance and quantifying the performance as
meaningful measures are presented in Clause 6.

ANALYSER SYSTEMS – MAINTENANCE MANAGEMENT

1 Scope
1.1 Purpose
This document is written with the intention of providing an understanding of analyser
maintenance principles and approaches. It is designed as a reference source for individuals
closely involved with maintenance of analytical instrumentation, and provides guidance on
performance target setting, strategies to improve reliability, methods to measure effective
performance, and the organisations, resources and systems that need to be in place to allow
this to occur.
Effective management of on-line analysers is only possible when key criteria have been
identified and tools for measuring these criteria established.
On-line analysers are used in industry for the following reasons:
• Safety and environmental. This category comprises on-line analysers used to control and
monitor safety and environmental systems. The key measured parameter for this category
of analyser is on-line time. This is simpler to measure than an analyser's contribution to
profits but, as with process analysers applied for profit maximisation, the contribution will
depend on the analyser's ability to perform its functional requirements on demand.
• Asset protection and profit maximisation. On-line analysers falling into this category
are normally those impacting directly on process control. They can impact directly on
protection of assets (e.g. corrosion, catalyst contamination) or product quality, or can be
used to optimise the operation of the process (e.g. energy efficiency). For this category of
analysers, the key measured parameter is either the cost of damage to plant or the direct
effect on overall profit of the process unit. Justification as to whether an analyser is
installed on the process can be sought by quantifying the payback time of the analyser,
the pass/fail target typically being 18 months. The contribution of the analyser to reduction
in extent of damage to, or the profit of, the process unit, is difficult to measure. However,
this contribution will be dependent upon the analyser’s ability to perform its functional
requirements upon demand.
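The payback-time justification mentioned above is simple arithmetic. A minimal sketch with invented cost and benefit figures (only the 18-month pass/fail target comes from the text):

```python
# Invented figures; only the 18-month pass/fail target comes from the text.
installed_cost = 120_000.0     # purchase + installation of the analyser
annual_benefit = 100_000.0     # yearly profit / damage avoidance attributed to it
annual_maintenance = 15_000.0  # yearly cost of keeping it available

# Payback time: months needed for the net annual benefit to repay the outlay.
payback_months = 12 * installed_cost / (annual_benefit - annual_maintenance)
justified = payback_months <= 18  # typical pass/fail target

print(f"payback {payback_months:.1f} months, justified: {justified}")
# -> payback 16.9 months, justified: True
```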
This document focuses on the cost/benefits associated with traditional analyser maintenance
organisations. Due to the complexity of modern analysers, support can be required from
laboratory or product quality specialists, for example for chemometric models, who can work
for other parts of the organisation. Inclusion of their costs in the overall maintenance cost is
therefore important.
1.2 Questions to be addressed
When considering on-line analyser systems and their maintenance, the following key points
list is useful in helping decide where gaps exist in the maintenance strategy.
• What is the uptime of each critical analyser? Do you measure uptime and maintain
records? Do you know the value provided by each analyser and therefore which ones are
critical? Do you meet regularly with operations (‘the customer’) to review priorities?
• What is the value delivered by each analyser in terms of process performance
improvement (i.e. improved yield values, improved quality, improved manufacturing
cycle time and/or process cycle time, process safety (e.g. interlocks), environmental
importance)? Is this information readily available and agreed to in meetings with
operations? Is the value updated periodically?

• What is the utilisation of each critical analyser? That is, if the analyser is used in a
control loop, what percentage of the time is the loop on manual due to questions about the
analyser data? Do you keep records on the amount of time that analyser loops are in
automatic? Do you meet regularly with operations to review the operator’s views about the
plausibility of the analyser data?
• Do you have a regular preventive maintenance programme set up for each analyser
which includes regular calibrations? Does the calibration/validation procedure include
statistical process control (SPC) concepts – upper/lower limits and measurement of
analyser variability (or noise)? Is the procedure well documented? Do you conduct it
regularly, even when things are running well?
• Do you have trained personnel (capable of performing all required procedures and
troubleshooting the majority of analyser problems) who are assigned responsibility
for the analysers? Do the trained personnel understand the process? Do they understand
any lab measurements which relate to the analyser results?
• Do the trained maintenance personnel have access to higher level technical support
as necessary for difficult analyser and/or process problems? Do they have ready
access to the individual who developed the application? Do they have ready access to the
vendor? Can higher level support personnel connect remotely to the analyser to observe
and troubleshoot?
• Do you have a maintenance record keeping system which documents all activity
involving the analysers, including all calibration/validation records and all repairs
and/or adjustments?
• Do you use the record keeping system to identify repetitive failure modes and to
determine the root cause of failures? Do you track the average time-to-repair analyser
problems? Do you track average time-between-failures for each analyser?
• Do you periodically review the analysers with higher level technical resources to
identify opportunities to significantly improve performance by upgrading the
analyser system with improved technology or a simpler/more reliable approach?
• Do you meet regularly with operations personnel to review analyser performance,
update priorities, and understand production goals?
• Do you have a management framework that understands the value of the analysers
and is committed to and supportive of reliable analysers?
• Do you know how much the maintenance programme costs each year and is there a
solid justification for it?
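One of the questions above asks whether calibration/validation applies SPC concepts. A minimal sketch, assuming the conventional ±2 sigma warning and ±3 sigma action limits and invented check-sample readings; actual limits should follow the site procedure (see 6.3):

```python
# Minimal SPC sketch: derive warning/action limits from repeated
# check-sample readings, then classify a new reading. Readings are
# invented; +/-2 and +/-3 standard deviations are a common convention.
from statistics import mean, stdev

history = [49.8, 50.1, 50.0, 49.9, 50.2, 50.0, 49.7, 50.1]  # check-sample readings
centre, sigma = mean(history), stdev(history)

def classify(reading):
    dev = abs(reading - centre)
    if dev > 3 * sigma:
        return "action"    # outside action limit: investigate/repair
    if dev > 2 * sigma:
        return "warning"   # outside warning limit: watch closely
    return "in control"

print(classify(50.05), classify(50.7))  # -> in control action
```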
Consideration of the above questions will help to identify opportunities for continuously
improving the reliability of installed process analysers. Once the opportunities are identified
the following clauses are intended to give guidance in achieving the solutions with the aim of:
• maximising performance and benefit of installed analysers;
• achieving full operator confidence in the use of on-line analysers;
• analyser output data becoming reliable enough to be used by operators, control systems
and other users, improving plant operation against world-class manufacturing metrics and
making the installed instruments the best process analysers possible.
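Several of the questions above concern tracked reliability metrics: uptime, mean time between failures and mean time to restoration. A minimal sketch of deriving them from a failure log, with hypothetical data and the usual steady-state availability approximation:

```python
# Hypothetical failure log; a real CMMS export would map to the same idea.
failures = [
    # (hours of operation before failure, hours to restore)
    (900.0, 6.0),
    (1200.0, 10.0),
    (700.0, 8.0),
]

mtbf = sum(up for up, _ in failures) / len(failures)  # mean time between failures
mttr = sum(dn for _, dn in failures) / len(failures)  # mean time to restoration
availability = mtbf / (mtbf + mttr)                   # steady-state approximation

print(f"MTBF {mtbf:.0f} h, MTTR {mttr:.1f} h, availability {availability:.3f}")
```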
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.

ISO and IEC maintain terminological databases for use in standardization at the following
addresses:
• IEC Electropedia: available at http://www.electropedia.org/
• ISO Online browsing platform: available at http://www.iso.org/obp
3.1
availability
ability of an item to be in a state to perform a required function under given conditions at a
given instant of time or over a given time interval, assuming that the required external
resources are provided
3.2
catastrophic failure
failure of a component, equipment or system in which its particular performance characteristic
moves completely to one or the other of the extreme limits outside the normal specification
range
3.3
consequence
measure of the expected effects of an incident outcome case
3.4
control system
system which responds to input signals from the process and/or from an operator and
generates signals causing the equipment under control (EUC) to operate in the desired
manner
3.5
diversity
performance of the same overall function by a number of independent and different means
3.6
error
discrepancy between a computed, observed or measured value or condition and the true,
specified or theoretically correct value or condition
[SOURCE: IEC 60050-192:2015, 192-03-02, modified — the notes have been deleted]
3.7
failure
termination of the ability of an item to perform a required function
[SOURCE: IEC 60050-603:1986, 603-05-06]
3.8
fault
state of an item characterized by the inability to perform a required function, excluding the
inability during preventive maintenance or other planned actions, or due to lack of external
resources
3.9
design fault
fault in the design caused by a mistake in the design phase of a system
Note 1 to entry: A design fault causes an error, remaining undetected in a part of the system until specific
conditions affecting that part of the system are such that the produced result does not conform to the intended
function. This results in a failure of that part of the system. If the conditions appear again, the same results will be
produced.
3.10
undetected fault
fault which is not detected by a diagnostic check
3.11
mistake
human error
human action that produces an unintended result
3.12
failed state
condition of a component, equipment or system during the time it is subject to a failure
3.13
fault tree analysis
analysis to determine which fault modes of the sub-items or external events, or combinations
thereof, may result in a stated fault mode of the item, presented in the form of a fault tree
3.14
functional safety
ability of a safety related system to carry out the actions necessary to achieve a safe state for
the EUC or to maintain the safe state for the EUC
3.15
hazard
physical situation with a potential for human injury
3.16
maintainability
ability of an item under given conditions of use to be retained in or restored to a state in which
it can perform a required function, when maintenance is performed under given conditions
and using stated procedures and resources
[SOURCE: IEC 60050-192:2015, 192-01-27, modified]
3.17
mean time between failures
MTBF
expectation of the duration of the operating time between failures
[SOURCE: IEC 60050-192:2015, 192-05-13, modified — "operating" is omitted from the
definition and the note has been deleted]
3.18
mean time to failure
MTTF
expectation of the operating time to failure
[SOURCE: IEC 60050-192:2015, 192-05-11, modified — "operating" is omitted from the
definition and the notes have been deleted]
3.19
mean time to restoration
MTTR
expectation of the time to restoration
[SOURCE: IEC 60050-192:2015, 192-07-23, modified — the note has been deleted]

3.20
proof testing
method of ensuring that a component, equipment or system possesses all the required
performance characteristics and is capable of responding in the manner desired
3.21
random hardware failure
failure occurring at a random time, which results from one or more of the possible degradation
mechanisms in the hardware
Note 1 to entry: There are many degradation mechanisms occurring at different rates in different components,
and, since manufacturing tolerances cause components to fail due to these mechanisms after different times in
operation, failures of equipment comprising many components occur at predictable rates but at unpredictable (i.e.
random) times.
Note 2 to entry: A major distinguishing feature between random hardware failures and systematic failures, is that
system failure rates (or other appropriate measures), arising from random hardware failures, can be predicted with
reasonable accuracy but systematic failures, by their very nature cannot be predicted. That is, system failure
arising from random hardware failure rates can be quantified with reasonable accuracy but those arising from
systematic failure cannot be accurately quantified because events leading to them cannot easily be predicted.
3.22
redundancy
in an item, the existence of more than one means for performing a required function
[SOURCE: IEC 60050-351:2013, 351-42-28, modified — the notes have been deleted]
3.23
reliability
ability of an item to perform a required function under given conditions for a given time
interval
[SOURCE: IEC 60050-395:2014, 395-07-131, modified — the notes have been deleted]
3.24
risk
probable rate of occurrence of a hazard causing harm and the degree of severity of harm
Note 1 to entry: The concept of risk always has two elements: the frequency or probability at which a hazard
occurs and the consequences of the hazard event.
3.25
safety
freedom from unacceptable risk of harm
3.26
safety integrity
SI
probability of a safety related system satisfactorily performing the required safety functions
under all the stated conditions within a stated period of time
3.27
safety integrity level
SIL
one of four possible discrete levels for specifying the safety integrity requirements of the
safety functions to be allocated to the safety related systems
Note 1 to entry: SIL 4 has the highest level of safety integrity; SIL 1 has the lowest.

– 14 – IEC TR 62010:2016 © IEC 2016
3.28
safety-related system
system that:
• implements the required safety functions to achieve a safe state for the EUC or to
maintain a safe state for the EUC;
• is intended to achieve, on its own, or with other safety-related systems, the necessary
level of integrity for the implementation of the required safety functions
3.29
safety-related control system
system which carries out active control of the EUC and which has the potential, if not in
accordance with its design intent, to enter an unsafe state
3.30
safety-related protection system
SRPS
system designed to respond to conditions on the EUC, which may also be hazardous, or if no
action was taken, could give rise to hazardous events, and to generate the correct outputs to
mitigate the hazardous consequences or prevent the hazardous events
3.31
safety requirements specification
specification that contains all the requirements of the safety functions that have to be
performed by the safety-related systems
Note 1 to entry: The specification is divided into:
• safety functions requirement specification;
• safety integrity requirement specification.
3.32
software
intellectual creation comprising the programmes, procedures, rules and any associated
documentation pertaining to the operation of a data processing system
3.33
system
set of components which interact according to a design
Note 1 to entry: A component may be another system (a subsystem). Such components (subsystems) may be,
depending on the level:
• a controlling or controller system,
• hardware, software, human interaction.
3.34
systematic failure
failure related in a deterministic way to a certain cause, which can only be eliminated by a
modification of the design or of the manufacturing process, operational procedures,
documentation or other relevant factors
[SOURCE: IEC 60050-395:2014, 395-07-133]
3.35
system life cycle
activities occurring during a period of time that starts when a system is conceived and ends
when the system is no longer available

3.36
top event
unwanted event or incident at the 'top' of a fault tree that is traced downward to more basic
failures using logic gates to determine its causes and likelihood
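As an illustration (not part of this technical report), a top event likelihood can be estimated from basic-event probabilities through the fault tree's logic gates: assuming independent basic events, an AND gate multiplies probabilities, while an OR gate combines them as 1 − Π(1 − p). A minimal sketch with hypothetical event probabilities:

```python
# Illustrative sketch only (not from IEC TR 62010): estimating a top-event
# probability from independent basic-event probabilities via logic gates.

def and_gate(*probs):
    """AND gate: the output fails only if all inputs fail; probabilities multiply."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """OR gate: any failing input suffices; 1 minus the product of survivals."""
    survival = 1.0
    for p in probs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Hypothetical basic events: analyser failure, operator misses the alarm,
# independent protection layer fails.
p_analyser = 0.05
p_operator = 0.10
p_protection = 0.01

# Top event: analyser fails AND (operator misses alarm OR protection fails).
p_top = and_gate(p_analyser, or_gate(p_operator, p_protection))
print(p_top)  # ≈ 0.05 * (1 - 0.9 * 0.99) ≈ 0.00545
```

The gate arithmetic holds only under the independence assumption; common-cause failures require the more detailed treatments referenced elsewhere in the report.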
3.37
validation
confirmation by examination and provision of objective evidence that the particular
requirements for a specific intended use are fulfilled
3.38
verification
confirmation by examination and provision of objective evidence that the specified
requirements have been fulfilled
4 Classifying analysers using a risk based approach
4.1 General
Assigning on-line analysers to the functional categories of safety, environmental, asset
protection or profit maximisation requires a means of determining the priority for
performance target setting and maintenance direction of each instrument within its designed
functional category. This can be achieved using a risk graph, whereby the target category
rating of an analyser is derived from the required risk factor. The hazard rate of the event the
analyser is designed to protect against (the so-called top event) and the consequence of that
top event should be known.
The method takes the principle and general format of the risk graph approach from
IEC 61508-5 [2]. However, as this document is aimed at analyser maintenance priorities, it
should be noted that:
• where analysers are part of a safety system, this is not an alternative approach to
determining safety integrity levels (SILs); where SILs demand certain proof-checking
periods, duplication of analysers, etc., those requirements take precedence;
• the ranking system adopted is in line with accepted analyser maintenance practice, i.e.
the highest priority is '1' and the lowest priority is '3'.
_________
Numbers in square brackets refer to the Bibliography.

[Figure 2 is a risk graph, not reproduced here: from the starting point for priority
assessment, the graph branches on the risk parameter (R), then the exposure parameter (E)
and the intervention parameter (I), and ends at a row of outcomes, one for each process
demand column PD 1, PD 2 and PD 3.]
Key:
R = Risk parameter
E = Exposure parameter
I = Intervention parameter
PD = Process demand parameter
1, 2, 3 = Priority level
a = Special maintenance consideration
b = No particular priority requirement
Figure 2 – Generalized risk graph
Using the generalized risk graph shown in Figure 2, each design functional category is
considered in turn. The risk graph for each analyser function should be 'calibrated'. This is
best achieved by defining the consequences of failure and then evaluating a number of
scenarios; the exercise will establish whether the outcome, in terms of risk reduction, is
appropriate to the applications.
At the starting point of working through the risk graph towards priority setting, it is necessary
to establish the first element, the risk parameter (R), i.e. the main area of impact associated
with analyser failure, for example plant damage, loss of profit, environmental damage, or
serious injury/loss of life. The second element, the exposure parameter (E), is based on
judgements of the importance of the analyser in keeping the plant running, for example high
risk of immediate/short-term damage, the plant control scheme's ability to function,
environmental consent limitations, area sensitivity, or the frequency of exposure of personnel
to the hazard. The third element is the intervention parameter (I), an assessment of whether
operator intervention can mitigate the impact of the failure. The graph then leads to the
prioritisation box, which gives the priority choice based on the process demand
parameter (PD), i.e. the likelihood of the process requiring the measurement when a failure
occurs. Table 1 summarises a typical application of the elements in the risk graph, and
explanatory notes are given in 4.1, 4.2, 4.3 and 4.4.
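The walk through the risk graph can be sketched as a lookup: the R, E and I parameters select a row of outcomes, and the PD column picks the final result. The mapping below is purely hypothetical; the actual R/E/I/PD-to-priority paths belong to the calibrated Figure 2 for a given application and are not reproduced here:

```python
# Hypothetical sketch of a risk-graph priority lookup. The row/column
# contents are illustrative placeholders, NOT the mapping of Figure 2,
# which should be calibrated per application.

def priority(r, e, i, pd):
    """Return a maintenance priority for risk parameter r (1 = most severe .. 3),
    exposure e (1 = high, 2 = low), intervention i (1 = unlikely, 2 = possible)
    and process demand pd (1 = high .. 3 = low).
    '1'..'3' = priority level, 'a' = special maintenance consideration,
    'b' = no particular priority requirement."""
    # The more severe each parameter, the lower the intermediate row index
    # and the higher the resulting priority.
    row = (r - 1) + (e - 1) + (i - 1)   # 0 (worst case) .. 4 (best case)
    table = {                            # row -> outcome per PD column
        0: ['a', 'a', '1'],
        1: ['1', '1', '2'],
        2: ['1', '2', '3'],
        3: ['2', '3', 'b'],
        4: ['3', 'b', 'b'],
    }
    return table[row][pd - 1]

print(priority(r=1, e=1, i=1, pd=1))  # most severe path -> 'a'
print(priority(r=3, e=2, i=2, pd=3))  # least severe path -> 'b'
```

A table-driven lookup like this keeps the calibrated graph in one place, so recalibration for a new plant changes data rather than logic.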

Table 1 – Typical application of elements in the risk graph
R – most severe:
• Safety: Multiple fatalities on or off site
• Environmental: Release causing permanent damage or major clean-up costs
• Asset protection: Damage with major replacement costs
• Profit maximisation: Production profit margins high
R – moderate:
• Safety: Fatality on or off site, or injury resulting in hospitalisation of a member of the public or staff
• Environmental: Release causing temporary damage requiring significant clean-up
• Asset protection: Damage with moderate replacement costs
• Profit maximisation: Production profit margins medium
R – least severe:
• Safety: Minor injury with lost-time impact
• Environmental: Release with minor damage which should be recorded, or failure to record critical data
• Asset protection: Damage with minor replacement costs or no damage
• Profit maximisation: Production profit margins low
E – high exposure:
• Safety: Frequency of exposure to the hazard is more frequent to permanent
• Environmental: Consent restrictions and/or sensitive area
• Asset protection: High risk of immediate/short-term damage
• Profit maximisation: Control scheme cannot function
E – low exposure:
• Safety: Frequency of exposure to the hazard is rare to more often
• Environmental: No consent restrictions and/or non-sensitive area
• Asset protection: Low risk of immediate/short-term damage
• Profit maximisation: Control scheme can function in the short term
I: Unlikely that operator action will prevent or mitigate circumstances
I: Possible for operator to take acti
...
