Analyser systems - Guidance for maintenance management

IEC TR 62010:2005 provides an understanding of analyser maintenance to individuals from a non-engineering background. It is also designed as a reference source for individuals more closely involved with the maintenance of analytical instrumentation, and provides guidance on performance target-setting, strategies to improve reliability, methods to measure effective performance, and the organizations, resources and systems that need to be in place to allow this to occur.

General Information

Status: Published
Publication Date: 27-Oct-2005
Current Stage: DELPUB - Deleted Publication
Start Date: 15-Dec-2016
Completion Date: 26-Oct-2025
Ref Project:

Relations

Technical report
IEC TR 62010:2005 - Analyser systems - Guidance for maintenance management
Released: 28-Oct-2005
ISBN: 2831882451
English language, 71 pages

Standards Content (Sample)


IEC TECHNICAL REPORT TR 62010
First edition
2005-10
Analyser systems –
Guidance for maintenance management

Reference number
IEC/TR 62010:2005(E)
Publication numbering
As from 1 January 1997 all IEC publications are issued with a designation in the
60000 series. For example, IEC 34-1 is now referred to as IEC 60034-1.
Consolidated editions
The IEC is now publishing consolidated versions of its publications. For example,
edition numbers 1.0, 1.1 and 1.2 refer, respectively, to the base publication, the
base publication incorporating amendment 1 and the base publication incorporating
amendments 1 and 2.
Further information on IEC publications
The technical content of IEC publications is kept under constant review by the IEC,
thus ensuring that the content reflects current technology. Information relating to
this publication, including its validity, is available in the IEC Catalogue of
publications (see below) in addition to new editions, amendments and corrigenda.
Information on the subjects under consideration and work in progress undertaken
by the technical committee which has prepared this publication, as well as the list
of publications issued, is also available from the following:
• IEC Web Site (www.iec.ch)
• Catalogue of IEC publications
The on-line catalogue on the IEC web site (www.iec.ch/searchpub) enables you to
search by a variety of criteria including text searches, technical committees
and date of publication. On-line information is also available on recently issued
publications, withdrawn and replaced publications, as well as corrigenda.
• IEC Just Published
This summary of recently issued publications (www.iec.ch/online_news/ justpub)
is also available by email. Please contact the Customer Service Centre (see
below) for further information.
• Customer Service Centre
If you have any questions regarding this publication or need further assistance,
please contact the Customer Service Centre:

Email: custserv@iec.ch
Tel: +41 22 919 02 11
Fax: +41 22 919 03 00
IEC TECHNICAL REPORT TR 62010
First edition
2005-10
Analyser systems –
Guidance for maintenance management

© IEC 2005 - Copyright - all rights reserved
No part of this publication may be reproduced or utilized in any form or by any means, electronic or
mechanical, including photocopying and microfilm, without permission in writing from the publisher.
International Electrotechnical Commission, 3, rue de Varembé, PO Box 131, CH-1211 Geneva 20, Switzerland
Telephone: +41 22 919 02 11 Telefax: +41 22 919 03 00 E-mail: inmail@iec.ch Web: www.iec.ch
PRICE CODE: XB
Commission Electrotechnique Internationale
International Electrotechnical Commission
Международная Электротехническая Комиссия
For price, see current catalogue

CONTENTS
FOREWORD ... 3

1 Scope and object ... 7
1.1 Purpose of this technical report ... 7
1.2 Safety and environment ... 7
2 Normative references ... 9
3 Terms and definitions ... 9
4 Classifying analysers using a risk-based approach ... 13
4.1 Introduction ... 13
4.2 Safety protection ... 15
4.3 Environmental protection ... 15
4.4 Asset protection ... 17
4.5 Profit maximization ... 18
4.6 Performance target ... 18
4.7 Maintenance priority ... 19
4.8 Support priority ... 19
5 Maintenance strategies ... 19
5.1 Introduction ... 19
5.2 Reliability centred maintenance (RCM) ... 20
5.3 Management systems/organization ... 23
5.4 Training/competency ... 24
5.5 Optimal resourcing ... 27
5.6 Best-practice benchmarking ... 29
5.7 Annual analyser key performance indicator (KPI) review ... 30
6 Analyser performance monitoring ... 30
6.1 Introduction ... 30
6.2 Recording failures – Reason/history codes ... 31
6.3 SPC/proof-checking ... 33
6.4 Analyser performance indicators ... 36
6.5 Analyser performance reporting ... 46

Appendix 1 Step 1 – Equivalent analyser per technician (EQAT) – Calculation methodology ... 48
Appendix 2 Step 2 – Equivalent analyser per technician (EQAT) – Calculation methodology ... 49
Appendix 3 SPC techniques applied to analysers – Interpreting control-chart readings ... 56
Appendix 4 Adopting a strategy ... 60
Appendix 5 Analyser benchmark by key success factor analysis ... 61
Appendix 6 Analyser maintenance cost against benefit example ... 66
Appendix 7 Analyser performance typical results ... 70

INTERNATIONAL ELECTROTECHNICAL COMMISSION
____________
ANALYSER SYSTEMS –
GUIDANCE FOR MAINTENANCE MANAGEMENT

FOREWORD
1) The International Electrotechnical Commission (IEC) is a worldwide organization for standardization comprising
all national electrotechnical committees (IEC National Committees). The object of IEC is to promote
international co-operation on all questions concerning standardization in the electrical and electronic fields. To
this end and in addition to other activities, IEC publishes International Standards, Technical Specifications,
Technical Reports, Publicly Available Specifications (PAS) and Guides (hereafter referred to as “IEC
Publication(s)”). Their preparation is entrusted to technical committees; any IEC National Committee interested
in the subject dealt with may participate in this preparatory work. International, governmental and non-
governmental organizations liaising with the IEC also participate in this preparation. IEC collaborates closely
with the International Organization for Standardization (ISO) in accordance with conditions determined by
agreement between the two organizations.
2) The formal decisions or agreements of IEC on technical matters express, as nearly as possible, an international
consensus of opinion on the relevant subjects since each technical committee has representation from all
interested IEC National Committees.
3) IEC Publications have the form of recommendations for international use and are accepted by IEC National
Committees in that sense. While all reasonable efforts are made to ensure that the technical content of IEC
Publications is accurate, IEC cannot be held responsible for the way in which they are used or for any
misinterpretation by any end user.
4) In order to promote international uniformity, IEC National Committees undertake to apply IEC Publications
transparently to the maximum extent possible in their national and regional publications. Any divergence
between any IEC Publication and the corresponding national or regional publication shall be clearly indicated in
the latter.
5) IEC provides no marking procedure to indicate its approval and cannot be rendered responsible for any
equipment declared to be in conformity with an IEC Publication.
6) All users should ensure that they have the latest edition of this publication.
7) No liability shall attach to IEC or its directors, employees, servants or agents including individual experts and
members of its technical committees and IEC National Committees for any personal injury, property damage or
other damage of any nature whatsoever, whether direct or indirect, or for costs (including legal fees) and
expenses arising out of the publication, use of, or reliance upon, this IEC Publication or any other IEC
Publications.
8) Attention is drawn to the Normative references cited in this publication. Use of the referenced publications is
indispensable for the correct application of this publication.
9) Attention is drawn to the possibility that some of the elements of this IEC Publication may be the subject of
patent rights. IEC shall not be held responsible for identifying any or all such patent rights.
The main task of IEC technical committees is to prepare International Standards. However, a
technical committee may propose the publication of a technical report when it has collected
data of a different kind from that which is normally published as an International Standard, for
example "state of the art".
IEC 62010, which is a technical report, has been prepared by subcommittee 65D: Analysing
equipment, of IEC technical committee 65: Industrial-process measurement and control.
This document has been provided by the Engineering Equipment and Materials Users
Association (EEMUA) with their copyright © 2000.
The text of this technical report is based on the following documents:
Enquiry draft: 65D/109/DTR
Report on voting: 65D/122/RVC
Full information on the voting for the approval of this technical report can be found in the
report on voting indicated in the above table.

This publication has been drafted in accordance with the ISO/IEC Directives, Part 2.
The committee has decided that the contents of this publication will remain unchanged until
the maintenance result date indicated on the IEC web site under "http://webstore.iec.ch" in
the data related to the specific publication. At this date, the publication will be
• reconfirmed,
• withdrawn,
• replaced by a revised edition, or
• amended.
A bilingual version of this publication may be issued at a later date.

0 Introduction
In connection with the publication of EEMUA 187, the following text is related to the legal
aspects of its publication in the U.K.
0.1 Legal aspects
In order to ensure that nothing in this publication can in any manner offend against, or be
affected by, the provisions of the Restrictive Trade Practices Act 1976, the recommendations
which it contains will not take effect until the day following that on which its particulars are
furnished to the Office of Fair Trading.
As the subject dealt with seems likely to be of wide interest, this publication is also being
made available for sale to non-members of the Association. Any person who encounters an
inaccuracy or ambiguity when making use of this publication is asked to notify EEMUA without
delay so that the matter may be investigated and appropriate action taken.
It has been assumed in the preparation of this publication that the user will ensure selection
of those parts of its contents appropriate to the intended application and that such selection
and application are correctly carried out by appropriately qualified and experienced people for
whose guidance the publication has been prepared. EEMUA does not, and indeed cannot,
make any representation or give any warranty or guarantee in connection with material
contained in its publications, and expressly disclaims any liability or responsibility for damage
or loss resulting from their use. Any recommendations contained herein are based on the
most authoritative information available at the time of writing and on good engineering
practice, but it is essential for the user to take account of pertinent subsequent developments
or legislation.
All rights are reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means: electronic, mechanical, photocopying,
recording, or other.
Infringing the copyright is not only breaking the law, but also, through reduction in the
Association’s income, jeopardizes the availability of future publications.
0.2 Overview
This guidance defines the best practices in the maintenance of on-line analysers. Analysers
are used in industry to measure variables which significantly contribute to safety,
environmental, asset protection and profit maximization.
Maintenance organization, prioritizing of maintenance effort, maintenance methods, correct
resourcing, performance monitoring and reporting all play an important role in successful
application of on-line analysers.
The ultimate effectiveness of the contribution of on-line analysers is measured by the ability to
perform their functional requirements upon demand. This technical report gives guidance on
performance target-setting, strategies to improve reliability, methods to measure effective
performance, and the organizations, resource and systems that need to be in place to allow
this to occur.
The various subjects covered in this document are discrete items and can appear unrelated in
the overall scheme of analyser maintenance procedures and strategies. The following flow
path ties the sections together in a logical sequence of approach.

0.3 Flowpath detailing inter-relationships of document subject-matter

[Figure: flow diagram with the boxes "Establish analyser criticality (Section 3)", "Define maintenance strategies (Section 4)", "Review criticality and maintenance strategy with operations/customers (Sections 4.2.5, 4.6 and 4.7)" and "Monitoring of analyser performance using defined measurement parameters (Section 5)". IEC 1684/05]
Figure 1 – Flowpath
This technical report provides a mechanism by which the criticality of an analyser can be
determined by means of a risk assessment, the risk assessment being based upon
consideration of the consequence of the loss of the analysis to the operation of a process
unit, or group of process units, personnel/plant safety and the environment.
Determination of a criticality rating for the analyser allows target values for reliability to be set
for each criticality classification and prioritization for maintenance and support. Such
approaches are covered in Clause 4.
A number of strategies designed to allow the target reliabilities calculated by the risk
assessments to be met are defined in Clause 5.
Finally, mechanisms for tracking analyser performance and quantifying the performance as
meaningful measures are presented in Clause 6.

ANALYSER SYSTEMS –
GUIDANCE FOR MAINTENANCE MANAGEMENT

1 Scope and object
This technical report applies to analyser systems.
1.1 Purpose of this technical report
This technical report is written with the intention of providing an understanding of analyser
maintenance to individuals from a non-engineering background. It is also designed as a
reference source to individuals more closely involved with maintenance of analytical
instrumentation, and provides guidance on performance target-setting, strategies to improve
reliability, methods to measure effective performance, and the organizations, resources and
systems that need to be in place to allow this to occur.
Effective management of on-line analysers is only possible when key criteria have been
identified, and tools for measuring these criteria established.
On-line analysers are used in industry for one of the following reasons.
1.2 Safety and environment
One category of analysers comprises those used to control and monitor safety and environmental
systems. The key measured parameter for this category of analyser is on-line time. This is
essentially simpler to measure than an analyser’s contribution to profits but, as with process
analysers applied for profit maximization, the contribution will be dependent upon the ability to
perform its functional requirements upon demand.
1.2.1 Asset protection and profit maximization
On-line analysers falling into this category are normally those impacting directly on process
control. They may impact directly on protection of assets (for example, corrosion, catalyst
contamination) or product quality, or may be used to optimize the operation of the process (for
example, energy efficiency).
For this category of analysers, the key measured parameter is either the cost of damage to
plant or the direct effect on overall profit of the process unit. Justification as to whether an
analyser should be installed on the process may be sought by quantifying the payback time of
the analyser, the pass/fail target typically being 18 months, although it should be noted that
the contribution of the analyser to reduction in the extent of damage to, or the profit of, the
process unit is difficult to measure. However, this contribution will be dependent upon the
analyser’s ability to perform its functional requirements upon demand.
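As a minimal illustration of the payback test described above (a sketch only; the cost and benefit figures are hypothetical and not taken from this report):

# Illustrative payback check against the 18-month pass/fail target cited in 1.2.1.
# All figures below are hypothetical placeholders, not data from this report.
installed_cost = 120_000   # purchase + installation cost of the analyser system
annual_benefit = 90_000    # estimated yearly benefit (quality, yield, reduced damage)

payback_months = 12 * installed_cost / annual_benefit
print(f"Payback time: {payback_months:.1f} months")
print("Meets 18-month target" if payback_months <= 18 else "Does not meet 18-month target")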
This technical report focuses on the cost/benefits associated with traditional analyser
maintenance organizations. In a modern set-up, the complexity of analysers on occasion
demands data from chemometricians and scientists who may belong to other parts of the
organization, and, as such, care must be exercised to include their costs.
1.2.2 Questions that need to be addressed
When considering on-line analyser systems and their maintenance, the following list of key
points is useful in helping decide where gaps exist in the maintenance strategy. Additionally,
a structured mechanism by which the “health” of an analyser organization can be appraised is
provided in Appendix 5.
1. What is the UPTIME of each critical analyser? (Do you measure UPTIME and maintain
records? Do you know the value provided by each analyser and therefore which ones are
critical? Do you meet regularly with operations ("the customer") to review priorities?)
2. What is the VALUE delivered by each analyser in terms of process performance
improvement (i.e. improved yield figures, improved quality, improved manufacturing cycle
time and/or process cycle time, process safety (for example, interlocks), environmental
importance)? (Is this information readily available and agreed to in meetings with
operations? Is the value updated periodically?)
3. What is the "utilization" of each critical analyser – that is, if the analyser is used in a
control loop, what percentage of the time is the loop on manual due to questions about the
analyser data? (Do you keep records on the amount of time that analyser loops are in
automatic? Do you meet regularly with operations to review the operators' feelings about
the "believability" of the analyser data?)
4. Do you have a regular preventive maintenance programme set up for each analyser which
includes regular calibrations? (Does the calibration/validation procedure include statistical
process control concepts – upper/lower limits and measurement of analyser variability (or
noise)? Is the procedure well documented? Do you conduct it regularly? Even when things
are running well?)
5. Do you have trained personnel (capable of performing all required procedures and
troubleshooting the majority of analyser problems) who are assigned responsibility for the
analysers? (Do the trained personnel understand the process? Do they understand any
laboratory measurements which relate to the analyser results?)
6. Do the trained maintenance personnel have access to higher level technical support as
necessary for difficult analyser and/or process problems? (Do they have ready access to
the individual who developed the application? Do they have ready access to the vendor?
Can higher level support personnel connect remotely to the analyser to observe and
troubleshoot?)
7. Do you have a maintenance record-keeping system which documents all activity involving
the analysers, including all calibration/validation records and all repairs and/or adjustments?
(Do you use the record-keeping system to identify repetitive failure modes and to
determine the root cause of failures? Do you track the average time to repair analyser
problems? Do you track the average time between failures for each analyser? A minimal
calculation sketch for these measures is given at the end of this subclause.)
8. Do you periodically review the analysers with higher level technical resources to identify
opportunities to significantly improve performance by upgrading the analyser system with
improved technology or a simpler/more reliable approach?
9. Do you meet regularly with operations to review analyser performance, update priorities,
and understand production goals?
10. Do you have management who understand the value of the analysers and are committed
to, and supportive of, reliable analysers?
11. Do you know how much the maintenance programme costs each year and is there solid
justification for it?
Consideration of the above questions will help to identify opportunities for continuously
improving the reliability of installed process analysers. Once the opportunities are identified,
the following sections are intended to give guidance in achieving the solutions with the aim of
• maximizing the performance and benefit of installed analysers;
• achieving full operator confidence in the use of on-line analysers;
• analyser output data becoming reliable enough to be used by operators, control systems,
and other users to improve plant operation versus world-class manufacturing metrics and
become best-of-the-best.
2 Normative references
The following referenced documents are indispensable for the application of this document.
For dated references, only the edition cited applies. For undated references, the latest edition
of the referenced document (including any amendments) applies.
IEC 61508 (all parts), Functional safety of electrical/electronic/programmable electronic
safety-related systems
IEC 61508-5, Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 5: Examples of methods for the determination of safety integrity levels
IEC 61649, Goodness-of-fit tests, confidence intervals and lower confidence limits for Weibull
distributed data
IEC 61710, Power law model – Goodness-of-fit tests and estimation methods
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
3.1
availability
ability of an item to be in a state to perform a required function under given conditions at a
given instant of time or over a given time interval, assuming that the required external
resources are provided
[IEV 191-02-05]
3.2
catastrophic failure
failure of a component, equipment or system in which its particular performance characteristic
moves completely to one or the other of the extreme limits outside the normal specification
range
3.3
consequence
measure of the expected effects of an incident outcome case
3.4
control system
system which responds to input signals from the process and/or from an operator and
generates signals causing the EUC to operate in the desired manner
3.5
diversity
performance of the same overall function by a number of independent and different means
3.6
error/fault/failure/mistake
• fault: state of an item characterized by inability to perform a required function, excluding
the inability during preventive maintenance or other planned actions, or due to lack of
external resources [IEV 191-05-01]
—————————
IEC 60050-191, International Electrotechnical Vocabulary (IEV) – Chapter 191: Dependability and quality of
service.
• undetected fault: fault which is not detected by a diagnostic check
• design fault: fault in the design caused by a mistake in the design phase of a system.
A design fault causes an error, remaining undetected in a part of the system until specific
conditions affecting that part of the system are such that the produced result does not
conform to the intended function. This results in a failure of that part of the system. If the
conditions appear again, the same results will be produced
• error: discrepancy between a computed, observed or measured value or condition and the
true, specified or theoretically correct value or condition [IEV 191-05-24]
• failure: termination of the ability of an item to perform a required function [IEV 191-04-01]
• mistake/human error: human action that produces an unintended result [IEV 191-05-25]
• failed state: condition of a component, equipment or system during the time when it is
subject to a failure
3.7
fault tree analysis
analysis to determine which fault modes of the subitems or external events, or combinations
thereof, may result in a stated fault mode of the item, presented in the form of a fault tree
[IEV 191-16-05]
3.8
functional safety
ability of a safety-related system to carry out the actions necessary to achieve a safe state for
the EUC or to maintain the safe state for the EUC
3.9
hazard
physical situation with a potential for human injury
3.10
level of safety
level of how far safety is to be pursued in a given context, assessed with reference to an
acceptable risk, based on the current values of society
3.11
maintainability
ability of an item, under given conditions of use, to be retained in, or restored to, a state in
which it can perform a required function, when maintenance is performed under given
conditions and using stated procedures and resources
[IEV 191-02-07]
3.12
mean time between failures
MTBF
expectation of the operating time between failures
[IEV 191-12-09]
3.13
mean time to failure
MTTF
expectation of the time to failure
[IEV 191-12-07]
3.14
mean time to repair
MTTR
expectation of the time to restoration
[IEV 191-13-08]
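NOTE A relation commonly used in reliability engineering (stated here only as an illustrative aside; it is not part of the IEV definitions quoted above) links these quantities for a repairable item in steady state: availability A = MTTF / (MTTF + MTTR), with MTBF often used in place of MTTF when repair times are short compared with operating times.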
3.15
proof-testing
method of ensuring that a component, equipment or system possesses all the required
performance characteristics and is capable of responding in the manner desired
3.16
random hardware failure
failure occurring at a random time, which results from one or more of the possible degradation
mechanisms in the hardware
NOTE 1 There are many degradation mechanisms occurring at different rates in different components, and, since
manufacturing tolerances cause components to fail due to these mechanisms after different times in operation,
failures of equipment comprising many components occur at predictable rates but unpredictable (i.e. random)
times.
NOTE 2 A major distinguishing feature between random hardware failures and systematic failures is that system
failure rates (or other appropriate measures) arising from random hardware failures can be predicted with
reasonable accuracy but systematic failures, by their very nature, cannot be predicted. That is, system failure
arising from random hardware failure rates can be quantified with reasonable accuracy, but those arising from
systematic failure cannot be accurately quantified because events leading to them cannot easily be predicted.
3.17
redundancy
in an item, the existence of more than one means of performing a required function
[IEV 191-15-01]
3.18
reliability
probability that an item will perform a required function under given conditions for a given time
interval (t1, t2)
[IEV 191-12-01]
3.19
risk
probable rate of occurrence of a hazard causing harm and the degree of severity of harm. The
concept of risk always has two elements; the frequency or probability at which a hazard
occurs and the consequences of the hazard event
3.20
safety
freedom from unacceptable risk of harm
3.21
safety integrity
SI
probability of a safety-related system satisfactorily performing the required safety functions
under all the stated conditions within a stated period of time
3.22
safety integrity level
SIL
one of four possible discrete levels for specifying the safety-integrity requirements of the
safety functions to be allocated to the safety-related systems, SIL4 having the highest level of
safety integrity, SIL1 the lowest

3.23
safety-related system
SRS
system that
• implements the required safety functions to achieve a safe state for the EUC or to
maintain a safe state for the EUC; and
• is intended to achieve, on its own, or with other safety-related systems, the necessary
level of integrity for the implementation of the required safety functions
3.24
safety-related control system
SRCS
system which carries out active control of the EUC and which has the potential, if not in
accordance with its design intent, to enter an unsafe state
3.25
safety-related protection systems
SRPS
designed to respond to conditions on the EUC, which may be hazardous in themselves, or if
no action were taken, could give rise to hazardous events, and to generate the correct outputs
to mitigate the hazardous consequences or prevent the hazardous events
3.26
safety requirements specification
specification that contains all the requirements of the safety functions that have to be
performed by the safety-related systems divided into
• safety-functions requirement specification;
• safety-integrity requirement specification
3.27
software
intellectual creation comprising the programs, procedures, rules and any associated
documentation pertaining to the operation of a data processing system
3.28
system
set of components which interact according to a design. A component may be another system
(a subsystem). Such components (subsystems) may be, depending on the level:
• a controlling or a controlled system; and
• hardware, software, human interaction
3.29
systematic failure
failure related in a deterministic way to a certain cause, which can only be eliminated by a
modification of the design or of the manufacturing process, operational procedures,
documentation or other relevant factors
[IEV 191-04-19]
3.30
system life cycle
activities occurring during a period of time that starts when a system is conceived and ends
when the system is no longer available

3.31
systematic safety integrity
IS
that part of the safety integrity of safety-related systems relating to systematic failures in a
dangerous mode of failure
3.32
top event
unwanted event or incident at the “top” of a fault tree that is traced downward to more basic
failures using logic gates to determine its causes and likelihood
3.33
validation
confirmation by examination and provision of objective evidence that the particular require-
ments for a specific intended use are fulfilled
3.34
verification
confirmation by examination and provision of objective evidence that the specified
requirements have been fulfilled
4 Classifying analysers using a risk-based approach
4.1 Introduction
Definition of on-line analysers as being related to the functional categories of safety,
environmental, asset protection or profit maximization necessitates that the capability exists
to determine the required priority for performance target-setting and maintenance direction of
each instrument by designed functional category. This can be achieved using a risk graph,
whereby the target category rating of an analyser is calculated on the basis of the required
risk factor. The hazard rate of the event the analyser is designed to protect against (the so-
called top event) and the consequence of the top event must be known.
The method takes the principle and general format of the risk graph approach for IEC 61508-5.
However, as this technical report is aimed at analyser maintenance priorities, it must be noted
that
a) where analysers are part of a safety system, it is not an alternative approach to
determining safety integrity levels (SILs) and where SILs demand certain proof-checking
periods, duplication of analysers etc., these will take precedence;
b) the ranking system adopted is in line with accepted analyser maintenance practice, i.e. the
highest priority is 1 and the lowest priority is 3.

[Figure: risk graph. From the starting point for priority assessment, the path branches on the risk parameter (R1, R2, R3), then the exposure parameter (E1, E2), then the intervention parameter (I1, I2); the resulting row is read against the process demand parameter (PD1, PD2, PD3) to give the priority.
R = Risk parameter; E = Exposure parameter; I = Intervention parameter; PD = Process demand parameter
1, 2, 3 = Priority level; a = Special maintenance consideration; b = No particular priority requirement
IEC 1685/05]
Figure 2 – Risk graph
Using the generalized risk graph shown in Figure 2, each design functional category is
considered in turn. The risk graph for each analyser function should be "calibrated". This is
best achieved by defining the consequences for failures, then evaluating a number of
scenarios. The exercise will establish if the outcome in terms of risk reduction is appropriate
to the applications.
At the starting point of working through the risk graph towards priority setting, it is necessary
to establish the initial element which is the risk parameter (R), i.e. the main area of impact
associated with analyser failure, for example, plant damage, loss of profit, environmental
damage, serious injury/loss of life. The second element is applied on judgements of
importance of the analyser in keeping the plant running and is termed the exposure parameter
(E), i.e. high risk of immediate/short-term damage, plant control scheme ability to function,
environmental consent limitations or area sensitivity, frequency of exposure of personnel to
hazard. The third element is the intervention parameter (I) which is an assessment of whether
operator intervention can mitigate the impact of the failure or not. The graph then leads to the
prioritization box which gives priority choice based on the process demand parameter (PD),
i.e. the likelihood of the process requiring the measurement when a failure occurs.
Table 1 summarizes a typical application of the elements in the risk graph, and explanatory
notes are given in 4.1, 4.2, 4.3 and 4.4.
Table 1 – Application of the elements in the risk graph

R1
  Safety: Multiple fatalities on or off site
  Environmental: Release causing permanent damage or major clean-up costs
  Asset protection: Damage with major replacement costs
  Profit maximization: Production profit margins high
R2
  Safety: Fatality on or off site, injury (resulting in hospitalization of a member of the public or staff)
  Environmental: Release causing temporary damage requiring significant clean-up
  Asset protection: Damage with moderate replacement costs
  Profit maximization: Production profit margins medium
R3
  Safety: Minor injury with lost-time impact
  Environmental: Release with minor damage which should be recorded, or failure to record critical data
  Asset protection: Damage with minor replacement costs or no damage
  Profit maximization: Production profit margins low
E1
  Safety: Frequency of exposure to the hazard is more frequent to permanent
  Environmental: Consent restrictions and/or sensitive area
  Asset protection: High risk of immediate/short-term damage
  Profit maximization: Control scheme cannot function
E2
  Safety: Frequency of exposure to the hazard is rare to more often
  Environmental: No consent restrictions and/or non-sensitive area
  Asset protection: Low risk of immediate/short-term damage
  Profit maximization: Control scheme can function in the short term
I1
  Unlikely that operator action will prevent or mitigate circumstances
I2
  Possible for the operator to take action to prevent the incident or to significantly reduce consequences where there is sufficient time and suitable facilities available
PD1
  Demand is frequent
PD2
  Demand occurs on an average basis
PD3
  Demand occurs very rarely
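As a rough sketch of how the risk-graph assessment in 4.1 might be recorded and applied in software (the parameter encoding follows Table 1; the single lookup entry shown is a placeholder for illustration only, since the actual routing must be read from the calibrated risk graph in Figure 2):

from dataclasses import dataclass

@dataclass
class Assessment:
    # Parameter encoding as in Table 1 (1 is the most severe/onerous value).
    R: int    # risk parameter: 1, 2 or 3
    E: int    # exposure parameter: 1 or 2
    I: int    # intervention parameter: 1 or 2
    PD: int   # process demand parameter: 1, 2 or 3

def priority(a: Assessment, graph: dict) -> str:
    """Look up the maintenance priority ('1', '2', '3', 'a' or 'b') from a calibrated risk graph."""
    return graph[(a.R, a.E, a.I, a.PD)]

# Placeholder graph with one illustrative entry; the real mapping comes from
# Figure 2 after calibration, not from this sketch.
example_graph = {(1, 1, 1, 1): "1"}
print(priority(Assessment(R=1, E=1, I=1, PD=1), example_graph))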
4.2 Safety protection
IEC 61508 defines the requirements for E/E/PES devices in all safety-related systems. This
technical report does not seek to add to or modify the requirements of that standard in
relation to a method for determining analyser priority on safety issues and takes the principle
and general format of the risk-graph approach for IEC 61508-5. However, as this technical
report is aimed at analyser maintenance priorities, it must be re-emphasized that the ranking
system adopted is in line with accepted analyser maintenance practice, i.e. the highest
priority is 1 and the lowest priority is 3.
The following should be noted when considering the use of analytical instrumentation as
measuring elements for safety-related systems.
The mean time between failures of analytical instrumentation is lower than that of standard
instrumentation used in safety-related systems (pressure, temperature and flow measure-
ments). This is especially true of complex analysers such as spectrometers and gas
chromatographs.
Should analytical instrumentation be utilized in safety-related systems, duplex and triplex
sensors, and frequent proof-checking would routinely be required to achieve the necessary
on-line times. These should be determined in accordance with the procedures and rules laid
down in IEC 61508 – the above risk-graph usage in this technical report is intended as a
guide only to setting maintenance priority and is not intended as an alternative route to
defining safety-integrity levels (SIL).
4.3 Environmental protection
Measurement of variables which impact directly on the environment is an increasingly
important function of on-line analytical instrumentation. Data produced by environmental
analysers may require submission to governmental bodies concerned with legal and
procedural aspects of environmental monitoring.
There is significant diversity in the nature of the techniques. Traditional applications and
methods are continuous air monitoring (CAM) and vent-emission monitoring by gas
chromatography or electrochemical sensors, organics in aqueous effluent by total carbon (TC)
and total oxygen demand (TOD), and acidity/basicity of aqueous effluent by electrochemical
pH sensor. These are supplemented by more modern techniques such as air-quality
monitoring by open-path spectrometry and elemental analysis by X-ray fluorescence.

Failure of the analyser to perform its specified function may lead to consequences R1, R2, or
R3 depicted in Table 1. It should be noted that environmental analysers are often used to
record data, but examples whereby analyser failure leads directly to consequential damage are
far fewer.
An R1 consequence would be illustrated by the failure of a CAM system interlocked to
process valves, the overall function of which would be to detect emission of chemicals and
actuate the valves in order to contain the bulk of the process inventory.
An instance of an R2 consequence would typically be the result of a failure of an organics in
aqueous effluent monitor to detect a high level, thus neglecting to divert the out-of-
specification effluent for further treatment before release to the surrounding environment.
Typically, an R3 consequence would be an oxygen-analyser failure on a burner, leading to
emission of partly combusted fuel; or the failure of a vent-gas composition analyser, with
failure to record environmentally critical data.
The second element of the graph requires a determination to be made on the environmental
status of the affected area.
Classification of an area as E2 would require the probability of causing harm to the
populations in the affected area to be low. The potential to cause political as well as physical
damage should be assessed. Should it be considered that the consequence of analyser
failure has the potential to significantly affect populations in the affected area, that area
should be classed as E1. Alternatively, if environmental consent limits are imposed by the
authorities, this will determine whether route E1 or E2 should prevail.
The third element requests a determination as to the likelihood of an operator mitigating the
consequences of analyser failure. The probability of operator intervention depends on the
nature of the operator’s intervention with the process. Where the operator is required to
directly carry out actions as a consequence of the analyser’s results, there will be a high
probability of positive intervention. Automated systems, whereby the operator has no direct
involvement in implementing process adjustment due to the measured variable, are more
prone to unrevealed failure. Analyser failure diagnostics, and the facility given to the operator
to mitigate the consequences of the failure by manual intervention (for example, grab
sampling and laboratory analysis) should be considered when selecting I1 or I2.
The final element of the risk graph is a determination of how often a demand is placed by the
process upon the analyser.
Process demands on the analyser are broadly classed as infrequent, average and frequent.
Categorization of process demand is primarily the responsibility of the process engineer and
not the analyser/instrument engineer.
Some examples of process demand, and frequency categorization are detailed as follows.
PD1 A frequent demand – can typically be considered to be one significantly exceeding the
single annual demand defined in PD2. The demand on either of the examples cited in
R3 is almost perpetual (the need to record environmentally critical data is considered
to place a continuous demand on the process).
PD2 An average demand – can typically be considered to be a single demand placed upon
a system on an annual basis. An example would be a high organic content in aqueous
effluent occurring on an annual basis (see consequence R2, failure of an organics in
aqueous effluent monitor to detect a high level, thus neglecting to divert the out-of-
specification effluent for further treatment before release to the surrounding
environment).
PD3 An infrequent event – can typically be illustrated by the demand rate placed upon a
CAM system such as the one described in R1 (a CAM system interlocked to process
valves, the overall function of which would be to detect emission of chemicals and
actuate the valves in order to contain the bulk of the process inventory). Such systems
are des
...
