Information technology - Artificial intelligence - Guidance on risk management (ISO/IEC 23894:2023)

This document provides guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations to integrate risk management into their AI-related activities and functions. It moreover describes processes for the effective implementation and integration of AI risk management.
The application of this guidance can be customized to any organization and its context.

Informationstechnik - Künstliche Intelligenz - Leitlinien für Risikomanagement (ISO/IEC 23894:2023)

Technologies de l’information - Intelligence artificielle - Recommandations relatives au management du risque (ISO/IEC 23894:2023)


Informacijska tehnologija - Umetna inteligenca - Smernice za obvladovanje tveganj (ISO/IEC 23894:2023)


General Information

Status
Published
Public Enquiry End Date
09-Jan-2024
Publication Date
11-Apr-2024
Technical Committee
UMI - Artificial intelligence
Current Stage
6060 - National Implementation/Publication (Adopted Project)
Start Date
06-Mar-2024
Due Date
11-May-2024
Completion Date
12-Apr-2024

Overview

EN ISO/IEC 23894:2024 (identical to ISO/IEC 23894:2023) provides practical guidance for organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI). The standard explains how to manage risks specifically related to AI, how to integrate risk management into AI activities and functions, and how to implement and adapt an AI risk management program to any organizational context. It aligns with and mirrors the structure of ISO 31000:2018 while adding AI-specific considerations.

Key topics

The standard is structured around principles, a risk management framework and detailed processes. Major technical topics and requirements include:

  • Principles of AI risk management - foundational concepts to guide decision‑making and value protection when using AI.
  • Framework elements - leadership and commitment; integration into organizational processes; design considerations such as:
    • understanding organizational context,
    • articulating risk management commitment,
    • assigning roles, authorities and accountabilities,
    • allocating resources,
    • establishing communication and consultation.
  • Risk management process - systematic steps for:
    • communication and consultation,
    • defining scope, context and risk criteria,
    • risk assessment (identification, analysis, evaluation),
    • risk treatment (selection of options, preparing and implementing treatment plans),
    • monitoring, review, recording and reporting.
  • AI-specific guidance - mapping risk activities to the AI system life cycle and describing common AI risk sources and objectives (see Annexes A–C).
  • Documentation and continual improvement - ensuring decisions, controls and monitoring are recorded and updated as AI systems evolve.
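The process steps above are often operationalized in a simple risk register. The sketch below is purely illustrative and not part of the standard: all class names, fields, scales and the treatment threshold are assumptions chosen to show how identification, analysis, evaluation and treatment can each map to an explicit, auditable operation.

```python
from dataclasses import dataclass, field

# Illustrative only: the standard prescribes a process, not a data model.
# Names, scales and thresholds below are assumptions.

@dataclass
class Risk:
    risk_id: str
    description: str          # risk identification
    source: str               # e.g. an AI risk source of the kind listed in Annex B
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    impact: int               # 1 (negligible) .. 5 (severe)
    treatment: str = "untreated"

    @property
    def level(self) -> int:
        # risk analysis: a simple likelihood x impact score
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    criteria_threshold: int = 10   # risk criteria: levels above this need treatment
    risks: list = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)

    def evaluate(self) -> list:
        # risk evaluation: compare analysed levels against the risk criteria
        return [r for r in self.risks if r.level > self.criteria_threshold]

    def treat(self, risk_id: str, plan: str) -> None:
        # risk treatment: record the selected option / treatment plan
        for r in self.risks:
            if r.risk_id == risk_id:
                r.treatment = plan

register = RiskRegister()
register.identify(Risk("R1", "Training data not representative of users",
                       source="data quality", likelihood=4, impact=4))
register.identify(Risk("R2", "Model card out of date", source="transparency",
                       likelihood=2, impact=2))
needs_treatment = register.evaluate()   # only R1 exceeds the threshold
register.treat("R1", "Re-sample data; add bias monitoring")
```

Monitoring, review, recording and reporting would extend this loop; the point is only that each process step leaves a traceable record.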

Applications and users

EN ISO/IEC 23894:2024 is aimed at organizations across sectors that build, deploy or procure AI-enabled systems. Typical users include:

  • AI/ML engineers, data scientists and system architects
  • Product managers and software development teams
  • Risk managers, compliance and governance teams
  • C-suite (CIO, CTO) and program owners responsible for AI adoption
  • Procurement and third‑party risk teams evaluating AI suppliers
  • Regulators and auditors seeking conformity with recognized AI risk practices

Practical applications include establishing AI governance, performing AI risk assessments, developing treatment plans (technical and organizational controls), and integrating risk processes into AI system life cycles.
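As a hypothetical illustration of the last point, integrating risk activities into a life cycle can start as a simple stage-to-activity mapping. The stage names and activity assignments below are assumptions that loosely echo the mapping idea of Annex C, not the annex's actual content.

```python
# Hypothetical mapping of AI life-cycle stages to risk activities.
# Stage and activity names are illustrative assumptions, not quoted from Annex C.
lifecycle_risk_map = {
    "design":      ["define scope, context and risk criteria"],
    "development": ["risk identification", "risk analysis"],
    "deployment":  ["risk evaluation", "risk treatment"],
    "operation":   ["monitoring and review", "recording and reporting"],
}

def activities_for(stage: str) -> list:
    """Return the risk activities planned for a life-cycle stage."""
    return lifecycle_risk_map.get(stage, [])
```

A gap in the map (an empty list for a stage) is itself a useful review finding when auditing life-cycle coverage.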

Related standards

  • ISO/IEC 23894:2023 (International edition)
  • EN ISO/IEC 23894:2024 (European adoption)
  • ISO 31000:2018 (Risk management - guidance; referenced and extended for AI-specific concerns)

For organizations implementing AI, EN ISO/IEC 23894:2024 offers a structured, adaptable approach to identify, evaluate and treat AI risks while supporting governance and continuous improvement.

Standard

SIST EN ISO/IEC 23894:2024

English language
34 pages

Frequently Asked Questions

SIST EN ISO/IEC 23894:2024 is a standard published by the Slovenian Institute for Standardization (SIST). Its full title is "Information technology - Artificial intelligence - Guidance on risk management (ISO/IEC 23894:2023)". Its scope: this document provides guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions. It moreover describes processes for the effective implementation and integration of AI risk management. The application of this guidance can be customized to any organization and its context.


SIST EN ISO/IEC 23894:2024 is classified under the following ICS (International Classification for Standards) categories: 03.100.01 - Company organization and management in general; 35.020 - Information technology (IT) in general. The ICS classification helps identify the subject area and facilitates finding related standards.

You can purchase SIST EN ISO/IEC 23894:2024 directly from iTeh Standards. The document is available in PDF format and is delivered instantly after payment. Add the standard to your cart and complete the secure checkout process. iTeh Standards is an authorized distributor of SIST standards.

Standards Content (Sample)


SLOVENIAN STANDARD
01-May-2024
Information technology - Artificial intelligence - Guidance on risk management (ISO/IEC 23894:2023)
This Slovenian standard is identical to: EN ISO/IEC 23894:2024
ICS:
03.100.01 - Company organization and management in general
35.020 - Information technology (IT) in general
© Slovenian Institute for Standardization. Reproduction in whole or in part of this standard is not permitted.

EUROPEAN STANDARD EN ISO/IEC 23894

NORME EUROPÉENNE
EUROPÄISCHE NORM
February 2024
ICS 35.020
English version
Information technology - Artificial intelligence - Guidance
on risk management (ISO/IEC 23894:2023)
Technologies de l'information - Intelligence artificielle Informationstechnik - Künstliche Intelligenz -
- Recommandations relatives au management du Leitlinien für Risikomanagement (ISO/IEC
risque (ISO/IEC 23894:2023) 23894:2023)
This European Standard was approved by CEN on 12 February 2024.

CEN and CENELEC members are bound to comply with the CEN/CENELEC Internal Regulations which stipulate the conditions for
giving this European Standard the status of a national standard without any alteration. Up-to-date lists and bibliographical
references concerning such national standards may be obtained on application to the CEN-CENELEC Management Centre or to
any CEN and CENELEC member.
This European Standard exists in three official versions (English, French, German). A version in any other language made by
translation under the responsibility of a CEN and CENELEC member into its own language and notified to the CEN-CENELEC
Management Centre has the same status as the official versions.

CEN and CENELEC members are the national standards bodies and national electrotechnical committees of Austria, Belgium,
Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy,
Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Romania, Serbia,
Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and United Kingdom.

CEN-CENELEC Management Centre: Rue de la Science 23, B-1040 Brussels
© 2024 CEN/CENELEC. All rights of exploitation in any form and by any means reserved worldwide for CEN national Members and for CENELEC Members.
Ref. No. EN ISO/IEC 23894:2024 E
Contents
European foreword ... 3

European foreword
The text of ISO/IEC 23894:2023 has been prepared by Technical Committee ISO/IEC JTC 1 "Information
technology" of the International Organization for Standardization (ISO) and has been taken over as
EN ISO/IEC 23894:2024 by Technical Committee CEN-CLC/JTC 21 "Artificial Intelligence", the
secretariat of which is held by DS.
This European Standard shall be given the status of a national standard, either by publication of an
identical text or by endorsement, at the latest by August 2024, and conflicting national standards shall
be withdrawn at the latest by August 2024.
Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. CEN-CENELEC shall not be held responsible for identifying any or all such patent rights.
Any feedback and questions on this document should be directed to the users’ national standards body.
A complete listing of these bodies can be found on the CEN and CENELEC websites.
According to the CEN-CENELEC Internal Regulations, the national standards organizations of the
following countries are bound to implement this European Standard: Austria, Belgium, Bulgaria,
Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland,
Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of
North Macedonia, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and the
United Kingdom.
Endorsement notice
The text of ISO/IEC 23894:2023 has been approved by CEN-CENELEC as EN ISO/IEC 23894:2024
without any modification.
INTERNATIONAL STANDARD ISO/IEC 23894
First edition
2023-02
Information technology — Artificial intelligence — Guidance on risk management
Technologies de l’information — Intelligence artificielle — Recommandations relatives au management du risque
Reference number: ISO/IEC 23894:2023(E)
© ISO/IEC 2023
ISO/IEC 23894:2023(E)
© ISO/IEC 2023
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
© ISO/IEC 2023 – All rights reserved

Contents

Foreword ... iv
Introduction ... v
1 Scope ... 1
2 Normative references ... 1
3 Terms and definitions ... 1
4 Principles of AI risk management ... 1
5 Framework ... 5
5.1 General ... 5
5.2 Leadership and commitment ... 5
5.3 Integration ... 6
5.4 Design ... 6
5.4.1 Understanding the organization and its context ... 6
5.4.2 Articulating risk management commitment ... 8
5.4.3 Assigning organizational roles, authorities, responsibilities and accountabilities ... 8
5.4.4 Allocating resources ... 8
5.4.5 Establishing communication and consultation ... 8
5.5 Implementation ... 9
5.6 Evaluation ... 9
5.7 Improvement ... 9
5.7.1 Adapting ... 9
5.7.2 Continually improving ... 9
6 Risk management process ... 9
6.1 General ... 9
6.2 Communication and consultation ... 9
6.3 Scope, context and criteria ... 9
6.3.1 General ... 9
6.3.2 Defining the scope ... 10
6.3.3 External and internal context ... 10
6.3.4 Defining risk criteria ... 10
6.4 Risk assessment ... 11
6.4.1 General ... 11
6.4.2 Risk identification ... 11
6.4.3 Risk analysis ... 14
6.4.4 Risk evaluation ... 15
6.5 Risk treatment ... 15
6.5.1 General ... 15
6.5.2 Selection of risk treatment options ... 15
6.5.3 Preparing and implementing risk treatment plans ... 16
6.6 Monitoring and review ... 16
6.7 Recording and reporting ... 16
Annex A (informative) Objectives ... 18
Annex B (informative) Risk sources ... 21
Annex C (informative) Risk management and AI system life cycle ... 24
Bibliography ... 26

ISO/IEC 23894:2023(E)
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical
activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international
organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the
work.
The procedures used to develop this document and those intended for its further maintenance
are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria
needed for the different types of document should be noted. This document was drafted in
accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or
www.iec.ch/members_experts/refdocs).
Attention is drawn to the possibility that some of the elements of this document may be the subject
of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent
rights. Details of any patent rights identified during the development of the document will be in the
Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents) or the IEC
list of patent declarations received (see https://patents.iec.ch).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see
www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.

ISO/IEC 23894:2023(E)
Introduction
The purpose of risk management is the creation and protection of value. It improves performance,
encourages innovation and supports the achievement of objectives.
This document is intended to be used in connection with ISO 31000:2018. Whenever this document
extends the guidance given in ISO 31000:2018, an appropriate reference to the clauses of ISO 31000:2018
is made followed by AI-specific guidance, if applicable. To make the relationship between this document
and ISO 31000:2018 more explicit, the clause structure of ISO 31000:2018 is mirrored in this document
and amended by sub-clauses if needed.
This document is divided into three main parts:
Clause 4: Principles – This clause describes the underlying principles of risk management. The use of AI
requires specific considerations with regard to some of these principles as described in ISO 31000:2018,
Clause 4.
Clause 5: Framework – The purpose of the risk management framework is to assist the organization
in integrating risk management into significant activities and functions. Aspects specific to the
development, provisioning or offering, or use of AI systems are described in ISO 31000:2018, Clause 5.
Clause 6: Processes – Risk management processes involve the systematic application of policies,
procedures and practices to the activities of communicating and consulting, establishing the context,
and assessing, treating, monitoring, reviewing, recording and reporting risk. A specialization of such
processes to AI is described in ISO 31000:2018, Clause 6.
Common AI-related objectives and risk sources are provided in Annex A and Annex B. Annex C provides
an example mapping between the risk management processes and an AI system life cycle.

INTERNATIONAL STANDARD ISO/IEC 23894:2023(E)
Information technology — Artificial intelligence —
Guidance on risk management
1 Scope
This document provides guidance on how organizations that develop, produce, deploy or use products,
systems and services that utilize artificial intelligence (AI) can manage risk specifically related to
AI. The guidance also aims to assist organizations to integrate risk management into their AI-related
activities and functions. It moreover describes processes for the effective implementation and
integration of AI risk management.
The application of this guidance can be customized to any organization and its context.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
ISO 31000:2018, Risk management — Guidelines
ISO Guide 73:2009, Risk management — Vocabulary
ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts
and terminology
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO 31000:2018,
ISO/IEC 22989:2022 and ISO Guide 73:2009 apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
4 Principles of AI risk management
Risk management should address the needs of the organization using an integrated, structured and
comprehensive approach. Guiding principles allow an organization to identify priorities and make
decisions on how to manage the effects of uncertainty on its objectives. These principles apply to all
organizational levels and objectives, whether strategic or operational.
Systems and processes usually deploy a combination of various technologies and functionalities
in various environments, for specific use cases. Risk management should take into account the
whole system, with all its technologies and functionalities, and its impact on the environment and
stakeholders.
AI systems can introduce new or emergent risks for an organization, with positive or negative
consequences on objectives, or changes in the likelihood of existing risks. They can also necessitate
specific consideration by the organization. This document provides additional guidance for the risk
management principles, framework and processes an organization can implement.
NOTE Different International Standards have significantly different definitions of the word “risk.” In
ISO 31000:2018 and related International Standards, “risk” involves a negative or positive deviation from the
objectives. In some other International Standards, “risk” involves potential negative outcomes only, for example,
safety-related concerns. This difference in focus can often cause confusion when trying to understand and
properly implement a conformant risk management process.
ISO 31000:2018, Clause 4 defines several generic principles for risk management. In addition to guidance
in ISO 31000:2018, Clause 4, Table 1 provides further guidance on how to apply such principles where
necessary.
Table 1 — Risk management principles applied to artificial intelligence

a) Integrated
Description (as given in ISO 31000:2018, Clause 4): Risk management is an integral part of all organizational activities.
Implications for the development and use of AI: No specific guidance beyond ISO 31000:2018.

b) Structured and comprehensive
Description: A structured and comprehensive approach to risk management contributes to consistent and comparable results.
Implications: No specific guidance beyond ISO 31000:2018.

c) Customized
Description: The risk management framework and process are customized and proportionate to the organization’s external and internal context related to its objectives.
Implications: No specific guidance beyond ISO 31000:2018.
d) Inclusive
Description: Appropriate and timely involvement of stakeholders enables their knowledge, views and perceptions to be considered. This results in improved awareness and informed risk management.
Implications: Because of the potentially far-reaching impacts of AI on stakeholders, it is important that organizations seek dialog with diverse internal and external groups, both to communicate harms and benefits, and to incorporate feedback and awareness into the risk management process.
Organizations should also be aware that the use of AI systems can introduce additional stakeholders.
The areas in which the knowledge, views and perceptions of stakeholders are of benefit include, but are not restricted to:
— Machine learning (ML) in particular often relies on the set of data appropriate to fulfil its objectives. Stakeholders can help in the identification of risks regarding the data collection, the processing operations, the source and type of data, and the use of the data for particular situations or where the data subjects can be outliers.
— The complexity of AI technologies creates challenges related to transparency and explainability of AI systems. The diversity of AI technologies further drives these challenges due to characteristics such as multiple types of data modalities, AI model topologies, and transparency and reporting mechanisms that should be selected per stakeholders’ needs. Stakeholders can help to identify the goals and describe the means for enhancing transparency and explainability of AI systems. In certain cases, these goals and means can be generalized across the use case and the different stakeholders involved. In other cases, stakeholder segmentation of transparency frameworks and reporting mechanisms can be tailored to relevant personas (e.g. “regulators”, “business owners”, “model risk evaluators”) per the use case.
— Using AI systems for automated decision-making can directly affect internal and external stakeholders. Such stakeholders can provide their views and perceptions concerning, for example, where human oversight can be needed. Stakeholders can help in defining fairness criteria and also help to identify what constitutes bias in the working of the AI system.
e) Dynamic
Description: Risks can emerge, change or disappear as an organization’s external and internal context changes. Risk management anticipates, detects, acknowledges and responds to those changes and events in an appropriate and timely manner.
Implications: To implement the guidance provided by ISO 31000:2018, organizations should establish organizational structures and measures to identify issues and opportunities related to emerging risks, trends, technologies, uses and actors related to AI systems.
Dynamic risk management is particularly important for AI systems because:
— The nature of AI systems is itself dynamic, due to continuous learning, refining, evaluating and validating. Additionally, some AI systems have the ability to adapt and optimize based on this loop, creating dynamic changes on their own.
— Customer expectations around AI systems are high and can potentially change quickly as the systems themselves do.
— Legal and regulatory requirements related to AI are frequently changing and being updated.
Integration with the management systems on quality, environmental footprints, safety, healthcare, legal or corporate responsibility, or any combination of these maintained by the organization, can also be considered to further understand and manage AI-related risks to the organization, individuals and societies.

f) Best available information
Description: The inputs to risk management are based on historical and current information, as well as on future expectations. Risk management explicitly takes into account any limitations and uncertainties associated with such information and expectations. Information should be timely, clear and available to relevant stakeholders.
Implications: Taking into account the expectation that AI affects the way individuals interact with and react to technology, it is advisable for organizations engaged in the development of AI systems to keep track of relevant information available regarding the further uses of the AI systems that they developed, while users of AI systems can maintain records of the uses of those systems throughout the entire lifetime of the AI system.
As AI is an emerging technology and constantly evolving, historical information can be limited, and future expectations can change quickly. Organizations should take this into account.
The internal use of AI systems should be considered, if any. Tracking the use of AI systems by customers and external users can be limited by intellectual property, contractual or market-specific restrictions. Such restrictions should be captured in the AI risk management process and updated when business conditions warrant revisiting.
g) Human and cultural factors
Description: Human behaviour and culture significantly influence all aspects of risk management at each level and stage.
Implications: Organizations engaged in the design, development or deployment of AI systems, or any combination of these, should monitor the human and cultural landscape in which they are situated. Organizations should focus on identifying how AI systems or components interact with pre-existing societal patterns that can lead to impacts on equitable outcomes, privacy, freedom of expression, fairness, safety, security, employment, the environment, and human rights broadly.

h) Continual improvement
Description: Risk management is continually improved through learning and experience.
Implications: The identification of previously unknown risks related to the use of AI systems should be considered in the continual improvement process. Organizations engaged in the design, development or deployment of AI systems or system components, or any combination of these, should monitor the AI ecosystem for performance successes, shortcomings and lessons learned, and maintain awareness of new AI research findings and techniques (opportunities for improvement).
5 Framework
5.1 General
The purpose of the risk management framework is to assist the organization in integrating risk
management into significant activities and functions. The guidance provided in ISO 31000:2018, 5.1
applies.
Risk management involves assembling relevant information for an organization to make decisions and
address risk. While the governing body defines the overall risk appetite and organizational objectives,
it delegates the decision-making process of identifying, assessing and treating risk to management
within the organization.
ISO/IEC 38507[1] describes additional governance considerations for the organization regarding
the development, purchase or use of an AI system. Such considerations include new opportunities,
potential changes to the risk appetite as well as new governance policies to ensure the responsible use
of AI by the organization. It can be used in combination with the risk management processes described
in this document to help guide the dynamic and iterative organizational integration described in
ISO 31000:2018, 5.2.
5.2 Leadership and commitment
The guidance provided in ISO 31000:2018, 5.2 applies.
In addition to the guidance provided in ISO 31000:2018, 5.2, the following applies:
Due to the particular importance of trust and accountability related to the development and use of AI,
top management should consider how policies and statements related to AI risks and risk management
are communicated to stakeholders. Demonstrating this level of leadership and commitment can be
critical for ensuring that stakeholders have confidence that AI is being developed and used responsibly.
The organization should therefore consider issuing statements related to its commitment to AI risk
management to increase confidence of their stakeholders on their use of AI.
Top management should also be aware of the specialized resources that can be needed to manage AI
risk, and allocate those resources appropriately.
5.3 Integration
The guidance provided in ISO 31000:2018, 5.3 applies.
5.4 Design
5.4.1 Understanding the organization and its context
The guidance provided in ISO 31000:2018, 5.4.1 applies.
In addition to guidance provided in ISO 31000:2018, 5.4.1, Table 2 lists additional factors to consider
when understanding the external context of an organization.
Table 2 — Considerations when establishing the external context of an organization

Generic guidance provided by ISO 31000:2018, 5.4.1: organizations should consider at least the following elements of their external context. Additional guidance for organizations engaged in AI follows each element.

— The social, cultural, political, legal, regulatory, financial, technological, economic and environmental factors, whether international, national, regional or local.
AI-specific guidance: organizations should additionally consider, but not exclusively, relevant legal requirements, including those specifically relating to AI; guidelines on ethical use and design of AI and automated systems issued by government-related groups, regulators, standardization bodies, civil society, academia and industry associations; and domain-specific guidelines and frameworks related to AI.

— Key drivers and trends affecting the objectives of the organization.
AI-specific guidance: technology trends and advancements in the various areas of AI; societal and political implications of the deployment of AI systems, including guidance from social sciences.

— External stakeholders’ relationships, perceptions, values, needs and expectations.
AI-specific guidance: stakeholder perceptions, which can be affected by issues such as lack of transparency (also referred to as opaqueness) of AI systems or biased AI systems; stakeholder expectations on the availability of specific AI-based solutions and the means by which the AI models are made available (e.g. through a user interface, software development kit).

— Contractual relationships and commitments.
AI-specific guidance: how the use of AI, especially AI systems using continuous learning, can affect the ability of the organization to meet contractual obligations and guarantees; consequently, organizations should carefully consider the scope of relevant contracts. Contractual relationships during the design and production of AI systems and services should also be considered; for example, ownership and usage rights of test and training data should be considered when provided by third parties.

— The complexity of networks and dependencies.
AI-specific guidance: the use of AI can increase the complexity of networks and dependencies.
Table 2 (continued)

Generic guidance: (guidance beyond ISO 31000:2018).
Additional guidance for AI:
— An AI system can replace an existing system and, in such a case, an assessment of the risk benefits and risk transfers of an AI system versus the existing system can be undertaken, considering safety, environmental, social, technical and financial issues associated with the implementation of the AI system.
In addition to guidance provided in ISO 31000:2018, 5.4.1, Table 3 lists additional factors to consider
when understanding the internal context of an organization.
Table 3 — Considerations when establishing the internal context of an organization

Generic guidance provided by ISO 31000:2018, 5.4.1: organizations should consider at least the following elements of their internal context. Additional guidance for organizations engaged in AI: organizations should additionally consider, but not exclusively, the elements listed under each item below.

Generic guidance: Vision, mission and values.
Additional guidance for AI: No specific guidance beyond ISO 31000:2018.

Generic guidance: Governance, organizational structure, roles and accountabilities.
Additional guidance for AI: No specific guidance beyond ISO 31000:2018.

Generic guidance: Strategy, objectives and policies.
Additional guidance for AI: No specific guidance beyond ISO 31000:2018.

Generic guidance: The organization’s culture.
Additional guidance for AI:
— The effect that an AI system can have on the organization’s culture by shifting and introducing new responsibilities, roles and tasks.

Generic guidance: Standards, guidelines and models adopted by the organization.
Additional guidance for AI:
— Any additional international, regional, national and local standards and guidelines that are imposed by the use of AI systems.

Generic guidance: Capabilities, understood in terms of resources and knowledge (e.g. capital, time, people, intellectual property, processes, systems and technologies).
Additional guidance for AI:
— The additional risks to organizational knowledge related to transparency and explainability of AI systems.
— The use of AI systems can result in changes to the number of human resources needed to realize a certain capability, or in a variation of the type of resources needed, for instance, deskilling or loss of expertise where human decision-making is increasingly supported by AI systems.
— The specific knowledge in AI technologies and data science required to develop and use AI systems.
— The availability of AI tools, platforms and libraries can enable the development of AI systems without there being a full understanding of the technology, its limitations and potential pitfalls.
— The potential for AI to raise issues and opportunities related to intellectual property for specific AI systems. Organizations should consider their own intellectual property in this area and ways that intellectual property can affect transparency, security and the ability to collaborate with stakeholders, to determine whether any steps should be taken.
Table 3 (continued)

Generic guidance: Data, information systems and information flows.
Additional guidance for AI:
— AI systems can be used to automate, optimize and enhance data handling.
— As consumers of data, AI systems can impose additional quality and completeness constraints on data and information.

Generic guidance: Relationships with internal stakeholders, taking into account their perceptions and values.
Additional guidance for AI:
— Stakeholder perception, which can be affected by issues such as lack of transparency of AI systems or biased AI systems.
— Stakeholder needs and expectations can be satisfied to a greater extent by specific AI systems.
— The need for stakeholders to be educated on capabilities, failure modes and failure management of AI systems.

Generic guidance: Contractual relationships and commitments.
Additional guidance for AI:
— Stakeholder perception, which can be affected by different challenges associated with AI systems, such as potential lack of transparency and unfairness.
— Stakeholder needs and expectations can be satisfied by specific AI systems.
— The need for stakeholders to be educated on capabilities, failure modes and failure management of AI systems.
— Stakeholders’ expectations of privacy, and individual and collective fundamental rights and freedoms.

Generic guidance: Interdependencies and interconnections.
Additional guidance for AI:
— The use of AI systems can increase the complexity of interdependencies and interconnections.
In addition to the guidance provided in ISO 31000:2018, 5.4.1, organizations should consider that the
use of AI systems can increase the need for specialized training.
5.4.2 Articulating risk management commitment
The guidance provided in ISO 31000:2018, 5.4.2 applies.
5.4.3 Assigning organizational roles, authorities, responsibilities and accountabilities
The guidance provided in ISO 31000:2018, 5.4.3 applies.
In addition to the guidance of ISO 31000:2018, 5.4.3, top management and oversight bodies, where
applicable, should allocate resources and identify individuals:
— with authority to address AI risks;
— with responsibility for establishing and monitoring processes to address AI risks.
5.4.4 Allocating resources
The guidance provided in ISO 31000:2018, 5.4.4 applies.
5.4.5 Establishing communication and consultation
The guidance provided in ISO 31000:2018, 5.4.5 applies.
5.5 Implementation
The guidance provided in ISO 31000:2018, 5.5 applies.
5.6 Evaluation
The guidance provided in ISO 31000:2018, 5.6 applies.
5.7 Improvement
5.7.1 Adapting
The guidance provided in ISO 31000:2018, 5.7.1 applies.
5.7.2 Continually improving
The guidance provided in ISO 31000:2018, 5.7.2 applies.
6 Risk management process
6.1 General
The guidance provided in ISO 31000:2018, 6.1 applies.
Organizations should implement a risk-based approach to identifying, assessing, and understanding
the AI risks to which they are exposed and take appropriate treatment measures according to the
level of risk. The success of the overall AI risk management process of an organization relies on the
identification, establishment and successful implementation of narrowly scoped risk management
processes at the strategic, operational, programme and project levels. Due to concerns related but not
limited to the potential complexity, lack of transparency and unpredictability of some AI-based
technologies, particular consideration should be given to risk management processes at the AI system
project level. These system project level processes should be aligned with the organization’s objectives
and should be both informed by and inform other levels of risk management. For example, escalations
and lessons learned at the AI project level should be incorporated at the higher levels, such as the
strategic, operational and programme levels, and others as applicable.
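The layering and escalation described above can be sketched in code. This is an illustrative sketch only: ISO/IEC 23894 prescribes no data model, so the level names merely mirror the levels named in the text, and the class, field and function names, the severity scale and the escalation threshold are all assumptions.

```python
from dataclasses import dataclass

# Levels named in the text, ordered from most narrowly to most broadly scoped.
LEVELS = ["project", "programme", "operational", "strategic"]

@dataclass
class Risk:
    description: str
    level: str      # one of LEVELS
    severity: int   # 1 (low) .. 5 (high); the scale is an assumption

def escalate(risk: Risk, threshold: int = 4) -> Risk:
    """Promote a risk one level up when its severity meets the (illustrative)
    escalation threshold, so lessons at the AI project level are incorporated
    at higher levels; otherwise leave it where it is."""
    idx = LEVELS.index(risk.level)
    if risk.severity >= threshold and idx < len(LEVELS) - 1:
        return Risk(risk.description, LEVELS[idx + 1], risk.severity)
    return risk
```

In this sketch a severe project-level risk (e.g. one affecting contractual guarantees) is surfaced at the programme level, while minor risks remain managed where they arose.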
The scope, context and criteria of a project-level risk management process are directly affected by
the stages of the AI system’s life cycle that are in the scope of the project. Annex C shows possible
relations between a project-level risk management process and an AI system life cycle (as defined in
ISO/IEC 22989:2022).
6.2 Communication and consultation
The guidance provided in ISO 31000:2018, 6.2 applies.
The set of stakeholders that can be affected by AI systems can be larger than initially foreseen; it can
include otherwise unconsidered external stakeholders and can extend to other parts of society.
6.3 Scope, context and criteria
6.3.1 General
The guidance provided in ISO 31000:2018, 6.3.1 applies.
In addition to the guidance provided in ISO 31000:2018, 6.3.1, organizations using AI should extend
the scope of AI risk management, the context of the AI risk management process and the criteria used
to evaluate the significance of risk in support of decision-making, so as to identify where AI
systems are being developed or used in the organization. Such an inventory of AI development and use
should be documented and included in the organization’s risk management process.
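One way to document such an inventory can be sketched as a simple record per AI system. The standard asks for a documented inventory but prescribes no format, so every field name and value here is an assumption, chosen to reflect concerns raised elsewhere in this clause (system purpose, life cycle stage, responsible role, continuous learning).

```python
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system developed or used
    by the organization; all fields are illustrative."""
    name: str
    purpose: str                     # objectives of the system (cf. 6.3.2)
    lifecycle_stage: str             # e.g. "design", "deployment"
    owner: str                       # responsible role (cf. 5.4.3)
    uses_continuous_learning: bool   # flagged in Table 2 (contractual impact)

# An inventory is then simply a documented list of such records.
inventory = [
    AISystemRecord(
        name="demand-forecaster",
        purpose="forecast weekly demand for planning",
        lifecycle_stage="deployment",
        owner="Head of Analytics",
        uses_continuous_learning=True,
    ),
]
```

Keeping the entries structured (rather than as free text) makes it straightforward to include the inventory in the wider risk management process, e.g. by filtering for deployed systems or for systems that learn continuously.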
6.3.2 Defining the scope
The guidance provided in ISO 31000:2018, 6.3.2 applies.
The scope should take the specific tasks and responsibilities of the different levels of an organization
into account. Moreover, the objectives and purpose of the AI systems developed or used by the
organization should be considered.
6.3.3 External and internal context
The guidance provided in ISO 31000:2018, 6.3.3 applies.
Because of the magnitude of pot
...
