Information technology - Artificial intelligence - Overview of ethical and societal concerns

This document provides a high-level overview of AI ethical and societal concerns. In addition, this document:

  • provides information in relation to principles, processes and methods in this area;
  • is intended for technologists, regulators, interest groups, and society at large;
  • is not intended to advocate for any specific set of values (value systems).

This document includes an overview of International Standards that address issues arising from AI ethical and societal concerns.

General Information

Status
Published
Publication Date
18-Aug-2022
Current Stage
6060 - International Standard published
Start Date
19-Aug-2022
Completion Date
19-Aug-2022

Overview

ISO/IEC TR 24368:2022 - Information technology - Artificial intelligence - Overview of ethical and societal concerns - is a Technical Report from ISO/IEC JTC 1/SC 42 that provides a high‑level, non‑prescriptive overview of AI ethics, ethical frameworks, and societal concerns. The report is intended for technologists, regulators, interest groups and the public, and summarizes principles, processes and methods useful for aligning AI design, development and deployment with ethical and societal expectations. It explicitly does not advocate a specific value system and includes an overview of related international standards.

Key topics (guidance, not prescriptive requirements)

The document organizes and explains core themes and practical considerations for ethical AI. Major topics include:

  • Ethical frameworks (virtue ethics, utilitarianism, deontology) and their relevance to AI decision‑making
  • Human rights practices and international human‑rights considerations for AI systems
  • Themes and principles such as:
    • Accountability
    • Fairness and non‑discrimination
    • Transparency and explainability
    • Privacy
    • Safety and security
    • Human control of technology
    • Professional responsibility
    • Human‑centred design
    • Respect for rule of law and international norms
    • Environmental sustainability and labour practices
  • Practical practices and processes: aligning organizational processes to AI principles, defining prohibited applications, ethics review workflows, training in AI ethics, and impact assessment approaches
  • Use‑case examples and annexes to illustrate application of principles in real contexts
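The ethics review workflow summarized above is described in the report in prose only. Purely as an illustrative sketch (no code or data model appears in the report, and every name below is hypothetical), such a review could be tracked with a simple record:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EthicsReview:
    """Hypothetical record for one pass through an AI ethics review workflow."""
    project: str
    issue: str                                             # identify an ethical issue
    facts: List[str] = field(default_factory=list)         # get the facts
    alternatives: List[str] = field(default_factory=list)  # list and evaluate alternative actions
    decision: str = ""                                     # make a decision and act on it
    outcome_notes: str = ""                                # act and reflect on the outcome

    def is_complete(self) -> bool:
        # A review is complete only when every step has been recorded.
        return bool(self.facts and self.alternatives
                    and self.decision and self.outcome_notes)

review = EthicsReview(project="support chatbot", issue="possible biased responses")
print(review.is_complete())  # False: the later steps are not yet recorded
```

The comments mirror the review steps the report lists (identify an issue, get the facts, evaluate alternatives, decide and act, reflect on the outcome).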

Note: ISO/IEC TR 24368:2022 is a Technical Report offering guidance and references (for example, it cites ISO/IEC 22989 on AI concepts and terminology) rather than mandatory technical requirements.

Practical applications

Organizations can use this report to:

  • Establish or refine AI governance and ethical review processes
  • Develop organizational AI principles, policies and training programs
  • Guide risk‑based impact assessments for privacy, fairness, safety and security
  • Inform procurement, product design and deployment choices to reduce societal harms
  • Support regulators and policy makers with common terminology and a map of ethical concerns
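For the risk-based impact assessments mentioned above, the report prescribes no scoring scheme. The sketch below ranks example harm areas with a common likelihood-times-severity risk matrix; the scales and numbers are invented, not taken from the report:

```python
# Hypothetical 1-5 likelihood and severity ratings for example harm areas.
harms = {
    "financial harm":        (2, 4),
    "psychological harm":    (3, 3),
    "privacy breach":        (4, 5),
    "election interference": (1, 5),
}

def risk_score(likelihood: int, severity: int) -> int:
    # Simple multiplicative score, as used in many risk matrices.
    return likelihood * severity

# Rank harm areas from highest to lowest score.
ranked = sorted(harms, key=lambda h: risk_score(*harms[h]), reverse=True)
print(ranked[0])  # privacy breach (score 4 * 5 = 20)
```

In practice an organization would substitute its own harm taxonomy and calibrated scales; the point is only that a simple, auditable ordering can drive where review effort goes first.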

Who should use this standard

  • AI developers, data scientists and system architects
  • Corporate governance, compliance and legal teams
  • Public sector regulators, standards bodies and policy makers
  • Civil society groups and multidisciplinary ethics committees

Related standards

  • Normative reference in the report: ISO/IEC 22989 (AI concepts and terminology)
  • The report also maps to broader ISO/IEC work by JTC 1/SC 42 on AI governance and trustworthy AI.

Keywords: ISO/IEC TR 24368:2022, AI ethics, ethical and societal concerns, AI standards, transparency, accountability, fairness, AI governance, human‑centred design.

Technical report
ISO/IEC TR 24368:2022 — Information technology — Artificial intelligence — Overview of ethical and societal concerns. Released: 19 August 2022
English language
48 pages

Frequently Asked Questions

ISO/IEC TR 24368:2022 is a technical report published jointly by ISO and the International Electrotechnical Commission (IEC). Its full title is "Information technology - Artificial intelligence - Overview of ethical and societal concerns". The document provides a high-level overview of AI ethical and societal concerns; provides information in relation to principles, processes and methods in this area; is intended for technologists, regulators, interest groups, and society at large; and is not intended to advocate for any specific set of values (value systems). It also includes an overview of International Standards that address issues arising from AI ethical and societal concerns.

ISO/IEC TR 24368:2022 is classified under the following ICS (International Classification for Standards) categories: 35.020 - Information technology (IT) in general. The ICS classification helps identify the subject area and facilitates finding related standards.

You can purchase ISO/IEC TR 24368:2022 directly from iTeh Standards. The document is available in PDF format and is delivered instantly after payment. Add the standard to your cart and complete the secure checkout process. iTeh Standards is an authorized distributor of ISO standards.

Standards Content (Sample)


TECHNICAL REPORT
ISO/IEC TR 24368
First edition
2022-08
Information technology — Artificial
intelligence — Overview of ethical and
societal concerns
© ISO/IEC 2022
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
© ISO/IEC 2022 – All rights reserved

Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Overview
4.1 General
4.2 Fundamental sources
4.3 Ethical frameworks
4.3.1 General
4.3.2 Virtue ethics
4.3.3 Utilitarianism
4.3.4 Deontology
5 Human rights practices
5.1 General
6 Themes and principles
6.1 General
6.2 Description of key themes and associated principles
6.2.1 Accountability
6.2.2 Fairness and non-discrimination
6.2.3 Transparency and explainability
6.2.4 Professional responsibility
6.2.5 Promotion of human values
6.2.6 Privacy
6.2.7 Safety and security
6.2.8 Human control of technology
6.2.9 Community involvement and development
6.2.10 Human centred design
6.2.11 Respect for the rule of law
6.2.12 Respect for international norms of behaviour
6.2.13 Environmental sustainability
6.2.14 Labour practices
7 Examples of practices for building and using ethical and socially acceptable AI
7.1 Aligning internal process to AI principles
7.1.1 General
7.1.2 Defining ethical AI principles the organization can adopt
7.1.3 Defining applications the organization cannot pursue
7.1.4 Review process for new projects
7.1.5 Training in AI ethics
7.2 Considerations for ethical review framework
7.2.1 Identify an ethical issue
7.2.2 Get the facts
7.2.3 List and evaluate alternative actions
7.2.4 Make a decision and act on it
7.2.5 Act and reflect on the outcome
8 Considerations for building and using ethical and socially acceptable AI
8.1 General
8.2 Non-exhaustive list of ethical and societal considerations
8.2.1 General
8.2.2 International human rights
8.2.3 Accountability
8.2.4 Fairness and non-discrimination
8.2.5 Transparency and explainability
8.2.6 Professional responsibility
8.2.7 Promotion of human values
8.2.8 Privacy
8.2.9 Safety and security
8.2.10 Human control of technology
8.2.11 Community involvement and development
8.2.12 Human centred design
8.2.13 Respect for the rule of law
8.2.14 Respect for international norms of behaviour
8.2.15 Environmental sustainability
8.2.16 Labour practices
Annex A (informative) AI principles documents
Annex B (informative) Use case studies
Bibliography

Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical
activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international
organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the
work.
The procedures used to develop this document and those intended for its further maintenance
are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria
needed for the different types of document should be noted. This document was drafted in
accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or
www.iec.ch/members_experts/refdocs).
Attention is drawn to the possibility that some of the elements of this document may be the subject
of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent
rights. Details of any patent rights identified during the development of the document will be in the
Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents) or the IEC
list of patent declarations received (see https://patents.iec.ch).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see
www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.

Introduction
Artificial intelligence (AI) has the potential to revolutionise the world and carry a plethora of
benefits for societies, organizations and individuals. However, AI can introduce substantial risks and
uncertainties. Professionals, researchers, regulators and individuals need to be aware of the ethical
and societal concerns associated with AI systems and applications.
Potential ethical concerns in AI are wide-ranging. Examples of ethical and societal concerns in AI
range from privacy and security breaches to discriminatory outcomes and impacts on human autonomy.
Sources of ethical and societal concerns include but are not limited to:
— unauthorized means or measures of collection, processing or disclosing personal data;
— the procurement and use of biased, inaccurate or otherwise non-representative training data;
— opaque machine learning (ML) decision-making or insufficient documentation, commonly referred
to as lack of explainability;
— lack of traceability;
— insufficient understanding of the social impacts of technology post-deployment.
AI can operate unfairly particularly when trained on biased or inappropriate data or where the model
or algorithm is not fit-for-purpose. The values embedded in algorithms, as well as the choice of problems
that AI systems and applications are used to address, can be intentionally or inadvertently shaped by
developers’ and stakeholders’ own worldviews and cognitive biases.
Future development of AI can expand existing systems and applications to grow into new fields and
increase the level of automation of these systems. Addressing ethical and societal concerns
has not kept pace with the rapid evolution of AI. Consequently, AI designers, developers, deployers
and users can benefit from flexible input on ethical frameworks, AI principles, tools and methods for
risk mitigation, evaluation of ethical factors, best practices for testing, impact assessment and ethics
reviews. This can be addressed through an inclusive, interdisciplinary, diverse and cross-sectoral
approach, including all AI stakeholders, aided by International Standards that address issues arising
from AI ethical and societal concerns, including work by Joint Technical Committee ISO/IEC JTC 1, SC
42.

TECHNICAL REPORT ISO/IEC TR 24368:2022(E)
Information technology — Artificial intelligence —
Overview of ethical and societal concerns
1 Scope
This document provides a high-level overview of AI ethical and societal concerns.
In addition, this document:
— provides information in relation to principles, processes and methods in this area;
— is intended for technologists, regulators, interest groups, and society at large;
— is not intended to advocate for any specific set of values (value systems).
This document includes an overview of International Standards that address issues arising from AI
ethical and societal concerns.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 22989, Information technology — Artificial intelligence — Artificial intelligence concepts and
terminology
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO/IEC 22989 and the following
apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
agency
ability to define one's goals and act upon them
[SOURCE: ISO/TR 21276:2018, 3.6.2]
3.2
bias
systematic difference in treatment (3.13) of certain objects, people, or groups in comparison to others
[SOURCE: ISO/IEC TR 24027:2021, 3.2.2, modified — Removed Note to entry.]
3.3
data management
process of keeping track of all data and/or information related to the creation, production, distribution,
storage, disposal and use of e-media, and associated processes
[SOURCE: ISO 20294:2018, 3.5.4, modified — Added "disposal" to definition.]
3.4
data protection
legal, administrative, technical or physical measures taken to avoid unauthorized access to and use of
data
[SOURCE: ISO 5127:2017, 3.13.5.01, modified — Removed Note to entry.]
3.5
equality
state of being equal, especially in status, rights or opportunities
[SOURCE: ISO 30415:2021, 3.9, modified — Removed "outcome" from definition.]
3.6
equity
practice of eliminating avoidable or remediable differences among groups of people, whether those
groups are defined socially, economically, demographically or geographically
3.7
fairness
treatment (3.13), behaviour or outcomes that respect established facts, societal norms and beliefs and
are not determined or affected by favouritism or unjust discrimination
Note 1 to entry: Considerations of fairness are highly contextual and vary across cultures, generations,
geographies and political opinions.
Note 2 to entry: Fairness is not the same as the lack of bias (3.2). Bias does not always result in unfairness and
unfairness can be caused by factors other than bias.
3.8
cognitive bias
human cognitive bias
bias (3.2) that occurs when humans are processing and interpreting information
Note 1 to entry: Human cognitive bias influences judgement and decision-making.
[SOURCE: ISO/IEC TR 24027:2021, 3.2.4, modified — Added "cognitive bias" as preferred term.]
3.9
life cycle
evolution of a system, product, service, project or other human-made entity from conception through
retirement
[SOURCE: ISO/IEC/IEEE 12207:2017, 3.1.26]
3.10
organization
company, corporation, firm, enterprise, authority or institution, person or persons or part or
combination thereof, whether incorporated or not, public or private, that has its own functions and
administration
[SOURCE: ISO 30000:2009, 3.10]
3.11
privacy
rights of an entity (normally an individual or an organization), acting on its own behalf, to determine
the degree to which the confidentiality of their information is maintained
[SOURCE: ISO/IEC 24775-2:2021, 3.1.46]
3.12
responsibility
obligation to act or take decisions to achieve required outcomes
Note 1 to entry: A decision can be taken not to act.
[SOURCE: ISO/IEC 38500:2015, 2.22, modified — Changed “and” to “or” and added Note to entry.]
3.13
treatment
kind of action, such as perception, observation, representation, prediction or decision
[SOURCE: ISO/IEC TR 24027:2021, 3.2.2, modified — Changed Note to entry to term and definition.]
3.14
safety
expectation that a system does not, under defined conditions, lead to a state in which human life, health,
property, or the environment is endangered
[SOURCE: ISO/IEC/IEEE 12207:2017, 3.1.48]
3.15
security
aspects related to defining, achieving, and maintaining confidentiality, integrity, availability,
accountability, authenticity, and reliability
Note 1 to entry: A product, system, or service is considered to be secure to the extent that its users can rely that
it functions (or will function) in the intended way. This is usually considered in the context of an assessment of
actual or perceived threats.
[SOURCE: ISO/IEC 15444-8:2007, 3.25]
3.16
sustainability
state of the global system, including environmental, social and economic aspects, in which the needs
of the present are met without compromising the ability of future generations to meet their own needs
[SOURCE: ISO/Guide 82:2019, 3.1, modified — Removed Notes to entry.]
3.17
traceability
ability to identify or recover the history, provenance, application, use and location of an item or its
characteristics
3.18
value chain
range of activities or parties that create or receive value in the form of products or services
[SOURCE: ISO 22948:2020, 3.2.11]
4 Overview
4.1 General
Ethical and societal concerns are a factor when developing and using AI systems and applications.
Taking context, scope and risks into consideration can mitigate undesirable ethical and societal
outcomes and harms. Examples of areas where there is an increasing risk for undesirable ethical and
societal outcomes and harms include the following[24]:
— financial harm;
— psychological harm;
— harm to physical health or safety;
— intangible property (for example, IP theft, damage to a company’s reputation);
— social or political systems (for example, election interference, loss of trust in authorities);
— civil liberties (for example, unjustified imprisonment or other punishment, censorship, privacy
breaches).
In the absence of such considerations, there is a risk that the technology itself can levy significant social
or other consequences, with possible unintended or avoidable costs, even if it performs flawlessly from
a technical perspective.
4.2 Fundamental sources
Various sources address ethical and societal concerns specifically or in a general way. Some of these
sources are identified.
Firstly, ISO Guide 82 provides guidance to standards developers in considering sustainability in their
activities with specific reference to the social responsibility guidance of ISO 26000. This document
therefore describes social responsibility in a form that can inform activities related to standardising
trustworthy AI.
ISO 26000 provides organizations with guidance concerning social responsibility. It is based on the
fundamental practices of:
— recognizing social responsibility within an organization;
— undertaking stakeholder identification and engagement.
Without data, the development and use of AI is not possible. Therefore, the importance of data and
data quality makes traceability and data management a pivotal consideration in the use and development
of AI. The following data-oriented elements are at the core of creating ethical and sustainable AI:
— data collection (including the means or measures of such data collection);
— data preparation;
— monitoring of traceability;
— access and sharing control (authentication);
— data protection;
— storage control (adding, change, removal);
— data quality.
These elements impact explainability, transparency, security and privacy, especially in cases of personal
identifiable information being generated, controlled or processed. Traceability and data management
are essential considerations for an organization using or developing AI systems and applications.
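To make the data-oriented elements above concrete, a hypothetical provenance record (not part of the report; all field names are assumptions) could keep traceability as an append-only log:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetProvenance:
    """Hypothetical traceability record for a training dataset."""
    name: str
    collection_method: str                                     # means or measures of collection
    authorized_roles: List[str] = field(default_factory=list)  # access and sharing control
    protection: str = "encrypted at rest"                      # data protection measure
    history: List[str] = field(default_factory=list)           # monitoring of traceability

    def log_step(self, step: str) -> None:
        # Traceability: preparation and storage changes are appended, never overwritten.
        self.history.append(step)

record = DatasetProvenance(name="loan-applications",
                           collection_method="opt-in web form")
record.log_step("preparation: removed rows with missing income")
record.log_step("storage: archived raw copy")
print(len(record.history))  # 2
```

An append-only history is one simple way to support the storage control and traceability elements listed above, since every change remains reviewable after the fact.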
ISO/IEC 38505-1 considers data value, risks and constraints in governing how data are collected, stored,
distributed, disposed of, reported on and used in organizational decision-making and procedures. The
results of data mining or machine learning activities in reporting and decision-making are regarded as
another form of data, which are therefore subject to the same data governance guidelines.
Furthermore, the description of ethical and societal concerns relative to AI systems and applications
can be based on various AI-related International Standards.
ISO/IEC 22989 provides standardized AI terminology and concepts and describes a life cycle for AI
systems.
ISO/IEC 22989 also defines a set of stakeholders involved in the development and use of an AI system.
ISO/IEC 22989 describes the different AI stakeholders in the AI system value chain that include AI
provider, AI producer, AI customer, AI partner and AI subject. ISO/IEC 22989 also describes various
sub-roles of these types of stakeholders. In this document we refer to all of these different stakeholder
types collectively as stakeholders.
ISO/IEC 22989 includes “relevant regulatory and policy making authorities” as a sub-role of AI subject.
Regulatory roles for AI are not yet widely defined, but a range of proposals has been made
including organizations appointed by individual stakeholders; industry-representative bodies; self-
appointed civic-society actors; or institutions established through national legislation or international
treaty.
All of these features of ISO/IEC 22989 assist in the description of AI-specific ethical and societal
concerns.
AI has the potential to impact a wide range of societal stakeholders, including future generations
impacted by changes to the environment (indirectly affected stakeholders). For example, images of
pedestrians on a sidewalk can be captured by autonomous vehicle technology, or innocent persons can
be subject to police surveillance equipment designed to survey suspected criminals.
ISO/IEC 23894 provides guidelines on managing AI-related risks faced by organizations during the
development and application of AI techniques and systems. It follows the structure of ISO 31000:2018
and provides guidance that arises from the development and use of AI systems. The risk management
system described in ISO/IEC 23894 assists in the description of ethical and societal concerns in this
document.
ISO/IEC TR 24027 describes the types and forms of bias in AI systems and how they can be measured
and mitigated. ISO/IEC TR 24027 also describes the concept of fairness in AI systems. Bias and fairness
are important for the description of AI-specific ethical and societal concerns.
ISO/IEC TR 24028 provides an introduction to AI system transparency and explainability, which are
important aspects of trustworthiness and which can impact ethical and societal concerns.
ISO/IEC TR 24030 describes a collection of 124 use cases of AI applications in 24 different application
domains. The use cases identify stakeholders, stakeholders’ assets and values, and threat and
vulnerabilities of the described AI system and application. Some of the use cases describe societal and
ethical concerns.
ISO/IEC 38507 provides guidance on the governance implications for organizations involved in the
development and use of AI systems. This guidance is in addition to measures defined in existing
International Standards on governance, namely:
— ISO 37000;
— ISO/IEC 38500;
— ISO/IEC 38505-1.
Governance is a key mechanism by which an organization is able to address the ethical and societal
implications of its involvement in AI systems and applications.
ISO/IEC 27001 specifies the requirements for establishing, implementing, maintaining and continually
improving an information security management system within the context of the organization. It also
includes requirements for the assessment and treatment of information security risks tailored to the
needs of the organization. The requirements set out in ISO/IEC 27001 are generic in nature and can
serve as a foundation for systematic information security management within the context of AI. This,
in turn, can have downstream impacts on ethical and societal issues in AI systems and applications.
ISO/IEC 27001 is supplemented by ISO/IEC 27002:2022, which provides guidelines for organizational
information security standards and information security management practices including the selection,
implementation and management of controls.
ISO/IEC 27701 provides guidance for establishing, implementing, maintaining and continually
improving a Privacy Information Management System in the form of an extension to ISO/IEC 27001 and
ISO/IEC 27002:2022 for data privacy management within an organization. ISO/IEC 27701 can serve as
a foundation for privacy information management within the context of AI.
4.3 Ethical frameworks
4.3.1 General
AI ethics is a field within applied ethics. This means that principles and practices are rarely the result
of applying ethical theories. Nevertheless, many of the challenges are closely related to traditional
ethical concepts and problems – for example privacy, fairness and autonomy that can be addressed
in existing ethical frameworks. See Reference [25] for more possible ethical frameworks. This list of
ethical frameworks is neither collectively exhaustive nor mutually exclusive. Hence, ethical
frameworks beyond those listed can be considered[26].
4.3.2 Virtue ethics
Virtue ethics is an ethical framework that specifies sets of virtues, which are intended to be pursued
(e.g. respect, honesty, courage, compassion, generosity), and sets of vices (e.g. dishonesty, hatred),
which are intended to be avoided. Virtue ethics has the strength of being flexible and aspirational. Its
primary disadvantage is that it does not offer any specific implementation guidelines. Saying that an AI
system is designed to be “honest” is only meaningful if provided with a mechanism by which that virtue
is operationalized. However, so long as its technical limitations are kept in mind, virtue ethics can serve
as a useful tool for determining whether or not an application of AI is a reflection of human virtues.
4.3.3 Utilitarianism
Utilitarianism is an ethical framework that maximizes good and minimizes harm. A utilitarian choice
is one that produces the greatest good and does the least amount of harm to all stakeholders involved.
Once the ethical aspects of a problem are explained logically, utilitarian approaches have the strength
of being universally understandable and intuitive to implement. Utilitarianism’s primary disadvantage
is that utilitarian frameworks permit harming some for the good of the whole. Examples include the
Trolley Problem[27], where utilitarianism supports murder, or the example of transplant patients at a
hospital, where utilitarianism supports the dissection of a healthy donor to transplant their organs
into multiple patients[28]. In addition, many moral considerations are difficult to quantify (e.g. dignity)
or are subjective - what is good for one person might not be good for another. Moral considerations
vary enough that they are difficult to weigh against each other, for example environmental pollution
versus societal truthfulness. Further, utilitarianism as a framework is a form of consequentialism - the
doctrine that "the ends support the means". Consequentialism supports creating solutions that offer
[29]
net benefit, but does not require that those solutions function ethically, e.g. in an unbiased way .
4.3.4 Deontology
Deontology is an ethical framework which assesses morality by a set of predefined duties or rules.
The specific mechanism for making this determination is a set of rules or codified norms which can be
analysed in the moment without needing to calculate what the consequences of those actions can be.
An example of such a rule is “equality of opportunity” within fairness. Equality of opportunity dictates
that the people who qualify for an opportunity are equally likely to do so regardless of their social
group membership. The main disadvantage of deontology is that such universal rules can be difficult
to derive in practice and can be brittle when deployed in cross-contextual settings or highly variable
environments.
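The "equality of opportunity" rule described above has a common quantitative reading (not spelled out in the report): among the people who actually qualify, the selection rate should be equal across social groups. A minimal sketch with made-up data:

```python
def true_positive_rate(qualified, selected):
    """Share of qualified individuals who were actually selected."""
    hits = sum(1 for q, s in zip(qualified, selected) if q and s)
    total = sum(qualified)
    return hits / total if total else 0.0

# Hypothetical outcomes for two social groups (1 = qualified / selected).
a_qualified, a_selected = [1, 1, 1, 0], [1, 1, 0, 0]
b_qualified, b_selected = [1, 1, 0, 0], [1, 0, 0, 0]

gap = abs(true_positive_rate(a_qualified, a_selected)
          - true_positive_rate(b_qualified, b_selected))
print(round(gap, 3))  # 0.167: a nonzero gap signals unequal opportunity
```

This also illustrates the rule's brittleness noted above: the measured gap depends entirely on how "qualified" and the group boundaries are defined, which can vary across contexts.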
5 Human rights practices
5.1 General
International Human Rights, as outlined in the Universal Declaration of Human Rights, see Reference
[30], the UN Sustainable Development Goals, see Reference [31] and UN Guiding Principles on Business
and Human Rights, see Reference [32], are fundamental moral principles to which a person is inherently
entitled, simply by virtue of being human. They can serve as a guiding framework for directing corporate
responsibility around AI systems and applications with the benefit of international acceptance as a
more mature framework for assessments of policy and technology. International Human Rights can also
provide established process for performing due diligence and impact assessments. The implications of
human rights on the governance of AI in organizations are discussed in ISO/IEC 38507.
Frameworks, such as care ethics or social justice, support many of the themes presented in 6.2, including
privacy, fairness and non-discrimination, promotion of human values, safety and security and respect
for international norms of behaviour. In addition, many sources of international law and legal principles
can individually complement several of the themes. They include, but are not limited to the following:
— the Universal Declaration on Human Rights, see Reference [30];
— the UN Guiding Principles on Business and Human Rights, see Reference [32];
— the International Convention on the Elimination of All Forms of Racial Discrimination, see
Reference [33];
— the Declaration on the Rights of Indigenous People, see Reference [34];
— the Convention on the Elimination of Discrimination against Women, see Reference [35];
— the Convention of the Rights of Persons with Disabilities, see Reference [36].
These sources can be understood in terms of their objective of enhancing standards and practices
with regard to business and human rights, and of achieving tangible results for affected individuals and
communities. Relevant issues include due diligence by an organization to identify and mitigate human
rights impacts. Where human rights are impacted by an organization’s AI activities, clear, accessible,
predictable and equitable mechanisms can address and resolve grievances.
Some examples of potential impacts of AI on civil and political human rights include:
— right to life, liberty and security of person (e.g. the use of autonomous weapons or AI-motivated
intrusive data collection practices);
— right to opinion, expression and access to information (e.g. the use of AI-enabled filtering or
synthesizing of digital content);
— freedom from discrimination and right to equality before the law (e.g. impacted by the use of AI-
aided judicial risk assessment algorithms, predictive policing tools for forward-looking crime
prevention or financial technology);
— freedom from arbitrary interference with privacy, family, home or correspondence (e.g. unauthorized,
AI-based means and measures to collect sensitive biometric and physiological data);
— right to education and desirable work (e.g. the use of AI in recruiting people for employment or
providing access to education and training).

6 Themes and principles
6.1 General
In addition to the Human Rights practices referred to in Clause 5, AI principles can help guide
organizations in developing and using AI in responsible ways. The purpose of these principles is to move
organizations beyond nonmaleficence and towards the beneficence of technology: for example, designing
AI that is intended to promote social good and that serves that specific function, rather than simply
aiming to avoid harm.
These principles cover more than AI providers and producers and their intended uses of AI systems.
When making AI systems available to AI customers and other stakeholders, it is important also to
examine their potential misuse, abuse and disuse. As emphasized in ISO/IEC TR 24028:2020, 9.9.1, this
includes:
— over-reliance on AI systems leading to negative outcomes (misuse);
— under-reliance on AI systems leading to negative outcomes (disuse);
— negative outcomes resulting from using or repurposing an AI system in an area for which it was not
designed and tested (abuse).
AI systems are particularly susceptible to disuse and misuse because of the way in which they mimic
human capabilities. When a system seems human-like yet lacks the context that humans would take
into account, users can misuse or disuse it. Such misuse or disuse can arise from trusting the system
more or less than warranted, for example in autonomous driving, medical diagnosis or loan approvals.
In response to these concerns, in anticipation of government regulation, or in an attempt at industry
self-regulation, several sets of principles for AI have emerged from the international community. These
have been documented in various publications, see Annex A.
This clause follows the structure laid out by the Berkman Klein Center report, see Clause A.1, by
grouping AI principles into themes. The themes emerged from the ethical concerns that the principles
attempt to address. Principles within these thematic groups can vary widely and can even contradict
each other. AI-specific themes complement those featured in ISO 26000, which sets out principles for an
organization to consider when aiming to behave in a socially responsible manner.
6.2 Description of key themes and associated principles
6.2.1 Accountability [84]
Accountability occurs when an organization accepts responsibility for the impact of its actions on
stakeholders, society, the economy and the environment. Accountability means that an organization
accepts appropriate scrutiny and accepts a duty to respond to this scrutiny. Hence, accountability for AI
decisions means ensuring that an organization is capable of accepting responsibility for decisions made
on its behalf, and understands that it is not absolved of responsibility for erroneous decisions based on,
for example, AI machine learning output.
Accountability specifies that the organization and its representatives are responsible for the negative
consequences resulting from the design, development, and use or misuse of AI systems and applications
by anyone deploying AI technologies. Accountability also directs attention to the unintended
consequences that can arise from the evolutionary nature of AI systems and applications, and from the
difficulty of predicting how AI systems and applications can be used and repurposed once deployed.
Without clear requirements for accountability, constraints and boundaries go undefined, and potential
harms can go unnoticed.
Accountability for the organization’s decisions is ultimately the responsibility of the group of people
who direct and control an organization. However, accountability is often delegated to the appropriate
responsible parties. Employees, therefore, can be trained to understand the implications of their work
in developing, deploying or using AI tools and to be accountable for their area of responsibility. They
can also understand what actions to take to ensure appropriate decisions are being made, whether in
an organizational or engineering context. For example, it is the organization’s responsibility to establish
non-discriminatory and transparent policies. It is the engineer’s responsibility to develop AI systems
and applications that follow those policies by ensuring the development and use of non-discriminatory,
transparent, and explainable algorithms.
Accountability provides necessary constraints to help limit potential negative outcomes and establish
realistic and actionable risk governance for the organization. Combined, these constraints help to
prioritize responsibilities. Some aspects that are covered by this theme are:
— working with stakeholders to assess the potential impact of a system early on in the design;
— validating that stakeholder needs have actually been met;
— verifying that an AI system is working as intended;
— ensuring the traceability of data and algorithms throughout the whole AI value chain;
— enabling a third-party audit and acting on its findings;
— providing ways to challenge AI decisions;
— remedying erroneous or harmful AI decisions when challenge or appeal is not possible.
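As one illustration of the traceability and third-party-audit aspects listed above, the sketch below logs each AI decision together with its model version and a content hash that an auditor can later verify. This is a hypothetical sketch, not a mechanism defined by the Technical Report; the names (`DecisionRecord`, `record_decision`, the model identifier) are illustrative.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One traceability entry tying an AI decision to its inputs and model version."""
    model_version: str
    input_summary: dict
    output: str
    timestamp: str

def record_decision(model_version, input_summary, output):
    """Build an audit record plus a content hash usable as a tamper-evident identifier."""
    record = DecisionRecord(
        model_version=model_version,
        input_summary=input_summary,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Canonical JSON serialization so the same record always hashes identically.
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return record, hashlib.sha256(payload).hexdigest()

record, digest = record_decision(
    "credit-model-1.4.2",        # hypothetical model identifier
    {"applicant_id": "A-1001"},  # a summary of inputs, not raw personal data
    "declined",
)
print(digest)
```

Appending such records to a write-once log is one way to support the traceability, auditability and challenge mechanisms described above: the hash makes after-the-fact tampering with a logged decision detectable, and the stored model version allows an erroneous decision to be traced back to the system that produced it.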
6.2.2 Fairness and non-discrimination [85][86]
The theme of fairness and non-discrimination aims to ensure that AI works well for people
across different social groups, notably for those who have been deprived of social, political or economic
power in their local, national and international contexts. These social groups differ across contexts
and include but are not limited to those that require protection from discrimination based on sex,
race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other
opinion, membership of a national minority, property, birth, disability, age or sexual orientation. Some
aspects that are covered by this theme are:
— mitigating unwanted bias against members of different groups;
— ensuring that training data and user data are collected and applied in a way re
...
