ETSI GR ENI 055 V4.1.1 (2025-10)
Experiential Networked Intelligence (ENI); Use Cases and Requirements for AI Agents Based Core Network
GROUP REPORT
Experiential Networked Intelligence (ENI);
Use Cases and Requirements for
AI Agents Based Core Network
Disclaimer
The present document has been produced and approved by the Experiential Networked Intelligence (ENI) ETSI Industry
Specification Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.
Reference
DGR/ENI-0055v411_AI_Agents
Keywords
6G, AI-Native, GenAI, use case
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871
Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format on ETSI deliver repository.
Users should be aware that the present document may be revised or have its status changed,
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
No representation or warranty is made that this deliverable is technically accurate or sufficient or conforms to any law
and/or governmental rule and/or regulation and further, no representation or warranty is made of merchantability or fitness
for any particular purpose or against infringement of intellectual property rights.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.
Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.
© ETSI 2025.
All rights reserved.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definition of terms, symbols and abbreviations
3.1 Terms
3.2 Symbols
3.3 Abbreviations
4 Background
4.1 Motivation of AI-Core
4.1.1 Driving Force
4.1.2 Key Concept
4.1.3 Advantages and Challenges
4.2 Business Value of AI-Core
5 Use Cases and Requirements
5.1 Consumer Use Cases
5.1.1 Use Case: AI Agents to Enable Smart Life
5.1.1.1 Description
5.1.1.2 Potential Requirements
5.1.2 Use Case on Network-Assisted Collaborative Robots
5.1.2.1 Description
5.1.2.2 Potential Requirements
5.1.3 Use Case on AI Phone
5.1.3.1 Description
5.1.3.2 Potential Requirements
5.2 Business Use Cases
5.2.1 Use Case on AI Agent-based Customized Network for Smart City Traffic Monitoring
5.2.1.1 Description
5.2.1.2 Potential Requirements
5.2.2 Use Case on AI Agents-Based Customized Network for Smart Construction Sites
5.2.2.1 Description
5.2.2.2 Potential Requirements
5.2.3 Use Case on AI Agent Ensuring Game Acceleration Experience
5.2.3.1 Description
5.2.3.2 Potential Requirements
5.2.4 Use Case on AI Agent-Assisted Collaborative Energy Distribution in Power Enterprises
5.2.4.1 Description
5.2.4.2 Potential Requirements
5.3 Telecom Operator Use Cases
5.3.1 Use Case on AI Agent-Based Autonomous Network Management
5.3.1.1 Description
5.3.1.2 Potential Requirements
5.3.2 Use Case on AI Agent-Based Disaster Handling Network Management
5.3.2.1 Description
5.3.2.2 Potential Requirements
5.3.3 Use Case on AI Agent-Based Time-Sensitive Network Management
5.3.3.1 Description
5.3.3.2 Potential Requirements
5.3.4 Use Case on AI Agent-Driven Core Network Signalling Optimization
5.3.4.1 Description
5.3.4.2 Potential Requirements
5.3.5 Use Case on AI Agent-Based Core Networks to Enhance User Experience
5.3.5.1 Description
5.3.5.2 Potential Requirements
6 Existing Use Case Summary
7 Conclusion and Recommendations
Annex A: Bibliography
History
Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI IPR online database.
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™, LTE™ and 5G™ logo are trademarks of ETSI registered for the benefit of its Members and of the
3GPP Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of
the oneM2M Partners. GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This Group Report (GR) has been produced by ETSI Industry Specification Group (ISG) Experiential Networked
Intelligence (ENI).
Modal verbs terminology
In the present document "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be
interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
1 Scope
The present document studies potential use cases and new service requirements relevant to AI-Agents based core
network (AI-Core). It covers the motivation, key concepts, and business value of AI-Core; identifies potential use cases,
including Business to Consumer (B2C), Business to Business (B2B), and telecom operators' internal scenarios, and
outlines the corresponding consolidated service requirements for future mobile communication networks.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long-term validity.
The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's
understanding, but are not required for conformance to the present document.
[i.1] ETSI GR ENI 051 (V4.1.1): "Experiential Networked Intelligence (ENI); Study on AI Agents
based Next-generation Network Slicing".
[i.2] The global market for humanoid robots could reach $38 billion by 2035.
[i.3] Intelligent Virtual Assistant Market Size, Share, and Trends 2025 to 2034.
[i.4] IEEE 802.11™: "IEEE Standard for Information Technology--Telecommunications and
Information Exchange between Systems Local and Metropolitan Area Networks--Specific
Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications".
[i.5] Recommendation ITU-R M.2160-0 (11/2023): "Framework and overall objectives of the future
development of IMT for 2030 and beyond".
[i.6] NGMN Alliance (V1.0): "6G Use Cases and Analysis".
[i.7] 3GPP TR 22.870 (V0.2.0): "Study on 6G Use Cases and Service Requirements; Stage 1
(Release 20)".
[i.8] Canalys special report (V1.0): "Now and next for AI-capable Smartphones".
[i.9] TMForum: "IG1274M AI Agent (v2.0.0)", 2025.
[i.10] Renze M, Guven E.: "Self-Reflection in LLM Agents: Effects on Problem-Solving Performance",
arXiv preprint arXiv:2405.06682, 2024.
[i.11] Shinn N, Cassano F, Gopinath A, et al.: "Reflexion: Language agents with verbal reinforcement
learning", Advances in Neural Information Processing Systems, 2023, 36: 8634-8652.
[i.12] Zhang W, Tang K, Wu H, et al.: "Agent-pro: Learning to evolve via policy-level reflection and
optimization", arXiv preprint arXiv:2402.17574, 2024.
[i.13] Guo Z, Xu B, Wang X, et al.: "MIRROR: Multi-agent Intra-and Inter-Reflection for Optimized
Reasoning in Tool Learning", arXiv preprint arXiv:2505.20670, 2025.
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
AI-Core: next-generation core network which consists of multiple AI agents
NOTE: See ETSI GR ENI 051 [i.1].
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
AGV Automated Guided Vehicle
AI Artificial Intelligence
AMF Access and Mobility Management Function
API Application Program Interface
B2B Business to Business
B2C Business to Consumer
BSS Business Support Systems
CAGR Compound Annual Growth Rate
CN Core Network
CRM Customer Relationship Management
CSP Customer Service Platform
HTN Hierarchical Task Network
IME Intent Management Entity
IoT Internet of Things
IT Information Technology
IVR Interactive Voice Response
JSON JavaScript Object Notation
KG Knowledge Graph
LiDAR Light Detection And Ranging
LLM Large Language Model
MNO Mobile Network Operator
MW Megawatt
NaaP Network as a Platform
NLP Natural Language Processing
NMS Network Management System
NPN Non-Public Network
NWDAF NetWork Data Analytics Function
OAM Operations and Management
OPS Open Programmability System
O-RAN Open-Radio Access Network
OSS Operations Support System
OTT Over The Top
P2P Point to Point
PCF Policy Control Function
PDU Protocol Data Unit
PR Potential Requirement
QCI Quality of service Class Identifier
QoE Quality of Experience
QoS Quality of Service
RAN Radio Access Network
RAT Radio Access Technology
RF Radio Frequency
RSSI Received Signal Strength Indicator
SLA Service Level Agreement
SMF Session Management Function
SNR Signal to Noise Ratio
TSN Time Sensitive Network
UE User Equipment
UPF User Plane Function
URLLC Ultra-Reliable Low Latency Communications
USD United States Dollar
VIP Very Important Person
4 Background
4.1 Motivation of AI-Core
4.1.1 Driving Force
As shown in [i.5] and [i.6], the 6G network is required to support communication-based usage scenarios, such as
immersive communication and massive communication, as well as beyond-communication usage scenarios, such as AI
and communication, and integrated sensing and communication. The service requirements of the various usage
scenarios differ, and each scenario requires diverse network capabilities.
EXAMPLE: AI and communication scenarios expect a set of AI-related capabilities, including data acquisition
and management, preparation and processing, distributed model training and inference.
Integrated sensing and communication require the provision of sensing-related capabilities, including range, velocity,
angle estimation, object and presence detection, etc. Thus, the mobile network management system, in conjunction with
its orchestrator(s), needs to flexibly orchestrate and provide various combinations of network capabilities to meet the
diverse requirements of users.
The service requirements of customers are dynamic, and the environment may change. The network is required to
adjust promptly and provide feedback to adapt to the ever-changing needs of customers, so as to ensure a good
experience for them. To lower the threshold for customers to consume the network services they are interested in, it is
beneficial to support customers in initiating service requests through intents. Therefore, the mobile network needs to
provide the proper services and resources flexibly to satisfy the various requirements of users based on their intents,
and to adapt in a timely manner to changes in user needs (expressed through intents), business goals, and
environmental conditions, so as to meet their service needs as expressed through, for example, their Service Level
Agreement (SLA).
Current network design is a mixture of rule-based and specialized software (e.g. multi-objective optimization systems).
In rule-based systems, the actions, inputs and outputs of network entities are pre-defined and structured, so they can
only solve standardized tasks. When the network needs to be extended to new scenarios, the corresponding standards
and network functions need to be updated. The typical standardization cycle is long, making the system inflexible and
poorly adaptable. By contrast, as described in 3GPP TR 22.870 [i.7], an AI Agent has a set of unique capabilities,
e.g. interacting with its environment, acquiring contextual information, reasoning, self-learning, decision-making, and
executing tasks (autonomously or in collaboration with other AI Agents) to achieve a specific goal. It would therefore
be beneficial to introduce AI Agents into the Core Network (CN). The CN is the logical place for multiple AI agents
that manage and control the network: it has a global, end-to-end view of all network services, subscriber data, policies,
and resource usage. This makes it the ideal location both for non-real-time tasks, such as translating a subscriber's
business intent, orchestrating a complex network slice that spans multiple domains, and performing large-scale
analytics, and for real-time tasks, such as dynamic resource scheduling and session modification to ensure that
customers' SLAs are met.
In addition, with the development of AI technology, a large number of intelligent devices, such as AI Phones and
AI-embodied robots, are emerging. The analysis report of Canalys shows that 16 % of global smartphone shipments
were AI Phones in 2024, and this proportion is expected to soar to 54 % by 2028 [i.8]. The devices are gradually
evolving from types of equipment that passively respond to commands to intelligent agents with autonomous
decision-making and execution capabilities. Due to the limited resources and processing capabilities on devices, the
network can provide intelligent assistance services, e.g. assisting safe navigation and optimal path planning, to the
intelligent devices through AI Agents in the CN.
In conclusion, it is necessary to integrate AI Agents into the CN to provide flexible customized services based on the
intents of subscribers and ensure their Quality of Experience (QoE) autonomously, as well as provide intelligent
assistance services for AI-capable devices.
4.1.2 Key Concept
As demonstrated in [i.1], AI-Core is the next-generation core network that consists of multiple AI Agents. It presents a
horizontal architecture in which each agent in a multi-agent system plays a specific role and collaborates with others to
accomplish various tasks. The key idea of AI-Core is to utilize multiple AI Agents to handle high-level intents, plan
complex task execution, manage and control the network resources, and flexibly process the data for new services
based on the dynamic requirements of various applications. Through the powerful capabilities of AI Agents, various
network functions and tools provided by third parties can be flexibly assembled to generate a customized network
on demand to meet the personalized requirements of subscribers. During the operation of these customized networks,
AI Agents autonomously perceive changes in the environment and dynamically adjust the constituent functions, tools,
and resources to guarantee the QoS and QoE. The networks are automatically recycled when the service completes. In
other words, the design, generation, execution, update, and recycling of the customized networks are entirely
performed by AI Agents in the CN, without human intervention, making the CN an autonomous system. Moreover,
after receiving feedback from the environment, AI Agents can examine past decisions through self-reflection
mechanisms, such as summarizing behavioural guidelines or policies for the model or fine-tuning the model through
reinforcement learning [i.10], [i.11], [i.12], [i.13], so as to make better decisions when facing similar problems in the
future. That is, the performance of the AI agent-based core (i.e. AI-Core) improves continuously.
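The customized-network lifecycle described above (design, generation, execution, update, recycling) can be sketched as a small state machine. This is an illustrative sketch only: the phase names, the trigger events (`environment_changed`, `service_complete`) and the class interface are assumptions for the example, not standardized signals or interfaces.

```python
from enum import Enum

class Phase(Enum):
    DESIGN = "design"
    GENERATION = "generation"
    EXECUTION = "execution"
    UPDATE = "update"
    RECYCLED = "recycled"

class CustomizedNetwork:
    """Toy lifecycle of an on-demand customized network driven by AI Agents."""

    def __init__(self, intent):
        self.intent = intent
        self.phase = Phase.DESIGN
        self.log = []  # decision history, inspectable by a self-reflection step

    def step(self, env_event=None):
        """Advance the lifecycle; agents perceive env_event and react to it."""
        if self.phase is Phase.DESIGN:
            self.phase = Phase.GENERATION          # assemble functions and tools
        elif self.phase is Phase.GENERATION:
            self.phase = Phase.EXECUTION           # network serves the intent
        elif self.phase is Phase.EXECUTION:
            if env_event == "environment_changed":
                self.phase = Phase.UPDATE          # adjust functions/resources
            elif env_event == "service_complete":
                self.phase = Phase.RECYCLED        # resources are reclaimed
        elif self.phase is Phase.UPDATE:
            self.phase = Phase.EXECUTION           # resume with adjusted setup
        self.log.append((env_event, self.phase))
        return self.phase
```

The `log` attribute stands in for the feedback record that the self-reflection mechanisms cited above would examine after the fact.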
4.1.3 Advantages and Challenges
AI-Core can solve diverse tasks, both standardized and non-standardized (i.e. novel tasks that do not have an
associated playbook). This is because an AI Agent is fundamentally knowledge-based (i.e. it generates outputs based
on knowledge pre-trained or retrieved from a knowledge base). To address novel
tasks, an agentic system leverages its underlying LLM's pre-trained knowledge to generate a multi-step plan. The agent
then executes this plan by calling upon a predefined set of software 'tools' (e.g. APIs, diagnostic scripts), which allows
for a more rapid response than traditional, lengthy standardization cycles. However, this flexibility introduces a critical
trade-off between speed and safety. While accelerating new service deployment, it replaces the deterministic
predictability of standardized systems with the stochastic nature of LLM-based agents, creating operational risks for
critical infrastructure. Furthermore, an agent's effectiveness is domain-specific; its strength in semantic reasoning and
planning does not readily translate to tasks requiring high-precision mathematical optimization, for which traditional
numerical algorithms remain superior.
AI-Core aims to simplify service consumption by allowing subscribers to initiate requests through high-level intents
expressed in natural language. This approach abstracts away the complexity of network functions and APIs, making
services more accessible to non-expert users. The primary technical challenge, however, lies in reliably translating a
user's qualitative, and often ambiguous, goal into a set of precise, machine-executable network parameters. For
example, an intent like, "Ensure my video conferences have priority during peak hours", requires the system to identify
the correct application traffic, interpret time-based conditions, and apply specific QoS policies without violating other
service level agreements. Bridging this 'semantic gap' between user intent and network configuration is a non-trivial
problem, as misinterpretation can lead to incorrect network behaviour or create security vulnerabilities.
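The semantic gap can be made concrete with a deliberately naive sketch that keyword-matches the example intent onto policy fields. A real AI-Core would use an LLM-based translator validated against existing SLAs; the field names, keyword rules, and the chosen time window here are all assumptions for illustration.

```python
def translate_intent(intent):
    """Toy intent-to-policy translation. A production system would use an LLM
    plus SLA validation; this version only keyword-matches to show the shape
    of the problem: qualitative goal in, machine-executable parameters out."""
    policy = {"app_filter": None, "time_window": None, "qos_priority": None}
    text = intent.lower()
    if "video conference" in text:
        policy["app_filter"] = "conferencing-traffic"  # identify application traffic
    if "peak hours" in text:
        policy["time_window"] = ("17:00", "21:00")     # interpret time-based condition
    if "priority" in text:
        policy["qos_priority"] = "high"                # apply a QoS policy
    # Guardrail: refuse ambiguous intents rather than risk incorrect behaviour.
    if None in policy.values():
        raise ValueError("intent too ambiguous to translate safely")
    return policy
```

The guardrail at the end reflects the point made above: misinterpretation is worse than rejection, so an untranslatable intent should fail loudly instead of producing a guessed configuration.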
4.2 Business Value of AI-Core
In the current ecosystem, the mobile network serves as a bit-pipe to transmit data for Over-The-Top (OTT) service
providers, while OTT service providers develop various kinds of applications for end-users. Since AI-Core can utilize
advanced agentic AI technologies to customize networks on-the-fly that integrate various network functions and tools
provided by third parties as well as associated resources, it is expected to blur the boundary between connectivity and
application, providing network operators with similar opportunities as the agentic AI technology offers to the OTT
service providers. This can bring new revenue and profit models for network operators.
Several business models can be exploited for AI-Core. In the first, AI-Core generates tailored services, and the
operator's monetization approach (or business model) can be similar to an OTT provider's, including advertisement
revenue. In the second, the operator integrates multiple OTT services into the network to generate a new service,
possibly with a value-added service included; the operator pays the service providers for API calls, while charging the
end-users for consuming the integrated service. In the third, the network operator is the integrator of various
third-party services and charges the service providers for expanding their customer base. In the fourth, AI-Core is a
platform that generates highly customized services based on multi-agent collaboration; in such a setting, the mobile
network operator exposes APIs to the service providers to facilitate service customization, and charges the service
providers for their use.
5 Use Cases and Requirements
5.1 Consumer Use Cases
5.1.1 Use Case: AI Agents to Enable Smart Life
5.1.1.1 Description
The growing use of AI agents enriches human daily life and supports intelligent solutions in industry. For embodied
AI agents, different types with different capabilities and levels of intelligence are on the market: an intelligent
humanoid robot can provide many different types of support across a variety of circumstances; a robot-dog is more
lightweight but provides fewer services than a humanoid robot; a drone, given its battery constraint, is of low
intelligence, limiting it to tasks like delivering food to human beings. For virtual AI entities, the chatbot has entered
the area of customer service. According to the latest report from Goldman Sachs [i.2], the global market for humanoid
robots could reach $38 billion by 2035, and humanoid robot shipments are expected to hit 1 million units by 2035.
Precedence statistics also show that the global intelligent virtual assistant market size was United States Dollar (USD)
16,17 billion in 2023 and USD 20,42 billion in 2024, and is expected to reach around USD 166,97 billion by 2033,
expanding at a Compound Annual Growth Rate (CAGR) of 26,3 % from 2024 to 2033 [i.3].
In future production scenarios, such as industrial grounds, smart cities and hospitals, AI Agents will be used to
complement and even replace human labour.
EXAMPLE: These AI Agents could be given the order of replacing all instances of part A with part B in the
products being manufactured, checking the production line for possible problems or inefficiencies,
or reconfiguring the operation of other machines.
As with humans, autonomous AI Agents are not only the recipients of direct commands, but they also have the initiative
to inform about the status of other components in the factory or even spontaneously raise alarms. The mobile network
can provide connectivity to these AI Agents as well as provide global perception and assisted AI services to AI Agents,
rather than limiting them to local perception and intelligence. The reason is that, on the one hand, the accuracy of local
sensors and models is sometimes not high enough to ensure safe and efficient operation. On the other hand, the use of
local small-scale models by individual AI agents often results in sub-optimal decision-making. In summary, the mobile
network can empower AI Agents with network services and resources including sensing, AI, communication, and
computing.
Assume that a user owns several types of AI agents, including a smart car, drone, robot-servant and robot-dog, which
are produced by different manufacturers and of different capabilities. Further assume that all of these agents have
registered with the AI-Core and have access to the mobile network, and each of them is allocated an identity that
uniquely identifies it. Currently, agents retrieve translated intents from other sources, since intent translation is a
complex multi-stage pipeline for all but the simplest intents. In the future, it is possible to define a set of agents that
perform the multi-stage intent translation directly, which will simplify the life of the user.
Considering an example where a user wants to go camping on the weekend, the AI Agents are required to work together
to make a good camping plan with the assistance of the network. The service flows of this example are as follows:
1) The robot-servant sends the intent of "make a camping plan" to the 6G network on behalf of the user. How the
robot-servant gets the user's authorization is out of scope, e.g. it can take verbal commands from the user.
2) The AI agents in the mobile network parse the intent and break it down into sub-tasks, which are then assigned
to different AI agents based on their capabilities, e.g. the network AI Agent asks the local life assistant (a digital
AI agent provided by a 3rd party) to recommend campsites, instructs the smart car to pick up family members by
designing the optimal route, and asks the robot-servant to book food in advance according to the user's taste.
3) The network AI agent builds connections for the involved AI Agents since they need to exchange information.
For example, the local life assistant needs to send the campsite address to the smart car for designing the
routes. Robot-servant selects the restaurant based on the user's taste and sends the restaurant information to the
robot-dog while instructing it to pick up the order.
4) The AI agents execute the allocated sub-tasks. When they cannot perform a sub-task well, they request network
services. For example, due to the limitation of local perception, the smart car requests the sensing
service of the network when it determines the optimal route from the user's home to the campsite. The
robot-servant requests the AI service of the mobile network to help perform inference of motion control.
5) The network monitors the AI Agents performing the sub-tasks; when an exception occurs, it finds an
alternative AI agent to complete the sub-task.
Figure 5.1.1.1-1 depicts the described scenario.
Figure 5.1.1.1-1: Smart life use case key entities and high-level service flows
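Step 2 of the service flows above, decomposing the intent and assigning sub-tasks to agents by capability, can be sketched as follows. The agent names and capability strings are illustrative assumptions taken from the camping example, not a defined registry schema.

```python
# Advertised capabilities of the registered agents (assumed for the sketch).
AGENT_CAPABILITIES = {
    "local_life_assistant": {"recommend_campsite"},
    "smart_car":            {"plan_route", "pick_up_passengers"},
    "robot_servant":        {"book_food"},
    "robot_dog":            {"pick_up_order"},
}

def assign_subtasks(subtasks):
    """Assign each sub-task to the first registered agent advertising the
    required capability. If no agent can serve a sub-task, raise; in the
    service flows, step 5 would then search for an alternative agent."""
    plan = {}
    for task in subtasks:
        for agent, caps in AGENT_CAPABILITIES.items():
            if task in caps:
                plan[task] = agent
                break
        else:
            raise LookupError(f"no agent can perform {task!r}")
    return plan
```

A first-match policy is the simplest possible choice; a real orchestrator would also weigh agent load, energy, and QoS constraints when several agents advertise the same capability.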
5.1.1.2 Potential Requirements
To support the above example, five requirements for the mobile network are summarized in the following:
[PR 5.1.1-1] Subject to operator policy and regulatory requirements, the mobile network is used to contact AI-Core,
which then provides a mechanism to uniquely identify an AI Agent that acts on behalf of the user.
[PR 5.1.1-2] The mobile network uses a combination of authentication, opaque execution, and policy enforcement to
ensure that all AI Agents preserve the privacy of the owner (e.g. the user, the 3rd party) when they exchange
information with other AI Agents.
NOTE 1: Opaque execution is a cornerstone of modern multi-agent communication protocols. It means that when
one agent asks another to perform a task, the second agent is able to complete the task without revealing
its internal methods, data, or reasoning processes. The collaboration happens through well-defined,
standardized interfaces, but the "inner workings" of each agent remain private.
NOTE 2: When sensitive information needs to be exchanged, robust agentic systems have guardrails for sensitive
data handling and isolation. This means there are explicit policies that govern how sensitive data is used
and shared between agents. Privacy preservation is the set of rules followed during the conversation,
ensuring that sensitive personal or business information is not revealed unnecessarily.
[PR 5.1.1-3] The architecture includes a standardized "integration fabric" that mediates all multi-agent communication.
This fabric supports multiple interaction patterns, such as request-response for direct commands and publish-subscribe
for asynchronous event streaming, enabling decoupled and scalable collaboration.
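A minimal sketch of such an integration fabric follows, showing both interaction patterns named in PR 5.1.1-3. The class and method names are assumptions for illustration; the GR does not define a concrete interface.

```python
from collections import defaultdict

class IntegrationFabric:
    """Toy mediating fabric: request-response for direct commands,
    publish-subscribe for asynchronous event streams."""

    def __init__(self):
        self._handlers = {}                    # service name -> handler callable
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def expose(self, service, handler):
        """An agent registers a request-response endpoint."""
        self._handlers[service] = handler

    def request(self, service, payload):
        """Direct command: synchronous call mediated by the fabric."""
        return self._handlers[service](payload)

    def subscribe(self, topic, callback):
        """An agent declares interest in an event stream."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        """Asynchronous fan-out: publisher and subscribers stay decoupled."""
        for callback in self._subscribers[topic]:
            callback(event)
```

Because agents address services and topics rather than each other, the fabric preserves the opaque execution described in NOTE 1: callers see results, never the serving agent's internals.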
[PR 5.1.1-4] The system provides a service registry or Knowledge Graph (KG) that allows agents to dynamically
discover and consume network-provided services. These services augment agent capabilities by offering specialized
data (e.g. high-precision location) or AI models (e.g. congestion prediction) as a service.
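The discovery side of PR 5.1.1-4 can be reduced to a lookup over advertised capabilities. The registry entries and capability strings below are assumptions for the sketch, echoing the examples given in the requirement.

```python
# Illustrative service registry; a real deployment would back this with a
# Knowledge Graph and richer metadata (ownership, QoS, access policy).
REGISTRY = [
    {"name": "precise-positioning", "capability": "high_precision_location"},
    {"name": "congestion-forecast", "capability": "congestion_prediction"},
]

def discover(capability):
    """Return the names of network-provided services advertising the
    requested capability; an empty list means nothing matches."""
    return [s["name"] for s in REGISTRY if s["capability"] == capability]
```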
[PR 5.1.1-5] A high-level orchestrator agent is responsible for end-to-end task management. It uses formal planning
techniques, such as Hierarchical Task Networks (HTNs), to decompose high-level intents into sub-tasks, which are then
delegated to specialized agents based on their advertised capabilities.
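A toy Hierarchical Task Network illustrates the decomposition named in PR 5.1.1-5: methods recursively expand compound tasks until only primitive sub-tasks, delegable to specialized agents, remain. The method table is an assumption built from the camping example; real HTN planners also handle preconditions and ordering constraints, which are omitted here.

```python
# Method table: compound task -> ordered sub-tasks (assumed for the sketch).
METHODS = {
    "make_camping_plan": ["choose_campsite", "arrange_transport", "arrange_food"],
    "arrange_transport": ["plan_route", "pick_up_passengers"],
    "arrange_food":      ["book_food", "pick_up_order"],
}

def htn_decompose(task):
    """Depth-first decomposition of a high-level intent into primitive
    sub-tasks. A task with no method is treated as primitive."""
    if task not in METHODS:
        return [task]          # primitive: ready for delegation to an agent
    plan = []
    for subtask in METHODS[task]:
        plan.extend(htn_decompose(subtask))
    return plan
```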
5.1.2 Use Case on Network-Assisted Collaborative Robots
5.1.2.1 Description
Multiple robots can collaborate to accomplish complex tasks that are beyond the capability of any single robot.
Examples include carrying heavy objects, monitoring wide areas, or conducting search and rescue operations in disaster
zones. To enhance adaptability and generalization across diverse robotic tasks and various robot designs, large models
play a crucial role. Equipped with such models, robots can perceive their dynamic environments, make informed
decisions through advanced reasoning, and execute actions to interact effectively with the physical world. Agentic AI
technology offers great potential to implement such intelligent systems, thereby revolutionizing the next generation of
robotic systems. However, due to the inherent limitations in onboard resources and computational power, individual
robot agents often rely on AI Agents equipped with more advanced models to assist them in performing complex tasks
efficiently.
For instance, multiple robots can cooperatively perceive their surrounding environments by utilizing onboard sensors
that capture diverse, multi-modal data such as speech, images, videos, haptic feedback, and RF signals. These sensing
processes across different robots and modalities are sometimes performed asynchronously to accommodate varying
operational conditions. The collected sensing data are transmitted to a central network, where large AI models analyse
and infer the environmental states.
To balance communication latency against computational efficiency, smaller-scale AI models can be deployed directly
on the robots for initial data pre-processing. This distributed approach balances the workload between the robot agents
and the network agents. Additionally, the network is expected to determine which robots and sensing modalities are
essential for the cooperative task, minimizing redundant operations and maximizing overall system efficiency. The
service flows of the use case described above are as follows:
1) Multiple robots register with a management and orchestration entity in the AI-Core to participate in
environmental sensing tasks, such as gesture recognition.
2) The mobile network collects capability information from each robot, including sensor capabilities
(i.e. modality, resolution, accuracy), communication capabilities, computational capabilities, and energy
consumption profiles.
NOTE 1: This step is a continuation of the registration process. The capability information described is the payload
of the registration request from Step 1. This structured data is used to create or update the robot's entry
within a suitable repository, such as a Knowledge Graph (KG). The KG serves as the network's "digital twin",
storing not just the identity of each robot but a rich profile of its specific attributes, which can be queried.
This allows the orchestrator to perform sophisticated queries later, such as, "Find all available robots
within Area A with camera resolution > 4K and battery > 50 %".
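The query quoted in NOTE 1 can be sketched as a filter over digital-twin entries. The attribute names (`camera_res_k`, `battery_pct`, etc.) are illustrative assumptions; a real KG would expose a proper query interface (e.g. a graph query language) rather than list comprehensions.

```python
# Toy "digital twin" entries for registered robots (attribute names assumed).
robots = [
    {"id": "r1", "area": "A", "camera_res_k": 8, "battery_pct": 72, "available": True},
    {"id": "r2", "area": "A", "camera_res_k": 2, "battery_pct": 90, "available": True},
    {"id": "r3", "area": "B", "camera_res_k": 8, "battery_pct": 60, "available": True},
]

def find_robots(twin, area, min_res_k, min_battery_pct):
    """Answer queries such as: available robots in Area A with
    camera resolution > 4K and battery > 50 %."""
    return [r["id"] for r in twin
            if r["available"]
            and r["area"] == area
            and r["camera_res_k"] > min_res_k
            and r["battery_pct"] > min_battery_pct]

matches = find_robots(robots, "A", 4, 50)  # -> ['r1']
```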
3) One or more robots send service requests specifying their intent (i.e. to collect information about their
surroundings, together with any requirements and/or preferences on information accuracy, data delivery
latency, and energy consumption) to an Intent Management Entity within the AI-Core.
NOTE 2: Here, a robot switches roles from a potential service provider to a service consumer. It submits its
high-level intent not to the general "network" but to a specific Intent Management Entity (IME) within
the AI-Core. This interaction is handled by an Intent Translation Pipeline. This pipeline uses NLP and
LLM-based techniques to parse the intent and its constraints (accuracy, latency) into a formal,
machine-readable specification that the orchestration system can act upon.
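The Intent Translation Pipeline's output can be sketched as below. A real pipeline would use NLP/LLM techniques as described in NOTE 2; simple regular expressions stand in here, and the spec field names are assumptions.

```python
import re

def translate_intent(text):
    """Parse a free-text sensing intent into a machine-readable specification
    (toy stand-in for the LLM-based Intent Translation Pipeline)."""
    spec = {"goal": "collect_environment_info", "constraints": {}}
    latency = re.search(r"latency\s+(?:under|below)\s+(\d+)\s*ms", text)
    if latency:
        spec["constraints"]["max_latency_ms"] = int(latency.group(1))
    accuracy = re.search(r"accuracy\s+(?:above|over)\s+(\d+)\s*%", text)
    if accuracy:
        spec["constraints"]["min_accuracy_pct"] = int(accuracy.group(1))
    return spec

spec = translate_intent(
    "Collect information about my surroundings with latency under 20 ms "
    "and accuracy above 95 %")
```

The formal spec, not the free text, is what the Orchestrator in Step 4 plans against.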
4) Based on these requests, an AI Orchestrator (or Supervisor) Agent within the AI-Core takes the validated
intent from Step 3 and uses a formal planning methodology, such as Hierarchical Task Networks, to
decompose the abstract goal ("collect information") into a concrete workflow.
5) The Orchestrator queries the Knowledge Graph (or whichever registry is in use) to find the optimal set of robots
(from the pool of robots registered in Step 1) whose capabilities match the requirements of the decomposed
plan. This is where it would "select appropriate data modalities" by choosing robots with the right sensors.
6) The Orchestrator schedules the tasks and delegates specific instructions to the chosen robots.
7) Simultaneously, the Orchestrator delegates a sub-task to the relevant Network Management Domain (e.g. the
RAN or Core domain manager) to "configure relevant network equipment". This typically involves creating a
dedicated, QoS-guaranteed network slice to ensure the sensing data can be transmitted with the required
latency and reliability.
8) Robots, now acting as worker agents, execute the instructions delegated by the Orchestrator in Step 6. The
robots transmit their data to a dedicated Data Analytics Service as specified by the Orchestrator's instructions.
9) The raw data sent by the sensing robots is received by a dedicated Data Fusion and Analytics Agent within the
AI-Core. This specialized agent is responsible for the post-processing tasks: aggregating data from multiple
robotic sources, filtering out noise, and performing higher-level inference (e.g. fusing multiple camera angles
to perform gesture recognition). Once this value-added processing is complete, this agent delivers the final,
refined information back to the original robot that made the service request in Step 3, thus closing the loop.
This entire workflow is monitored by the Orchestrator to ensure the original intent is fulfilled.
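The delegate-execute-fuse portion of the flow (Steps 6, 8 and 9) can be sketched as follows. The function names and the shape of the fused result are illustrative assumptions.

```python
# Worker task (Step 8): onboard sensing plus lightweight pre-processing.
def sense(robot_id, modality):
    return {"robot": robot_id, "modality": modality, "payload": modality + "-frame"}

# Data Fusion and Analytics Agent (Step 9): aggregate multi-robot data and
# produce the refined result returned to the original requester.
def fuse(observations):
    return {"n_sources": len(observations),
            "modalities": sorted({obs["modality"] for obs in observations})}

# Orchestrator (Steps 6 and 9): delegate to the selected robots, then route
# the raw data through the fusion agent, closing the loop.
def orchestrate(selected_robots, modality):
    raw = [sense(robot, modality) for robot in selected_robots]
    return fuse(raw)

result = orchestrate(["r1", "r3"], "camera")
```

In the actual system each call would cross the network slice configured in Step 7; the sketch keeps everything in-process to show only the control flow.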
NOTE 3: Steps 8 an
...