Fifth Generation Fixed Network (F5G); Architecture of Optical Cloud Networks

DGS/F5G-0018

General Information

Status
Not Published
Current Stage
12 - Citation in the OJ (auto-insert)
Due Date
29-Oct-2024
Completion Date
07-Oct-2024
Ref Project
Standard
ETSI GS F5G 018 V1.1.1 (2024-10) - Fifth Generation Fixed Network (F5G); Architecture of Optical Cloud Networks
English language
37 pages

Standards Content (Sample)


GROUP SPECIFICATION
Fifth Generation Fixed Network (F5G);
Architecture of Optical Cloud Networks
Disclaimer
The present document has been produced and approved by the Fifth Generation Fixed Network (F5G) ETSI Industry
Specification Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.

2 ETSI GS F5G 018 V1.1.1 (2024-10)

Reference
DGS/F5G-0018
Keywords
cloud, F5G, optical, service
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871

Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format on ETSI deliver.
Users should be aware that the present document may be revised or have its status changed,
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
No representation or warranty is made that this deliverable is technically accurate or sufficient or conforms to any law
and/or governmental rule and/or regulation and further, no representation or warranty is made of merchantability or fitness
for any particular purpose or against infringement of intellectual property rights.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.

Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© ETSI 2024.
All rights reserved.
Contents
Intellectual Property Rights . 4
Foreword . 4
Modal verbs terminology . 4
Introduction . 4
1 Scope . 5
2 References . 5
2.1 Normative references . 5
2.2 Informative references . 6
3 Definition of terms, symbols and abbreviations . 6
3.1 Terms . 6
3.2 Symbols . 6
3.3 Abbreviations . 7
4 Motivation . 8
4.1 Overview of cloud access scenarios . 8
4.2 Requirements from the Use Cases . 8
5 OCN architecture . 9
5.1 Design principle . 9
5.2 Overview of OCN architecture . 10
5.3 Integration of OCN into the F5G-A network architecture . 11
5.4 Key capabilities of OCN . 12
6 OCN Data Plane technical requirements . 13
7 OCN connection control and service control technical requirements . 14
7.1 Overview . 14
7.1.1 Introduction to the OCN control interfaces and protocols . 14
7.1.2 OCN control interfaces and protocols . 15
7.2 OSP service control . 16
7.2.1 Overview of service mapping control requirements . 16
7.2.2 Service attributes identification . 17
7.2.3 Client node addresses report . 17
7.2.4 Service mapping rules generating and maintaining . 18
7.2.5 Service mapping . 19
7.2.6 OCN service control protocol messages . 19
7.3 OSP connection control . 20
7.3.1 Connection provisioning . 20
7.3.1.1 Overview of connection provisioning . 20
7.3.1.2 (fg)OTN connection creation . 21
7.3.1.3 (fg)OTN connection bandwidth adjustment . 24
7.3.1.4 (fg)OTN connection deletion . 25
7.3.2 Connection recovery . 28
7.3.2.1 Overview of connection recovery . 28
7.3.2.2 1+1 Protection function requirements . 29
7.3.2.3 Restoration function requirements . 30
8 OCN management and control technical requirements . 33
8.1 Overview . 33
8.2 Technical requirements for management and control of service flow mapping . 34
8.2.1 Configuration and maintenance of network slices and Virtual Private Networks (VPNs) . 34
8.2.2 Creation and maintenance of service flow mapping rules . 34
8.3 Technical requirements for management and control of the (fg)OTN . 34
8.3.1 Maintenance of the OTN network topology information . 34
8.3.2 (fg)OTN path computation . 35
8.3.3 Control and maintenance of (fg)OTN connections . 36
History . 37

Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI Web server (https://ipr.etsi.org/).
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™ and LTE™ are trademarks of ETSI registered for the benefit of its Members and of the 3GPP
Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of the
oneM2M Partners. GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Fifth Generation Fixed
Network (F5G).
Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "need not", "will", "will not", "can" and
"cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of
provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Introduction
In F5G and beyond, there is a trend that more and more services will be deployed in the Cloud Data Centres (DCs),
which requires high quality, high performance, high reliability, high security network transmission between the users
and the Cloud DCs. ETSI GR F5G 008 [i.1] has already described several use cases that are related to such cloud
services.
1 Scope
The present document specifies the architecture and the technical requirements of the Optical Cloud Network (OCN),
including its underlay Optical Transport Network (OTN) infrastructure and the control interfaces used for the control of
the optical services and connections. The present document also specifies the key functions of the Optical Service
Protocols (OSP) which are running on the control interfaces of the OCN.
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found at
https://docbox.etsi.org/Reference.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are necessary for the application of the present document.
[1] IETF RFC 3209: "RSVP-TE: Extensions to RSVP for LSP Tunnels".
[2] IETF RFC 4920: "Crankback Signaling Extensions for MPLS and GMPLS RSVP-TE".
[3] IETF RFC 3945: "Generalized Multi-Protocol Label Switching (GMPLS) Architecture".
[4] IETF RFC 8776: "Common YANG Data Types for Traffic Engineering".
[5] Recommendation ITU-T G.808 (2016): "Terms and definitions for network protection and
restoration".
[6] ETSI GS F5G 024 (V1.1.1): "Fifth Generation Fixed Network (F5G); F5G Advanced Network
Architecture Release 3".
[7] Recommendation ITU-T G.709/Y.1331 (2020) Amd.3 (03/2024): "Interfaces for the optical
transport network".
[8] Recommendation ITU-T G.709.20 (04/2024): "Overview of fine grain OTN".
[9] Recommendation ITU-T G.7044/Y.1347 (10/2011): "Hitless adjustment of ODUflex(GFP)".
[10] Recommendation ITU-T G.7701 (04/2022): "Common control aspects".
[11] Recommendation ITU-T G.7703 (2021) Amendment 1 (11/2022): "Architecture for the
automatically switched optical network".
[12] ETSI GS F5G 013 (V1.1.1): "Fifth Generation Fixed Network (F5G); F5G Technology Landscape
Release 2".
[13] Recommendation ITU-T G.873.1 (2017) Amendment 1 (02/2022): "Optical transport network:
Linear protection".
[14] IETF RFC 8345: "A YANG Data Model for Network Topologies".
[15] IETF RFC 8795: "YANG Data Model for Traffic Engineering (TE) Topologies".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI GR F5G 008 (V1.1.1): "Fifth Generation Fixed Network (F5G); F5G Use Cases Release #2".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
connection ID: combination of the source and destination node IP addresses of a connection, and an index that remains
constant over the life of the connection
NOTE: A connection ID is unique in a network, and is identical to the LSP_TUNNEL Session object in
RSVP-TE (IETF RFC 3209 [1]).
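The connection ID defined above is essentially a three-field tuple. As a purely illustrative sketch (the field names and IP literals are assumptions, not part of the specification), it could be modelled as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConnectionId:
    """Connection ID per clause 3.1: source and destination node
    IP addresses plus an index that remains constant over the
    life of the connection (cf. the RSVP-TE Session object)."""
    src_ip: str
    dst_ip: str
    index: int

# Two records with the same triple identify the same connection.
a = ConnectionId("192.0.2.1", "198.51.100.9", 7)
b = ConnectionId("192.0.2.1", "198.51.100.9", 7)
assert a == b
```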
crankback: mechanism allowing new path setup attempts to be made to bypass the blocked resources
NOTE: The blocked resource information is retrieved from the failure points in the previous path setup attempts,
as defined in IETF RFC 4920 [2].
make-before-break: mechanism whereby an original path is still active while a new path is being set up in the
connection restoration procedure
NOTE: The make-before-break mechanism avoids double reservation of resources by the original and new paths,
as defined in IETF RFC 3209 [1] and IETF RFC 3945 [3].
node ID: node identification used in a Traffic Engineering (TE) topology
NOTE: A node ID is unique in a topology, as defined in IETF RFC 8776 [4].
protection group: combination of source and destination node functions, 1+1 or 1:n normal traffic signals, an extra
traffic signal in the 1:n case if any, 1+1 or 1:n working paths, and 1+1 or 1:n protection path
NOTE: The 1+1 or 1:n protection path provides extra reliability for the transport of normal traffic signals, as
defined in Recommendation ITU-T G.808 [5].
switching time: time between the initialization of the protection switching and the moment the traffic is selected from
the protection path
NOTE: As defined in Recommendation ITU-T G.808 [5].
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
(fg)O-CPE fgOTN Customer Premise Equipment
ASON Automatically Switched Optical Network
BGP Border Gateway Protocol
CBR Constant Bit Rate
CDC Central Data Centre
CE Customer Edge
CPE Customer Premise Equipment
CVLAN Client Virtual Local Area Network
DC Data Centre
E2E End-to-End
F5G Fifth Generation Fixed Network
F5G-A Fifth Generation Fixed Network - Advanced
fgOTN fine grain Optical Transport Network
GFP Generic Framing Procedure
GW GateWay
IP Internet Protocol
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
LDC Local Data Centre
MAC Media Access Control
MCA Management, Control, and Analytics
MD-ROADM Multi-Degree Reconfigurable Optical Add/Drop Multiplexer
MP2MP Multi-Point-to-Multi-Point
NBI NorthBound Interface
OCN Optical Cloud Network
ODU Optical Data Unit
ODUk Optical Data Unit-k
OE (fg)OTN Edge
OLT Optical Line Terminal
OSP Optical Service Protocols
OTN Optical Transport Network
P2MP Point-to-Multi-Point
PON Passive Optical Network
QoE Quality of Experience
QoS Quality of Service
ROADM Reconfigurable Optical Add/Drop Multiplexer
SAT Service Address Table
SLA Service Level Agreement
SMCC Service Mapping Control Component
SME Small and Medium Enterprise
SMP Service Mapping Point
SMT Service Mapping Table
SRLG Shared Risk Link Group
SVLAN Service Virtual Local Area Network
TDM Time Division Multiplexing
TE Traffic Engineering
UNI User Network Interface
UNI-C User Network Interface - Client
UNI-N User Network Interface - Network
VBR Variable Bit Rate
VLAN Virtual Local Area Network
VPN Virtual Private Network
VR Virtual Reality
WDM Wavelength Division Multiplexing
XC Cross-Connect
4 Motivation
4.1 Overview of cloud access scenarios
With the rapid development of network technologies, more and more new services are emerging in the F5G era. There
are two important trends for these services:
1) More and more services are deployed in the Cloud DCs, to take full advantage of shared cloud infrastructure.
NOTE: Cloud DCs can be placed at various locations. In ETSI GS F5G 024 [6], there are Local Data
Centres (LDCs) co-located with the Aggregation Network Edge Nodes and the Central Data Centres
(CDCs) located in the core network.
2) The requirements on those cloud services shall cover a wide range of network characteristics, including those
that satisfy the highest quality service experience. The number of these highest quality cloud services is increasing significantly.
For convenience, such services that are deployed in the Cloud DCs are called "cloud services" in the present document.
ETSI GR F5G 008 [i.1] introduces 32 F5G use cases which are enabled by the F5G network, some of which are related
to cloud service provisioning. For example:
• Use case #1: Cloud Virtual Reality. Cloud computing and cloud rendering technologies for VR services are
introduced. Cloud VR content data are stored, read, rendered, coded, compressed and transmitted to user
terminals through the network.
• Use case #2: High Quality Private Line. Government institutions, financial organizations and medical
organizations require high quality private lines for cloud access. Examples are medical cloud, cloud desktop
and financial cloud.
• Use case #3: High quality low cost private line for small and medium enterprises. Small and Medium
Enterprises (SMEs) may need cloud services such as cloud desktop and cloud storage.
• Use case #16: Enterprise private line connectivity to multiple Clouds. Enterprises are gradually migrating their
applications to different clouds. Meanwhile, an enterprise may have multiple branches requiring access to the
cloud applications. This requires the Multi-Point-to-Multi-Point (MP2MP) cloud access.
• Use case #17: Premium home broadband connectivity to multiple Clouds. There is an increasing demand for
premium home broadband Cloud-based services such as Cloud VR education, Cloud VR gaming, and Cloud
gaming. Since different cloud applications are deployed in different Cloud DCs, this use case also requires the
Multi-Point-to-Multi-Point (MP2MP) cloud access from different OLTs.
In these use cases, there is an increasing number of mission critical services, which require stable and highest quality
network transmission. OTN is a recommended technology for these services, because it naturally has the characteristics
of guaranteed bandwidth, low deterministic latency and low packet jitter, high availability and traffic isolation.
4.2 Requirements from the Use Cases
In general, the cloud service related F5G use cases can be categorized into two types:
• PON access network case: Users (including residential broadband users and SMEs) access the network via a
PON network;
• OTN access network case: Users (including large enterprises) access the network via an OTN equipment
(including fgOTN Customer Premise Equipment ((fg)O-CPE)).
Figure 1 shows the general F5G network topology which uses the OTN AggN for the high-quality cloud services. Both
PON access and OTN access use cases are covered in figure 1.

Figure 1: OTN-based general network topology for cloud services
NOTE 1: An enterprise may have multiple sites which are connected to the network using different technologies
like PON or (fg)O-CPE. See the example enterprises A and B in Figure 1.
NOTE 2: Both the residential users and SME/large enterprise customers are called "users" for simplicity in the
present document.
The key requirements of the cloud service related use cases shall include:
• Multi-user access: For the PON access case, multiple residential users or SMEs are accessing the OTN AggN
via different OLTs. For the OTN access case, an enterprise has multiple branches, which access the OTN
AggN via different (fg)O-CPEs and CPEs (Ethernet based).
• Isolation between users: Isolation between different users is required, for reasons of manageability,
QoE assurance and security. The isolation shall include address isolation and traffic isolation.
• Multi-cloud access: Users of cloud services may run different cloud applications which are deployed in
different Cloud DCs. Furthermore, a user may connect to two or more Cloud DCs at the same time for backup
and disaster recovery consideration.
• Automatic OTN connection provisioning: Considering the MP2MP connectivity from multiple user sites to
multiple clouds, the OTN AggN needs to support on-demand resource scheduling and connection provisioning
between any pair of edge OTN nodes, driven by the requests of the MP2MP connectivity services.
5 OCN architecture
5.1 Design principle
To provide high quality enterprise private lines as well as residential and SME broadband connectivity to multiple
clouds, the OCN architecture shall support the following network features:
1) Automation: Network automation technologies via control protocols shall be introduced in OCN for automatic
Cloud DC selection and service connection provisioning, to reduce manual processes to a minimum. This will
reduce the service enabling time, improve the users' experience, and reduce configuration errors caused by
human mistakes.
2) Bandwidth Flexibility: Different cloud application services may require very different bandwidths ranging
from tens of Mbps to several Gbps. Furthermore, a user may have different bandwidth requirements at
different times of the day. The OCN architecture shall be designed to support flexible OTN containers with
hitless bandwidth adjustment, to match the above service bandwidth requirements.
3) Traffic Isolation: The OCN architecture shall be designed to support user service traffic isolation. Each user's
data to and from the clouds needs to be isolated from other users' traffic, neither affecting nor being affected
by it.
4) Connection Scalability: The OCN architecture shall be designed to provide scalable connection control and
management, to support the increasing number of connections.
5) Reliability and Availability: The OCN architecture shall be designed to support at least 99,999 % service
availability in the presence of one or multiple network failures within the OTN, with deterministic connection
recovery performance.
6) Simplicity: The OCN architecture shall be designed to support minimal network layers, interfaces and
protocols. Fewer network layers mean higher resource utilization and easier network operation and
maintenance. The interface protocols shall be designed so as to minimize their complexity, and shall support
backwards compatibility.
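The 99,999 % availability target in design principle 5 can be made concrete with a small arithmetic check. The following sketch (illustrative only, not part of the specification) computes the maximum cumulative outage such a target permits per year:

```python
# "Five nines" (99,999 %) availability allows roughly
# 5,26 minutes of cumulative downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525 600 minutes
availability = 0.99999
max_downtime_min = MINUTES_PER_YEAR * (1 - availability)
print(f"{max_downtime_min:.2f} minutes/year")  # about 5.26
```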
5.2 Overview of OCN architecture
In current Virtual Private Networks (VPNs), the OTN connection is used as a transparent pipe to transport the packet
flows (including IP and Ethernet flows), without recognizing the users' service traffic within the packet flows.
Therefore, current OTN connections are not service aware, making it difficult to satisfy the VPN Service Level
Agreement (SLA) requirements.
To guarantee network characteristics including assured flexible bandwidth, low deterministic latency, low packet jitter,
high availability and traffic isolation for cloud services, additional processes are necessary. The OCN shall directly
support the transport of the highest quality cloud services and control and manage the service traffic.

Figure 2: Evolving to the OCN architecture
NOTE 1: The (fg)OTN Edges (OEs) in Figure 2 are the edge nodes of the (fg)OTN domain, including the
(fg)O-CPE, the (fg)OTN Edge XC in the Access Node, and the (fg)OTN Switch in the AggN Edge.
Figure 2 shows a comparison between the current connection-oriented optical network (2a in Figure 2) and the OCN
architecture (2b in Figure 2).
Compared to the current connection-oriented optical network (as shown in 2a) of Figure 2), the most distinct change is
that the OTN connections are service aware, carrying the services directly. This reduces and
simplifies the network layers, and ensures service quality. The two major improvements of OCN are:
• The current OTN transports IP/Ethernet packet flows transparently, without recognizing the users' service traffic
within the packet flows, and therefore cannot guarantee the service latency, bandwidth, and hard isolation.
With the support of the Optical Service Protocols (OSP), the OCN supports recognizing the service request
information (including the service bandwidth, service source and destination, and SLA requirements), and
supports service differentiation and transmission with assurance.
• In current OTN, most of the network connections are provisioned manually and are not dynamic. In OCN,
with the support of OSP, a large number of network connections shall be dynamically provisioned, triggered
by the users' service requests. Hitless bandwidth adjustment of OTN connections shall also be supported, based
on the changing service bandwidth requirements.
The OCN architecture contains three network layers, see 2b) in Figure 2:
1) Infrastructure layer: The Wavelength Division Multiplexing (WDM) technology is used as the OCN optical
infrastructure. This is unchanged from the current connection-oriented optical network.
2) Underlay connection layer: The OTN connections (ODUk/fgODU connections) are used to carry the client-side
services. In addition, to enable automatic provisioning of the service-oriented OTN connections, the Optical
Service Protocols (OSP) are deployed in the underlay connection layer, with two main control functions:
- Service flow mapping control: The edge OTN nodes (Access Node and AggN Edge node) interact with
the client-side overlay protocol and map the service flows to the OTN connections.
- OTN connection control: Enhances the performance of the OTN signalling mechanisms to support
dynamic control of a larger number of connections.
3) Client-side service layer: High-quality cloud services are the main services to be carried directly by
the OTN connections. Note that IP/Ethernet flows (which may be identified by existing technologies such as
VLAN) can still be carried by the OTN network in the OCN architecture.
NOTE 2: It is important to note that the OCN architecture does not change the data plane interfaces and protocols
on the customer equipment.
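The service flow mapping control function described above can be pictured as a lookup from service attributes to an OTN connection at the edge OTN node. The sketch below is hypothetical: the table layout, field names and connection identifiers are assumptions for illustration, not defined by the present document.

```python
from typing import Dict, Tuple, Optional

# Hypothetical service mapping table kept at an edge OTN node:
# (service VLAN, destination Cloud DC) -> OTN connection identifier.
ServiceMappingTable = Dict[Tuple[int, str], str]

def map_flow(table: ServiceMappingTable,
             svlan: int, cloud_dc: str) -> Optional[str]:
    """Return the OTN connection carrying this service flow, or
    None, in which case the control plane could trigger dynamic
    connection provisioning (clause 5.2, OTN connection control)."""
    return table.get((svlan, cloud_dc))

smt: ServiceMappingTable = {(100, "LDC-1"): "odu-conn-17"}
assert map_flow(smt, 100, "LDC-1") == "odu-conn-17"
assert map_flow(smt, 200, "CDC-2") is None  # no mapping yet
```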
5.3 Integration of OCN into the F5G-A network architecture
ETSI GS F5G 024 [6] specifies the overall F5G-A architecture and its network topology. The OCN architecture is a
sub-set of the overall end-to-end F5G-A architecture.
As part of the F5G-A architecture, the OCN architecture mainly focuses on the use of OTN (including fgOTN, referred
to as (fg)OTN in the present document) in the CPE, the Access Node of the Access Network and the Aggregation
Network (including the AggN Edge node), for high-quality services between the users (including residential users, SME
users, and enterprise private line customers) and the Cloud DCs.
Figure 3 shows the relationship between the overall F5G-A network topology and the scope of OCN architecture.

Figure 3: OCN in the F5G-A network topology
In the OCN architecture, the high-quality cloud services from the OLT (including residential and SME services) are
carried by the OTN or Ethernet through the interface B in the F5G-A architecture, while the enterprise private line or
private network services are carried by the OTN or Ethernet through the interface H in the F5G-A architecture.
The OCN architecture also covers the WDM layer (including the Colourless ROADM, MD-ROADM and the λ fabric)
in the Aggregation Network as its infrastructure layer. The detailed technical requirements of the WDM network in the
OCN architecture are for further study.
NOTE: Traditional service flows can still be carried by the IP/Ethernet network in the F5G-A architecture, which
is out of scope of the present document.
In the AggN Edge node, a router may be used to interconnect the OTN and the cloud DCs, which is for further study.
Regarding the F5G-A architecture control plane, clauses 5.3.3 and 5.3.4 of ETSI GS F5G 024 [6] define the near real-time
control topology as well as its control interfaces. Specifically, clause 5.3.4.7 of ETSI GS F5G 024 [6] specifies the C1
and C2/C2' (fg)OTN control interfaces.
The OCN architecture includes two types of control interfaces: the connection control interface (C1) and the service
control interface (C2/C2'). Clause 7.1 of the present document gives a more detailed description of these interfaces.
5.4 Key capabilities of OCN
To ensure the transmission quality of the high-quality cloud services, which require assured bandwidth, low latency and
high availability, the OCN architecture shall have the following capabilities:
1) Service agility: The OCN architecture shall provide agile service provisioning capability for Cloud DC
services. It shall support identifying different cloud-side and user-side (including CPE and PON) service
flows, and support mapping these service flows into different OTN connections based on the service
destinations. In this way, the OCN architecture provides automatic provisioning of Point-to-Multi-Point
(P2MP) and Multi-Point-to-Multi-Point (MP2MP) service access to multiple Cloud DCs.
2) Service adaptation: The OCN architecture shall be aware of the service SLA requirements (including the
bandwidth requirement), and provide fast and hitless bandwidth adjustment to the OTN connections, based on
the variable service bandwidth requirements. This enables the OTN to adapt to service bandwidth changes on
demand, and improves the OTN resource utilization.
3) Service slicing: The OCN architecture shall support service-oriented OTN slicing, to serve different service
requests and to ensure that services are isolated from each other's traffic. In addition, enterprises may use their
own private (IP) addresses for their private network services. To avoid private address conflicts between
different enterprises, private address isolation shall be supported for enterprise service transmission.
4) Service assurance: Cloud services such as cloud VR and cloud gaming are particularly sensitive to latency and
bandwidth. The service transport network needs to evolve from best effort to deterministic transmission. The
OCN architecture shall provide secure service connection and guaranteed service bandwidth to ensure the
service quality and the user experience. The service traffic traverses a variety of heterogeneous networks
including Access Network, Aggregation Network and Core Network where Cloud DCs reside. Therefore, the
OCN architecture shall support collaborating and integrating various network technologies to enable effective
resource scheduling and reduce the E2E network processing delay.
5) Service capacity: The OCN architecture shall support thousands of OTN containers per 100 Gbps ODU link, to
support an increasing number of services, with container bandwidths ranging from 10 Mbps to
100 Gbps, to adapt to various cloud services and to improve the OTN resource utilization.
6) Service availability: The reliability of the service carrier network is extremely important for the high-quality
cloud services. The OCN architecture shall support self-monitoring, self-healing, and self-optimization, and be
capable of detecting network failures (and, in some cases, predicting potential network failures). In addition, the
OCN architecture shall provide various protection and restoration mechanisms in the presence of single or
multiple failures within the OTN, and between the OTN and the Cloud DCs.
6 OCN Data Plane technical requirements
The (fg)OTN is the data plane in the OCN architecture, used to transport the high-quality cloud services. The key
technical requirements for the OTN Data Plane include:
1) The (fg)OTN shall support the provisioning of connections with guaranteed bandwidth matching the
transmission quality requirements of the cloud services, providing very low and deterministic latency and
minimal packet jitter for packet service flows.
The (fg)OTN is a Time-Division Multiplexing (TDM) technology, where different services are carried in dedicated time slots (called tributary slots in Recommendation ITU-T G.709 [7]). Unlike packet forwarding technologies, OTN does not use store-and-forward or statistical multiplexing, and oversubscription, queuing and buffering techniques are not necessary when performing OTN switching and transport. Therefore, OTN supports dedicated connections with guaranteed bandwidth, low deterministic latency and minimal packet jitter for the packet services carried by the connection.
As per Recommendation ITU-T G.709 [7], (fg)OTN supports both CBR and VBR client signals. OCN shall
support both CBR and VBR-based services.
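As a rough illustration of why TDM transport is deterministic, the following Python sketch (not part of the specification; the frame size, slot counts and client names are invented for illustration) assigns each client fixed tributary slots in every frame and rejects oversubscription, so slot positions, and hence per-frame delay, never vary with load:

```python
FRAME_SLOTS = 80  # hypothetical number of tributary slots per frame


def assign_slots(allocations):
    """Assign dedicated, fixed tributary slots to each client signal.

    allocations: dict mapping client name -> number of slots requested.
    Returns a dict mapping client name -> tuple of slot indices, or raises
    ValueError if frame capacity is exceeded (TDM has no oversubscription).
    """
    frame = {}
    next_slot = 0
    for client, count in allocations.items():
        if next_slot + count > FRAME_SLOTS:
            raise ValueError("frame capacity exceeded: no oversubscription in TDM")
        # Slots are fixed: the same indices recur in every frame, which is
        # what makes the transport latency deterministic.
        frame[client] = tuple(range(next_slot, next_slot + count))
        next_slot += count
    return frame


slots = assign_slots({"cloud-VR": 8, "gaming": 4})
```

Because no queuing or buffering is involved, a client's data always departs in the same slot positions of each frame, in contrast to statistically multiplexed packet forwarding where delay depends on the instantaneous load.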
2) The (fg)OTN shall support both ODUk and fgODUflex matching the bandwidth requirements of the different
cloud services.
Different cloud services may require very different bandwidths from tens of Mbps to several Gbps. To
improve the resource utilization of the OTN carrier network, the bandwidth per OTN connection shall match
the bandwidth requirements of the cloud services.
Recommendations ITU-T G.709 [7] and G.709.20 [8] define the ODUk containers (k = 0, 1, 2, 3, 4, flex), the fgODUflex containers, which support bandwidths of N × 10 Mbps (N = 1 to 119) for sub-1G services, and the ODUflex containers, which support bandwidths of N × 1,25 Gbps (N = 1 to the total number of 1,25 Gbps tributary slots of the ODUk into which the ODUflex is multiplexed) for higher-bandwidth services.
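The container sizing rule above can be sketched as a small Python helper (illustrative only, not defined by the Recommendations; the function name and the simple ceiling-based sizing are assumptions, and the fgODUflex limit of N = 119 is taken from the text):

```python
import math


def pick_container(bandwidth_mbps):
    """Pick an (fg)OTN container for a requested service bandwidth.

    Sub-1G services map to fgODUflex in N x 10 Mbps steps (N = 1..119);
    larger services map to ODUflex in N x 1.25 Gbps tributary-slot steps.
    Returns (container type, N, granted bandwidth in Mbps).
    """
    if bandwidth_mbps <= 119 * 10:
        n = math.ceil(bandwidth_mbps / 10)        # 10 Mbps granularity
        return ("fgODUflex", n, n * 10)
    n = math.ceil(bandwidth_mbps / 1250)          # 1.25 Gbps granularity
    return ("ODUflex", n, n * 1250)
```

For example, a 100 Mbps service would be granted a 10-slot fgODUflex, while a 2,5 Gbps service would be granted a 2-slot ODUflex; the fine granularity of fgODUflex is what improves utilization for sub-1G cloud services.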
3) The (fg)OTN shall support dynamic and hitless bandwidth adjustment of a connection to match the bandwidth
changes of the cloud services.
The high-quality cloud services might require different bandwidths in different time periods. To improve the
resource utilization, the OTN connections shall support hitless bandwidth adjustment (increasing or
decreasing) according to the change of the service bandwidth. When adjusting the bandwidth of the OTN
connection, the existing service traffic shall not be affected.
ETSI
14 ETSI GS F5G 018 V1.1.1 (2024-10)
The current OTN supports hitless adjustment of ODUflex(GFP) as defined in Recommendation ITU-T G.7044/Y.1347 [9], and of fgOTN as defined in Recommendation ITU-T G.709 [7].
7 OCN connection control and service control technical
requirements
7.1 Overview
7.1.1 Introduction to the OCN control interfaces and protocols
The (fg)OTN control plane, which is separated from the (fg)OTN data plane, is used for connection-oriented (fg)OTN control (including OTN link and topology discovery, and (fg)OTN connection control). To further enable automatic service-oriented control, the (fg)OTN control plane needs to be enhanced.
In the F5G-A Architecture (ETSI GS F5G 024 [6]), the (fg)OTN control interfaces C1 and C2/C2' reside in the
(fg)OTN Access and (fg)OTN Aggregation Network, as shown in Figure 4:
• The C1 interface is used to control the (fg)OTN network connections.
• The C2 and C2' interfaces are used to control the (fg)OTN-based service connections.

Figure 4: F5G-A interfaces (Interface names in red are related to OCN)
Based on the common control aspects of the transport network defined in Recommendation ITU-T G.7701 [10], a domain is used to express different administrative and/or managerial responsibilities, including trust relationships, addressing schemes, infrastructure capabilities, survivability techniques, and distributions of control functionality. The
control plane supports the establishment of services through the automatic provisioning of end-to-end transport
connections across one or more domains. The User Network Interface (UNI) is an inherent part of the Automatically
Switched Optical Network (ASON) architecture in Recommendation ITU-T G.7703 [11]. The reference point between
a user and a provider domain is the UNI, which represents a user-provider service demarcation point. From the OTN perspective, the provider domain is the OTN, and the user domain comprises the OLT, the CE and the DC GW.
From the OCN perspective, UNIs are control reference points where the data plane interfaces B, U', J, K and L are
specified in the (fg)OTN architecture as shown in Figure 5.
NOTE: As per F5G-A architecture [6], the (fg)OTN Switch may connect to the Cloud DC directly, or through a
Router. In the former case, the UNI is the control reference point of interface L. In the latter case, if the
Router and the (fg)OTN Switch within the AggN Edge are in the same physical equipment and the
interface J is an internal interface, the UNI is the control reference point of interface K; if they are
separated, the UNI is the control reference point of interface J. Figure 5 only shows the former case for
simplicity.
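The deployment-dependent selection of the UNI reference point described in the note can be expressed as a small decision helper (illustrative only; the function name and parameters are invented, and the logic simply encodes the three cases of the note):

```python
def uni_reference_point(direct_to_dc, router_integrated=False):
    """Select the UNI control reference point for a Cloud DC attachment.

    direct_to_dc: the (fg)OTN Switch connects to the Cloud DC directly.
    router_integrated: the Router and the (fg)OTN Switch within the AggN
    Edge are in the same physical equipment (interface J is internal).
    """
    if direct_to_dc:
        return "L"                      # direct DC attachment
    return "K" if router_integrated else "J"   # via a Router
```

For example, a directly attached Cloud DC yields reference point L, while a separately deployed Router yields reference point J.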
In the use case "Premium home broadband connectivity to multiple Clouds" (see clause 7.3 of ETSI GR F5G 008 [i.1]),
the OLT is in the user-side client network and supports the UNI-Client (UNI-C) functions while the OTN Edge XC is in
provider-side OTN network and supports the UNI-Network (UNI-N) functions. In the use case of "Enterprise private
line connectivity to multiple Clouds" (see clause 7.2 of ETSI GR F5G 008 [i.1]), the CE is in the user-side client
network and supports the UNI-C functions while the (fg)O-CPE is in provider-side OTN network and supports the
UNI-N functions. The (L)DC GW is in the cloud-side client network and supports the UNI-C functions, while the OTN AggN Edge is in the provider-side OTN and supports the UNI-N functions. See Figure 5.

Figure 5: OTN UNIs and client networks
7.1.2 OCN control interfaces and protocols
The C1 interface is the control plane handover point between two (fg)OTN nodes (including the (fg)O-CPE, the (fg)OTN Edge XC and the (fg)OTN switch in the AggN Edge node) which are interconnected by an OTN link.
In the OCN architecture, the OSP connection control is implemented on the C1 interface, to exchange the signalling
information to control the (fg)OTN connections across the different network segments. The functions of the OSP
connection control shall include:
• Configuration function: Automatic creation, modific
...