Network Functions Virtualisation (NFV); Infrastructure; Compute Domain

DGS/NFV-INF003

General Information

Status: Published
Publication Date: 22-Dec-2014
Current Stage: 12 - Completion
Due Date: 19-Jan-2015
Completion Date: 23-Dec-2014
Ref Project: DGS/NFV-INF003
Standard
ETSI GS NFV-INF 003 V1.1.1 (2014-12) - Network Functions Virtualisation (NFV); Infrastructure; Compute Domain
English language
57 pages

Standards Content (Sample)


GROUP SPECIFICATION
Network Functions Virtualisation (NFV);
Infrastructure;
Compute Domain
Disclaimer
This document has been produced and approved by the Network Functions Virtualisation (NFV) ETSI Industry Specification
Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.


Reference
DGS/NFV-INF003
Keywords
NFV
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - NAF 742 C
Non-profit association registered at the
Sous-Préfecture de Grasse (06) N° 7803/88

Important notice
The present document can be downloaded from:
http://www.etsi.org
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the
print of the Portable Document Format (PDF) version kept on a specific network drive within ETSI Secretariat.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at
http://portal.etsi.org/tb/status/status.asp
If you find errors in the present document, please send your comment to one of the following services:
http://portal.etsi.org/chaircor/ETSI_support.asp
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying
and microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© European Telecommunications Standards Institute 2014.
All rights reserved.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are Trade Marks of ETSI registered for the benefit of its Members.
3GPP™ and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM® and the GSM logo are Trade Marks registered and owned by the GSM Association.
Contents
Intellectual Property Rights . 6
Foreword . 6
Modal verbs terminology . 6
1 Scope . 7
2 References . 7
2.1 Normative references . 7
2.2 Informative references . 8
3 Definitions and abbreviations . 8
3.1 Definitions . 8
3.2 Abbreviations . 8
4 Domain Overview . 10
4.1 Domain Scope . 11
4.2 High Level NFV Framework. 11
4.3 Compute Domain and Inter-Domain Interfaces . 11
4.3.1 Relevant Interfaces . 12
4.4 Relation to NFV Architecture Framework . 12
4.4.1 Elements of the Compute Domain . 14
4.4.1.1 Processor & Accelerator . 15
4.4.1.2 Network Interfaces . 15
4.4.1.3 Storage . 15
4.4.2 Influence of the Disaggregation Model . 15
4.5 Deployment Scenarios . 16
4.5.1 Monolithic Operator . 16
4.5.2 Network Operator Hosting Virtual Network Operators . 17
4.5.3 Hosted Network Operator . 17
4.5.4 Hosted Communications Providers . 17
4.5.5 Hosted Communications and Application Providers . 17
4.5.6 Managed Network Service on Customer Premises . 17
4.5.7 Managed Network Service on Customer Premises Equipment. 17
5 External Interfaces of the Domain . 18
5.1 Physical Network Interfaces . 18
5.1.1 Ethernet . 18
5.1.1.1 Nature of the Interface . 18
5.1.1.2 Specifications in Current Widespread Issue . 18
5.1.1.3 Achieving Interoperability . 18
5.1.2 Fibre-Channel . 18
5.1.2.1 Nature of the Interface . 18
5.1.2.2 Specifications in Current Widespread Use . 19
5.1.2.3 Achieving Interoperability . 19
5.1.3 Infiniband . 19
5.1.3.1 Nature of the Interface . 19
5.1.3.2 Specifications in Current Widespread Use . 19
5.1.3.3 Achieving Interoperability . 19
5.2 Internal & External Domain Interfaces . 19
5.2.1 NFVI to VIM (Nf-Vi) Interface . 19
5.2.2 Nf-Vi/C and [Vl-Ha]/CSr Interfaces . 20
5.2.2.1 Nature of the Interface . 20
5.2.2.2 Specifications in Current Widespread Issue . 20
5.2.2.3 Achieving Interoperability . 20
5.3 Orchestration & Management Interfaces . 20
5.3.1 Managing NFVI Hardware Elements . 21
5.3.1.1 The Rack . 21
5.3.1.2 The Fabric . 21
5.3.1.3 The Top of Rack (TOR) Switch . 22
5.3.1.4 The Power Grid . 22
5.3.1.5 The Rack Shelf Contents . 22
5.3.1.6 Server Chassis . 23
5.3.1.7 Storage Chassis . 24
6 ISG E2E Requirements & Implications over Compute & Storage . 24
6.1 General (E2E requirements) . 25
6.2 Performance (E2E requirements) . 25
6.3 Elasticity (E2E Requirements) . 26
6.4 Resiliency (E2E Requirements) . 26
6.5 Security E2E Requirements . 26
6.6 Service Continuity (E2E Requirements) . 26
6.7 Service Assurance (E2E Requirements) . 26
6.8 Operations & Management (E2E Requirements) . 27
6.8.1 Statistics Collection - Objectives in NFV Framework . 27
6.8.2 Proposed Hierarchy of resources and management software for statistics collection . 28
6.9 Energy Efficiency (E2E Requirements) . 28
6.10 Additional points of interest (ISG documents) . 29
7 Functional Blocks within the Domain . 30
7.1 Definition of the "Compute Node" . 30
7.1.1 CPU Complex Overview . 31
7.2 Network Interface & Accelerators . 31
7.2.1 Network Interface Controller (NIC) . 31
7.2.2 Accelerators . 31
7.2.3 Hardware Acceleration . 34
7.2.3.1 Need for Hardware Acceleration (HWA) Abstraction . 34
7.2.3.2 Need for Dynamic Allocation and Management . 35
7.2.3.3 Hardware Accelerator Example . 35
7.2.4 Heterogeneous Acceleration . 36
7.2.4.1 CPU Complex, NIC, and PCIe-attached Devices Accelerator Examples . 36
7.2.4.2 Examples for functional accelerations . 36
7.2.4.3 Accelerator Abstraction API Examples . 36
7.3 Storage . 37
7.3.1 Storage Building Blocks . 37
7.3.1.1 Hard Disk Drives . 37
7.3.1.2 Solid State Disks . 37
7.3.1.3 Cache Storage . 37
7.3.1.4 Cold Storage . 38
7.3.2 Storage Topologies . 38
7.3.2.1 Block-based Storage. 38
7.3.2.2 File-based Storage . 39
7.3.2.3 Object-based Storage . 39
7.3.2.4 Scale-out Storage . 39
7.3.3 Serviceability . 40
7.3.3.1 Live Migration . 40
7.3.3.1.1 Local Storage . 40
7.3.3.1.2 Remote Storage . 40
7.3.4 Management . 41
7.3.5 Security . 41
7.3.5.1 Auditing Framework . 41
7.3.5.2 Protecting Data at Rest . 41
7.3.5.3 Protecting Data In Flight . 42
7.3.5.4 LUN Mapping and Access Control Lists . 42
7.3.5.5 Authentication . 42
7.3.5.6 Physical Storage Nodes Within the NFVI Node . 43
7.4 Hardware Resource Metrics of the Compute Domain . 43
7.4.1 Static Metrics for VNF Capacity and Capability Characteristics . 43
7.4.2 Dynamic Metrics for VNF . 45
8 Interfaces within the Domain . 46
8.1 PCIe . 46
8.2 SR-IOV . 46
8.3 RDMA and RoCE Support . 46
8.4 InfiniBand. 47
8.5 DPDK & ODP Support . 47
9 Modularity and Scalability . 47
9.1 Modularity and Scalability within the Domain . 47
9.2 NFVI Scale Implications for Compute Domain . 48
9.3 Modularity of NFVI Node Hardware Resources . 49
10 Features of the Domain Affecting Management and Orchestration . 50
10.1 Arbitration of hardware accelerators . 50
10.2 Life Cycle of NFVI Nodes and their Replaceable Units . 50
11 Features of the Domain Affecting Performance . 50
11.1 Matching Resource Requirements and NFVI Compute Server Capabilities . 50
11.2 Performance Related Workload Categories . 51
11.3 Other performance related concepts . 51
11.3.1 Compute Domain Performance Related Features . 51
11.4 QoS metrics . 52
11.5 Performance Monitoring . 52
11.5.1 Conclusion - Features of the Domain Affecting Performance . 52
12 Features of the Domain Affecting Reliability . 52
13 Features of the Domain Affecting Security . 54
13.1 Physical security . 54
13.2 Security devices . 54
13.3 Hardware or Device-based Authentication . 54
13.4 Tunnelling isolation in hypervisor and non-hypervisor container environments . 54
Annex A (informative): Authors & contributors . 55
Annex B (informative): Bibliography . 56
History . 57

Intellectual Property Rights
IPRs essential or potentially essential to the present document may have been declared to ETSI. The information
pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found
in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in
respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web
server (http://ipr.etsi.org).
Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee
can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web
server) which are, or may be, or may become, essential to the present document.
Foreword
This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Network Functions
Virtualisation (NFV).
The present document gives an overview of the series of documents covering the NFV Infrastructure.
Table 1: NFV infrastructure architecture documents

Infrastructure Architecture Document                Document #
Overview                                            GS NFV INF 001
Architecture of the Infrastructure Domains:
  Compute Domain                                    GS NFV INF 003
  Hypervisor Domain                                 GS NFV INF 004
  Infrastructure Network Domain                     GS NFV INF 005
Architectural Methodology:
  Interfaces and Abstraction                        GS NFV INF 007
  Service Quality Metrics                           GS NFV INF 010

Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "may not", "need", "need not", "will",
"will not", "can" and "cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms
for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
1 Scope
The present document presents an architectural description of the compute (& storage) domain of the infrastructure
which supports virtualised network functions (VNFs). The compute domain includes the network & I/O interfaces
required to interface to the infrastructure network and the storage network, if any.
It sets out the scope of the infrastructure domain acknowledging the potential for overlap between infrastructure
domains, and between the infrastructure and the virtualised network functions. It also sets out the nature of interfaces
needed between infrastructure domains and within the compute domain.
The present document does not provide any detailed specification, but makes reference to specifications developed by
other bodies and to potential specifications which, in the opinion of the NFV ISG, could usefully be developed by an
appropriate Standards Developing Organisation (SDO).
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found at
http://docbox.etsi.org/Reference.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are necessary for the application of the present document.
[1] ETSI GS NFV 001 (V1.1.1): "Network Functions Virtualisation (NFV); Use Cases".
[2] ETSI GS NFV 002 (V1.1.1): "Network Functions Virtualisation (NFV); Architectural
Framework".
[3] ETSI GS NFV 003 (V1.1.1): "Network Functions Virtualisation (NFV); Terminology for Main
Concepts in NFV".
[4] ETSI GS NFV 004 (V1.1.1): "Network Functions Virtualisation (NFV); Virtualisation
Requirements".
[5] ETSI GS NFV-INF 001 (V1.1.1): "Network Function Virtualisation (NFV); Infrastructure
Overview".
[6] DMTF DSP 0217: "SMASH Implementation Requirements".
[7] ETSI GS NFV-PER 001 (V1.1.1): "Network Function Virtualisation (NFV); NFV Performance &
Portability Best Practices".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] Master Usage Model: "Compute Infrastructure as a Service, Rev 1", (2012) Open Data Center
Alliance.
NOTE: Available at
http://www.opendatacenteralliance.org/docs/ODCA_Compute_IaaS_MasterUM_v1.0_Nov2012.pdf.
[i.2] IEEE 802.3™: "Ethernet Working Group".
[i.3] ETSI GS NFV-REL 001: "Network Functions Virtualisation (NFV); Resiliency Requirements".
[i.4] IEEE 1588™: "IEEE Standard for a Precision Clock Synchronization Protocol for Networked
Measurement and Control Systems".
3 Definitions and abbreviations
3.1 Definitions
For the purposes of the present document, the following terms and definitions apply:
composite-NFVI: NFVI hardware resources are composed of field replaceable units that are COTS elements
field replaceable unit: unit of hardware resources designed for easy replacement during the operational life of a
network element
gateway node: See ETSI GS NFV-INF 001 [5].
network node: See ETSI GS NFV-INF 001 [5].
NFVI components: NFVI hardware resources that are not field replaceable, but are distinguishable as COTS
components at manufacturing time
NFVI-Plugin: NFVI hardware resources are deployable as a COTS field replaceable unit for another network element
NFVI-Pod: NFVI hardware resources are deployable as a single COTS entity with no field replaceable units
portability: See ETSI GS NFV-INF 001 [5].
storage node: See ETSI GS NFV-INF 001 [5].
3.2 Abbreviations
For the purposes of the present document, the following abbreviations apply:
ACL Access Control List
ACPI Advanced Configuration and Power Interface
AES Advanced Encryption Standard
API Application Programming Interface
ARM Acorn RISC Machine
ARP Address Resolution Protocol
ASIC Application Specific Integrated Circuit
AT&T American Telephone & Telegraph
ATA Advanced Technology Attachment
BBU Base Band Unit
BIOS Basic Input Output System
BRAS Broadband Remote Access Server
BT British Telecom
BW Bandwidth
CBDMA Common buffer DMA
CHD Compute Host Descriptor
CIFS Common Internet File System
CIM Common Information Model
COTS Commercial Off The Shelf
COW Copy-On-Write
CPE Customer Premises Equipment
CPU Central Processing Unit
CRAN Cloud Radio Access Network
CRC Cyclic Redundancy Check
DAS Direct Attached Storage
DCB Data Center Bridging
DCMI Data Center Management Interface
DMA Direct Memory Access
DPDK Data Plane Development Kit
DPI Deep Packet Inspection
DSLAM Digital Subscriber Loop Access Multiplexer
DSP Digital Signal Processing
ECC Error Correction Code
EMS Element Management System
FC Fibre-Channel
FCP Fibre-Channel Protocol
FPGA Field Programmable Gate Array
GPU Graphics Processing Unit
GUI Graphical User Interface
HAL Hardware Abstraction Layer
HDD Hard Disk Drive
HSM Hardware Security Module
HW Hardware
HWA Hardware Acceleration
IB InfiniBand
IO Input Output
IOMMU Input Output Memory Management Unit
IOPS Input Output Operations Per Second
IP Internet Protocol
IPC Inter Process Communication
IPMI Intelligent Platform Management Interface
ISA Instruction Set Architecture
IT Information Technology
KQI Key Quality Indicator
KVM Kernel-based Virtual Machine
LAN Local Area Network
LLC Last Level Cache
LTFS Linear Tape File System
LUKS Linux Unified Key Setup
LVM Logical Volume Manager
MAC Media Access Control
MIB Management Information Base
MMU Memory Management Unit
NAPT Network Address and Port Translation
NAS Network Attached Storage
NAT Network Address Translation
NFS Network File System
NFVI Network Functions Virtualisation Infrastructure
NGFW Next Generation Firewall
NIC Network Interface Card
NPU Network Processor Unit
OCP Open Compute Project
ODP Open Data Plane
OS Operating System
OSPF Open Shortest Path First
PCI Peripheral Component Interconnect
PCI-E Peripheral Component Interconnect Express
PGP Pretty Good Privacy
RAID Redundant Array of Independent Disks
RAN Radio Access Network
RAS Remote Access Server
RDMA Remote Direct Memory Access
RIP Routing Information Protocol
RoCE RDMA over Converged Ethernet
SaaS Software as a Service
SAN Storage Area Network
SAS Serial Attached SCSI
SATA Serial Advanced Technology Attachment
SCSI Small Computer Systems Interface
SDO Standards Developing Organisation
SES SCSI Enclosure Services
SLA Service Level Agreement
SMASH System Management Architecture for Server Hardware
SNMP Simple Network Management Protocol
SR-IOV Single Root Input Output Virtualisation
SSD Solid State Disk
SSH Secure Shell
SSL Secure Socket Layer
SW Software
TLB Translation Look-aside Buffer
TOR Top of Rack
TPM Trusted Platform Module
TX/RX Transmit/Receive
UPS Uninterruptable Power Supply
USB Universal Serial Bus
VF Virtual Function
VIA Virtual Interface Architecture
VIM Virtual Infrastructure Manager
VLAN Virtual Local Area Network
VM Virtual Machine
VMD VM descriptor
VNF Virtual Network Function
VNFC Virtual Network Function Component
VNFCI Virtual Network Function Component Instance
VNFD Virtual Network Function Descriptor
VNFM Virtual Network Function Manager
VPN Virtual Private Network
WAN Wide Area Network
4 Domain Overview
Cloud computing in data centers has been able to abstract the hardware from the software through virtualisation,
reaping benefits from hosting multiple applications, accelerating time to market, and offering over-the-top services.
64-bit multi-core processors with hardware support for virtual machines form the core of the industry-standard server
within the data center. These, coupled with offload and acceleration technologies, form the required server architecture,
and are sometimes further accelerated to meet workload performance. Data center servers may use processors with 10 or
more cores and have multiple sockets per server (e.g. a 4-way server with 10-core processors has 40 physical cores).
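
EXAMPLE (informative): The short Python sketch below is not part of any specification; it merely reproduces the
core-count arithmetic above and shows how the logical CPU count of a host can be queried. The threads-per-core
figure is an assumption made for illustration.

    # Core-count arithmetic for a multi-socket server, plus a query of the
    # logical CPU count of the machine running the script.
    import os

    sockets = 4            # a "4-way" server, as in the example above
    cores_per_socket = 10  # 10 physical cores per processor
    threads_per_core = 2   # assumption: SMT enabled (2 threads per core)

    physical_cores = sockets * cores_per_socket       # 40 physical cores
    logical_cpus = physical_cores * threads_per_core  # 80 schedulable CPUs
    print(f"physical cores: {physical_cores}, logical CPUs: {logical_cpus}")

    # os.cpu_count() reports the logical CPU count of the local host, which
    # a resource manager might compare against a VNF's vCPU requirements.
    print(f"this host reports {os.cpu_count()} logical CPUs")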
NFV is an effort by the operators to leverage cloud computing benefits and enhance the abstraction of the software from
the hardware for the network.
The present document will define the compute domain for NFV and document what is necessary for the compute
domain to meet the NFV requirements.
4.1 Domain Scope
The Compute Domain is one of three domains constituting the NFV Infrastructure (NFVI). The other two domains are
the Hypervisor Domain and the Infrastructure Network Domain.
The compute domain includes the storage and network I/O interfaces; it comprises the generic servers and storage. It
should be noted that the compute domain is closely associated with the orchestration and management domain, which
runs on the NFVI as a set of modular, interconnected virtual machines.
4.2 High Level NFV Framework
The ISG NFV Architectural Framework document discusses three domains: VNFs; NFVI; and NFV Management &
Orchestration. This is shown in figure 1, which depicts the NFVI supporting the execution of the VNFs across the
diversity of physical resources and their virtualised counterparts. The Compute Domain is one of the three domains
constituting the NFVI; the other two are the Hypervisor Domain and the Infrastructure Network Domain.

Figure 1: NFVI consists of three domains: compute (includes storage),
hypervisor, and infrastructure network
4.3 Compute Domain and Inter-Domain Interfaces
Figure 2 [5] is common to all three NFVI domains. It depicts the general domain architecture and associated interfaces.
Of particular interest to the compute domain are interface #11, which connects the compute domain to orchestration
and management, and interface #14, which interconnects it with the infrastructure network.

Figure 2: General domain architecture and associated interfaces
The basic functional blocks of the compute domain are shown in figure 2: processor/accelerator, network interface,
and storage. All functional blocks are managed and orchestrated remotely through interface 11.
4.3.1 Relevant Interfaces
Table 2 lists the interfaces relevant to the compute domain.

Table 2: Relevant interfaces of the compute domain

NFVI Container Interfaces
  #4: The primary interface provided by the infrastructure to host VNFs. The applications may be distributed,
  and the infrastructure provides virtual connectivity which interconnects the distributed components of an
  application.

Infrastructure Interconnect Interfaces
  #8: Virtual MANO Container Interface: the interfaces that allow the VNFs to request different resources of the
  infrastructure, for example to request new infrastructure connectivity services, allocate more compute
  resources, or activate/deactivate other virtual machine components of the application.

Orchestration & Management Interfaces
  #9: Orchestration and management interface with the infrastructure network domain.
  #11: Orchestration and management interface with the compute domain.
  #14: Network interconnect between the compute equipment and the infrastructure network equipment.

Legacy Interconnect Interfaces
  #1: The interface between the VNF and the existing network. This is likely to be higher layers of protocol
  only, as all protocols provided by the infrastructure are transparent to the VNFs.
  #2: Management of VNFs by existing management systems.
  #5: Management of NFV infrastructure by existing management systems.
  #13: The interface between the infrastructure network and the existing network. This is likely to be lower
  layers of protocol only, as all protocols provided by VNFs are transparent to the infrastructure.
4.4 Relation to NFV Architecture Framework
Figure 3 depicts the NFV reference architectural framework [5]. Of interest to the compute domain are:
• VNF-NFVI: this reference point presents the execution environment provided by NFVI to the VNF.
• Vl-Ha: this is the reference point interfacing the virtualisation layer to the hardware resources, including
compute and storage.
ETSI
13 ETSI GS NFV-INF 003 V1.1.1 (2014-12)
• Nf-Vi: this reference point is used to assign virtualised resources in response to resource allocation requests,
and to forward and exchange state information (an illustrative sketch of such a request is given after figure 3).

Figure 3: High-level overview of the NFVI domains and interfaces
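
EXAMPLE (informative): The present document does not define a concrete Nf-Vi message format. As a sketch only,
the Python fragment below illustrates the kind of information a resource allocation request over Nf-Vi, and a
state-bearing response, might carry; all names and fields are assumptions made for illustration.

    # Hypothetical shapes for an Nf-Vi/C resource allocation exchange.
    from dataclasses import dataclass, field

    @dataclass
    class ComputeAllocationRequest:
        vnfc_instance_id: str     # which VNFC the resources are for
        vcpus: int                # virtual CPUs requested
        memory_mib: int           # RAM requested, in MiB
        storage_gib: int = 0      # block storage requested, in GiB
        accelerators: list = field(default_factory=list)  # e.g. ["crypto"]

    @dataclass
    class ComputeAllocationResponse:
        request_ok: bool
        host_id: str = ""         # compute node chosen by the VIM
        state: dict = field(default_factory=dict)  # exchanged state information

    def handle_request(req: ComputeAllocationRequest) -> ComputeAllocationResponse:
        # A real VIM would match the request against the capabilities and
        # current load of its compute nodes; this stub simply accepts it.
        return ComputeAllocationResponse(request_ok=True, host_id="compute-node-01")

    resp = handle_request(ComputeAllocationRequest("vnfc-42", vcpus=4, memory_mib=8192))
    print(resp)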
The characteristics of the NFVI reference points are summarized in table 3.

Table 3: Characteristics of NFVI reference points

Internal reference points:

Vl-Ha / [Vl-Ha]/CSr - Execution Environment (interface 12 in figure 2):
The framework architecture [2] shows a general reference point between the infrastructure 'hardware' and the
virtualisation layer. This reference point is the aspect of that framework reference point presented to
hypervisors by the servers and storage of the compute domain. It is the execution environment of the
server/storage.

Vl-Ha / [Vl-Ha]/Nr - Execution Environment:
The framework architecture [2] shows a general reference point between the infrastructure 'hardware' and the
virtualisation layer. While the infrastructure network has 'hardware', it is often the case that networks are
already layered (and therefore virtualised) and that the exact choice of network layering may vary without a
direct impact on NFV. The infrastructure architecture treats this aspect of the Vl-Ha reference point as internal
to the infrastructure network domain.

Ha/CSr-Ha/Nr - Traffic Interface (interface 14 in figure 2):
This is the reference point between the infrastructure network domain and the servers/storage of the compute
domain.

External reference points:

Nf-Vi / [Nf-Vi]/C - Management and Orchestration Interface (interface 11 in figure 2):
This is the reference point between the management and orchestration agents in the compute domain and the
management and orchestration functions in the virtual infrastructure manager (VIM). It is the part of the Nf-Vi
interface relevant to the compute domain.
4.4.1 Elements of the Compute Domain
The compute domain consists of a server, network interface controller (NIC), accelerator, storage, rack, and any
associated components within the rack, including the physical aspects of a networking switch and all other physical
components within the NFVI. For example, a blade server may include a built-in Ethernet switch (see figure 4).

Figure 4: Functional elements of the Compute domain - Chassis can also be a Rack
4.4.1.1 Processor & Accelerator
General purpose compute architectures considered in the compute domain are ARM and x86. These two architectures
are used as examples. They do not preclude the use of other instruction set architectures. In addition, accelerator
functions for security, networking, and packet processing are also included.
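
EXAMPLE (informative): As a minimal sketch, the following Python fragment identifies the instruction set
architecture of the local host using only the standard library; the mapping of machine strings to ISA families is
an illustrative assumption, not something the present document defines.

    # Identify the host ISA; the compute domain admits ARM, x86 and others.
    import platform

    machine = platform.machine()  # e.g. "x86_64", "aarch64", "armv7l"
    if machine in ("x86_64", "AMD64", "i386", "i686"):
        isa_family = "x86"
    elif machine.startswith(("arm", "aarch")):
        isa_family = "ARM"
    else:
        isa_family = machine or "unknown"  # other ISAs are not precluded
    print(f"reported machine type: {machine!r} -> ISA family: {isa_family}")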
4.4.1.2 Network Interfaces
The network interface may either be a Network Interface Card (NIC), which connects to the processor via PCIe, or a
network interface capability resident onboard the server. The NIC may optionally contain offload and/or
acceleration functionality.
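
EXAMPLE (informative): As a hedged sketch, assuming a Linux host, the fragment below distinguishes PCIe-attached
network interfaces (typically discrete NICs) from onboard or virtual interfaces by resolving the device symlink
the kernel exposes under /sys/class/net.

    # Classify network interfaces by their backing device (Linux only).
    import os

    SYS_NET = "/sys/class/net"

    for ifname in sorted(os.listdir(SYS_NET)):
        dev_link = os.path.join(SYS_NET, ifname, "device")
        if os.path.exists(dev_link):
            real = os.path.realpath(dev_link)
            # PCI/PCIe devices resolve into the PCI hierarchy, e.g.
            # /sys/devices/pci0000:00/0000:00:1f.6
            kind = "PCI/PCIe-attached" if "/pci" in real else "platform/onboard"
            print(f"{ifname}: {kind} ({real})")
        else:
            print(f"{ifname}: virtual interface (no backing device)")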
4.4.1.3 Storage
Storage addresses large-scale storage and non-volatile storage, such as hard disks and solid-state disks (SSDs),
including PCIe flash cards.
4.4.2 Influence of the Disaggregation Model
This clause discusses the disaggregated form factor promoted by the Open Compute Project (OCP). In this form factor,
the CPU blades/chassis are separated from the NIC/accelerator blades/chassis and the storage blades/chassis.
Interconnection between the blades/chassis could be over optical fibre.
Separating critical components from one another allows each resource to be upgraded or scaled independently. This
increases the lifespan of each resource and enables IT managers to replace just that resource rather than the entire
system. There are also thermal efficiency opportunities in allowing more optimal component placement within a rack
and in relocating power supplies from the individual chassis to the rack itself.
By allowing for the disaggregation of hardware accelerators, OCP potentially provides for the sharing of accelerator
resources across many computing or networking blades, thus providing improved amortization of traditionally
expensive components.
Figure 5: Illustration of a typical Open Compute Project's Disaggregation model
(Storage not included)
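
EXAMPLE (informative): The amortization benefit is easily quantified. The figures below are invented purely for
illustration; the present document states only the qualitative benefit.

    # Back-of-envelope amortization of a pooled accelerator across blades.
    accelerator_cost = 5000.0  # assumed cost of one accelerator card
    blades = 16                # assumed number of blades sharing the pool

    dedicated = accelerator_cost * blades  # one card per blade
    pooled = accelerator_cost              # one shared, disaggregated card
    print(f"dedicated: {dedicated:.0f}, pooled: {pooled:.0f} "
          f"({pooled / dedicated:.1%} of the dedicated cost)")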
It should be noted that a single Compute Platform can support multiple virtual appliances, as seen in figure 6.

Figure 6: Compute platform can support multiple virtual appliances
...
