Network Functions Virtualisation (NFV); Infrastructure; Hypervisor Domain

DGS/NFV-INF004

General Information

Status: Published
Publication Date: 06-Jan-2015
Current Stage: 12 - Completion
Due Date: 19-Jan-2015
Completion Date: 07-Jan-2015
Ref Project
Standard
ETSI GS NFV-INF 004 V1.1.1 (2015-01) - Network Functions Virtualisation (NFV); Infrastructure; Hypervisor Domain
English language
49 pages

Standards Content (Sample)


GROUP SPECIFICATION
Network Functions Virtualisation (NFV);
Infrastructure;
Hypervisor Domain
Disclaimer
This document has been produced and approved by the Network Functions Virtualisation (NFV) ETSI Industry Specification
Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.


Reference
DGS/NFV-INF004
Keywords
interface, NFV
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° 7803/88

Important notice
The present document can be downloaded from:
http://www.etsi.org
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the
print of the Portable Document Format (PDF) version kept on a specific network drive within ETSI Secretariat.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at
http://portal.etsi.org/tb/status/status.asp
If you find errors in the present document, please send your comment to one of the following services:
http://portal.etsi.org/chaircor/ETSI_support.asp
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying
and microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© European Telecommunications Standards Institute 2015.
All rights reserved.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are Trade Marks of ETSI registered for the benefit of its Members.
3GPP™ and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM® and the GSM logo are Trade Marks registered and owned by the GSM Association.
Contents
Intellectual Property Rights . 5
Foreword . 5
Modal verbs terminology . 5
1 Scope . 6
2 References . 6
2.1 Normative references . 6
2.2 Informative references . 6
3 Definitions and abbreviations . 7
3.1 Definitions . 7
3.2 Abbreviations . 7
4 Domain Overview . 9
5 External Interfaces of the Domain . 14
5.1 Overview . 15
5.2 Hypervisor to VIM (Nf-Vi-H) Interface . 16
5.2.1 Nature of the Interface . 16
5.2.1.1 Example of MIB information . 16
5.2.2 Specifications in Current Widespread Issue . 19
5.2.2.1 Metrics for VNF performance characteristics . 19
5.2.3 Achieving Interoperability . 20
5.2.4 Recommendations . 20
6 Architecture and Functional Blocks of the Hypervisor Domain . 20
7 Requirements for the Hypervisor Domain . 22
7.1 General . 22
7.2 Portability . 22
7.3 Elasticity/Scaling . 24
7.4 Resiliency . 25
7.5 Security . 26
7.6 Service Continuity . 27
7.7 Operational and Management requirements . 28
7.8 Energy Efficiency requirements . 29
7.9 Guest RunTime Environment . 30
7.10 Coexistence with existing networks - Migration . 31
8 Service Models . 31
8.1 General . 31
8.2 Deployment models . 31
8.3 Service models . 32
Annex A (informative): Informative Reference: VIM Categories . 34
A.1 Logging for SLA, debugging . 35
A.2 Host & VM configuration /Lifecycle for a VM from a VIM perspective . 35
A.3 Resources & VM inventory Management . 37
A.4 Events, Alarms & Automated Actions (Data Provided by the INF to the VIM). 37
A.5 Utilities (Data is either provided by INF hypervisor or queried from the VIM via the Nf-Vi) . 38
A.6 CPU, Memory Data . 38
A.7 NETWORK/Connectivity . 43
Annex B (informative): Informative reference . 45
B.1 Future direction . 46
B.1.1 Improving vSwitch Performance . 46
B.1.2 Addressing Limitations of SR-IOV . 46
Annex C (informative): Authors & contributors . 48
History . 49

Intellectual Property Rights
IPRs essential or potentially essential to the present document may have been declared to ETSI. The information
pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found
in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in
respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web
server (http://ipr.etsi.org).
Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee
can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web
server) which are, or may be, or may become, essential to the present document.
Foreword
This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Network Functions
Virtualisation (NFV).
Infrastructure Architecture Documents Document #
Overview GS NFV INF 001
Architecture of Compute Domain GS NFV INF 003
Architecture of Hypervisor Domain GS NFV INF 004
Architecture of Infrastructure Network Domain GS NFV INF 005
Scalability GS NFV INF 006
Interfaces and Abstraction GS NFV INF 007
Test Access GS NFV INF 009
Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "may not", "need", "need not", "will",
"will not", "can" and "cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms
for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
1 Scope
The present document presents the architecture of the Hypervisor Domain of the NFV Infrastructure, which supports
deployment and execution of virtual appliances. Due to time and resource constraints, the present document primarily
focuses on the use of hypervisors for virtualisation. However, the hypervisor requirements are similar, if not the same,
for implementing Linux containers or other methods of virtualisation.
NOTE: From WikiArch: "Linux Containers (LXC) are an operating system-level virtualisation method for
running multiple isolated server installs (containers) on a single control host. LXC does not provide a
virtual machine, but rather provides a virtual environment that has its own process and network space. It
is similar to a chroot, but offers much more isolation".
There needs to be further research with respect to Linux Containers, including developing the ecosystem.
As well as presenting a general overview description of the NFV Infrastructure, the present document sets the NFV
infrastructure and all the documents which describe it in the context of all the documents of the NFV. It also describes
how the documents which describe the NFV infrastructure relate to each other.
The present document does not provide any detailed specification but makes reference to specifications developed by
other bodies and to potential specifications, which, in the opinion of the NFV ISG could be usefully developed by an
appropriate Standards Developing Organisation (SDO).
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found at
http://docbox.etsi.org/Reference.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are necessary for the application of the present document.
[1] ETSI GS NFV-INF 001 (V1.1.1): "Network Functions Virtualisation (NFV); Infrastructure
Overview".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI GS NFV 004: "Network Functions Virtualisation (NFV); Virtualisation Requirements".
[i.2] IETF RFC 4133: "Entity MIB (Version 3)".
[i.3] IEEE 802.1D™: "IEEE Standard for Local and Metropolitan Area Networks -- Media access
control (MAC) Bridges".
[i.4] IEEE 802.1Q™ MIB: "IEEE Standard for Local and Metropolitan Area Networks, Management
Information Base".
[i.5] IETF draft-ietf-opsawg-vmm-mib-00: "Management Information Base for Virtual Machines
Controlled by a Hypervisor".
NOTE: Available at http://tools.ietf.org/html/draft-ietf-opsawg-vmm-mib-00.
[i.6] IEEE 1588™: "Standard for a Precision Clock Synchronization Protocol for Networked
Measurement and Control Systems".
[i.7] IEEE 802.11™: "Wireless LANs: IEEE Standard for Information technology -
Telecommunications and information exchange between systems - Local and metropolitan area
networks - Specific requirements - Part 11: Wireless LAN".
[i.8] IEEE 802.3ad™: "Link Aggregation".
[i.9] IEEE 802.3™ MIB: "Link Aggregation, Management Information Base".
[i.10] Hotlink: http://www.virtualizationpractice.com/hotlink-supervisor-vcenter-for-hyper-v-kvm-and-
xenserver-15369/.
[i.11] Systems Management Architecture for Server Hardware (SMASH).
NOTE: Available at http://www.dmtf.org/standards/smash.
[i.12] NFVINF(13)VM_019_Data_plane_performance.
[i.13] http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc%2FGUID-E8E8D7B2-FE67-4B4F-921F-C3D6D7223869.html
[i.14] http://msdn.microsoft.com/en-gb/library/windows/hardware/hh440249(v=vs.85).aspx
[i.15] http://www.vcritical.com/2013/01/sr-iov-and-vmware-vmotion/
[i.16] ETSI GS NFV-INF 010: "Network Functions Virtualisation (NFV); Service Quality Metrics".
3 Definitions and abbreviations
3.1 Definitions
For the purposes of the present document, the following terms and definitions apply:
application VMs: VM not utilizing an OS
hypervisor: virtualisation environment running on a host
NOTE: The virtualisation environment includes the tools, BIOS, firmware, Operating Systems (OS) and drivers.
portability: See ETSI GS NFV-INF 001 [1].
standard: a de-jure, de-facto or open standard that fulfils the requirement
NOTE: This assumption should be applied to all requirements through the entire document.
3.2 Abbreviations
For the purposes of the present document, the following abbreviations apply:
API Application Programming Interface
BIOS Basic Input Output System
BSD Berkeley Software Distribution
CIM Common Information Model
CLI Command Line Interface
CMS Call Management System
CPU Central Processing Unit
DMA Direct Memory Access
DPDK Data Plane Development Kit
EE Electrical Engineering
FFS For Further Specification
GUI Graphical User Interface
HA High Availability
IETF Internet Engineering Task Force
INF Infrastructure
IO Input Output
IP Internet Protocol
IPMI Intelligent Platform Management Interface
JVM Java Virtual Machine
KVM Kernel Virtual Machine
LAN Local Area Networks
LLC Lower Level Cache
LLDP Link Layer Discovery Protocol
LXC Linux Containers
MAC Media Access Controller
MANO Management and Orchestration
MIB Management Information Base
NF Network Function
NFVi Network Function Virtualisation Infrastructure
NFVINF Network Function Virtualisation Infrastructure
NIC Network Interface Card
NOC Network Operator Council
NUMA Non Uniform Memory Access
OAM Operations and Maintenance
OS Operating System
OSS Operations Systems and Software
OVF Open Virtualisation Format
PAE Physical Address Extension
PCI Peripheral Component Interconnect
RAID Redundant Array of Independent Disks
RAM Random Access Memory
RAS Row Address Strobe
RELAV Reliability and Resiliency Work Group
RFC Request For Comment
SCSI Small Computer System Interface
SDO Standards Development Organizations
SEC Security Working Group
SLA Service Level Agreement
SNMP Simple Network Management Protocol
SR-IOV Single Root I/O Virtualisation
SWA Software Architecture Work group
TCP Transmission Control Protocol
TLB Translation Lookaside Buffer
UDP User Datagram Protocol
UUID Universally Unique Identifier
VIM Virtualisation Infrastructure Manager
VM Virtual Machine
VN Virtual network
VNF Virtual Network Function
VNFC Virtual Network Function Component
VNFI Virtual Network Function Interface
VSCSI Virtual Small Computer System Interface
vSwitch virtual Switch
VT Virtualisation
WG Working Group
XAPI eXtended Application Programming Interface
4 Domain Overview
Popek and Goldberg's 1974 paper 'Formal Requirements for Virtualizable Third Generation Architectures' set out the
defining requirements for hypervisors:
• Equivalence: the hypervisor provides an environment for programs which is essentially identical to the original
machine.
• Resource control: the hypervisor is in complete control of system resources.
• Efficiency: programs run on this (virtualised) environment show at worst only minor decreases in speed.
Equivalence
The environment provided by a hypervisor is functionally equivalent to the original machine environment. This implies
that the same operating systems, tools and application software can be used in the virtual environment. This does not
preclude para-virtualisation and other optimization techniques which may require operating systems, tools and
application changes.
Resource Control
The hypervisor domain mediates the resources of the compute domain to the virtual machines of the software
appliances. Hypervisors as developed for public and enterprise cloud requirements place great value on the abstraction
they provide from the actual hardware such that they can achieve very high levels of portability of virtual machines.
In essence, the hypervisor can emulate every piece of the hardware platform, in some cases even completely emulating
a CPU instruction set such that the VM believes it is running on a completely different CPU architecture from the actual
CPU on which it is running. Such emulation, however, has a significant performance cost: the number of actual CPU
cycles needed to emulate a virtual CPU cycle can be large.
Efficiency
Even when not emulating a complete hardware architecture, there can still be aspects of emulation which cause a
significant performance hit. Typically, computer architectures provide means to offload these aspects to hardware as
so-called virtualisation extensions. The set of operations that are offloaded, and how they are offloaded, varies between
different hardware architectures and hypervisors as innovation improves virtualisation performance.
EXAMPLE: Intel VT and ARM virtualisation extensions minimise the performance impact of virtualisation by
offloading to hardware certain frequently performed operations.
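As an informative illustration, not part of the present document, the following minimal Python sketch checks whether an x86 Linux host advertises such hardware virtualisation extensions by reading the CPU feature flags; "vmx" (Intel VT-x) and "svm" (AMD-V) are the standard flag names in /proc/cpuinfo, and other architectures expose this capability differently:

# Informative sketch: detect x86 hardware virtualisation extensions on a
# Linux host from the CPU feature flags published in /proc/cpuinfo.
def virtualisation_extensions():
    flags = set()
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print(virtualisation_extensions() or "no x86 virtualisation extensions detected")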
There can be many virtual machines running on the same host machine. The VMs on the same host may want to
communicate between each other and there will be a need to switch between the VMs.
Infrastructure Domains
Figure 1 illustrates the four domains of the NFV architecture, their relationship with each other and their relationship to
other domains outside the infrastructure. The figure also sets out the primary interfaces. For the present document, the
hypervisor entails the tools, kernel and host.
[Figure: the NFV architectural framework with the INF WG domains overlaid, showing OSS/BSS, EMS and VNF instances, NFV Management and Orchestration (Orchestrator, VNF Manager(s) and Virtualised Infrastructure Manager(s)), and the NFVI (virtual computing, virtual storage and virtual network over the virtualisation layer/hypervisor and the computing, storage and network hardware), together with the compute, hypervisor and infrastructure networking domains and the reference points Os-Ma, Se-Ma, Or-Vnfm, Ve-Vnfm, Or-Vi, Vi-Vnfm, Vn-Nf, Nf-Vi and Vl-Ha.]
Figure 1: General Domain Architecture and Associated Interfaces
The NFV Infrastructure (NFVI) architecture is primarily concerned with describing the Compute, Hypervisor and
Infrastructure domains, and their associated interfaces.
The present document is primarily focused on describing the hypervisor domain, which comprises the hypervisor, which:
• provides sufficient abstraction of the hardware to provide portability of software appliances;
• allocates the compute domain resources to the software appliance virtual machines;
• provides a management interface to the orchestration and management system which allows for the loading
and monitoring of virtual machines (see the sketch after this list).
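As an informative illustration of such a management interface, the following minimal Python sketch loads and then monitors a VM through the libvirt API, one widely used hypervisor management binding; the connection URI and the (minimal) domain XML are illustrative assumptions, not values defined by the present document:

# Informative sketch: loading and monitoring a VM through a hypervisor
# management API, here the libvirt Python binding against a local KVM/QEMU host.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>example-vnfc</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")              # connect to the hypervisor
dom = conn.createXML(DOMAIN_XML, 0)                # load (define and start) the VM
state, max_mem, mem, vcpus, cpu_time = dom.info()  # basic monitoring data
print(f"{dom.name()}: state={state} vcpus={vcpus} cpu_time={cpu_time} ns")
conn.close()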
Figure 2 depicts the NFV reference architectural framework. Table 1 gives description and definition to the interfaces.

Figure 2: NFV Reference Architectural Framework
Table 1: INF reference points (INF context, NFV framework reference point, INF reference point, reference point type, description and comment)

External reference points:
• Vn-Nf / [Vn-Nf]/VM - Execution Environment: This reference point is the virtual machine (VM) container interface which is the execution environment of a single VNFC instance.
• Vn-Nf / [Vn-Nf]/VN - Execution Environment: This reference point is the virtual network (VN) container interface (e.g. an E-Line or E-LAN) which carries communication between VNFC instances. Note that a single VN can support communication between more than a single pairing of VNFC instances (e.g. an E-LAN VN).
• Nf-Vi / [Nf-Vi]/N - Management and Orchestration Interface: This is the reference point between the management and orchestration agents in the infrastructure network domain and the management and orchestration functions in the virtual infrastructure management (VIM). It is the part of the Nf-Vi interface relevant to the infrastructure network domain.
• Nf-Vi / [Nf-Vi]/H - Management and Orchestration Interface: This is the reference point between the management and orchestration agents in the hypervisor domain and the management and orchestration functions in the virtual infrastructure management (VIM). It is the part of the Nf-Vi interface relevant to the hypervisor domain.
• Nf-Vi / [Nf-Vi]/C - Management and Orchestration Interface: This is the reference point between the management and orchestration agents in the compute domain and the management and orchestration functions in the virtual infrastructure management (VIM). It is the part of the Nf-Vi interface relevant to the compute domain.
• Vi-Vnfm - Management Interface: This is the reference point that allows the VNF Manager to request and/or for the VIM to report the characteristics, availability, and status of infrastructure resources.
• Or-Vi - Orchestration Interface: This is the reference point that allows the Orchestrator to request resources and VNF instantiations and for the VIM to report the characteristics, availability, and status of infrastructure resources.
• Ex-Nf - Traffic Interface: This is the reference point between the infrastructure network domain and any existing and/or non-virtualised network. This reference point also carries an implicit reference point between VNFs and any existing and/or non-virtualised network.

Internal reference points:
• Vl-Ha / [Vl-Ha]/CSr - Execution Environment: The framework architecture (see figure 2, NFV Reference Architectural Framework) shows a general reference point between the infrastructure 'hardware' and the virtualisation layer. This reference point is the aspect of this framework reference point presented to hypervisors by the servers and storage of the compute domain. It is the execution environment of the server/storage.
• Vl-Ha / [Vl-Ha]/Nr - Execution Environment: The framework architecture (see figure 2, NFV Reference Architectural Framework) shows a general reference point between the infrastructure 'hardware' and the virtualisation layer. While the infrastructure network has 'hardware', it is often the case that networks are already layered (and therefore virtualised) and that the exact choice of network layering may vary without a direct impact on NFV. The infrastructure architecture treats this aspect of the Vi-Ha reference point as internal to the infrastructure network domain.
• Ha/CSr-Ha/Nr - Traffic Interface: This is the reference point between the infrastructure network domain and the servers/storage of the compute domain.
The present document focuses on the hypervisor domain. Figures 3 to 5 depict how the hypervisor domain interworks
within the Infrastructure (INF).
The general architecture of a cloud hypervisor is shown in figure 3.
[Figure: VMs running on a hypervisor which provides virtual machine management and API (exposed to the VIM over Nf-Vi-H), sequential thread emulation, instruction policing/mapping and emulation, and a virtual switch (vSwitch), hosted on a compute node with its NIC and CPU cores.]
Figure 3
The architecture of the Hypervisor Domain is shown in figure 4.
The hypervisor domain itself is a software environment which abstracts hardware and implements services such as
starting a VM, terminating a VM, acting on policies, scaling, live migration and high availability. These services are
not instigated without the Virtualisation Infrastructure Manager (VIM) knowing or instructing the hypervisor domain.
Primary interfaces of the hypervisor domain:
The Nf-Vi interface is the interface to the VIM. This is where requests for hypervisor services occur. Only the VIM
or MANO shall interact with the hypervisor through this interface. Hypervisors shall not implement services autonomously
unless within the context of the VIM applied policy.
The Vi-Ha interface is the interface through which the hypervisor pulls hardware information and creates virtual hardware
components which the virtual machine utilizes.
The Vn-Nf is the VM to VNF logical interface. A VNF is essentially created via one or more virtual machines. A
virtual machine is in essence software running a function, algorithm or application without being aware of the type, model
or number of actual physical units 'underneath' that function, algorithm and/or application.
[Figure: the hypervisor domain. The hypervisor sits between the VNF (via the Vn-Nf interface to its virtual machines and guest OS**), the VIM (via Nf-Vi through the hypervisor management agent) and the hardware (via Vi-Ha through the physical NIC and HBA drivers). It provides virtual compute services (CPU scheduler, memory manager, virtual CPU, virtual memory), virtual storage (virtual HBA, virtual disk), virtual network (virtual NIC, vSwitch*), and services such as high availability and live migration.]
NOTE *: vSwitch is an implementation option for the Encapsulation Switch NF defined in the Infrastructure Networking Domain.
NOTE **: Guest OS includes VMs running industry standard OSes like Linux, as well as Application VMs.
Figure 4
An Application VM is a VM not utilizing an OS.
Server hardware may provide a number of capabilities that enhance virtual machine (VM) performance, including:
• multicore processors supporting multiple independent parallel threads of execution;
• specific CPU enhancements/instructions to control memory allocation and direct access by I/O devices to VM
memory allocations;
• PCI-e bus enhancements, notably Single Root I/O Virtualisation (SR-IOV) (see the detection sketch after this list).
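The following minimal Python sketch (informative) checks whether the NICs of a Linux host expose SR-IOV virtual functions, using the standard sysfs attributes sriov_totalvfs and sriov_numvfs that the kernel publishes for SR-IOV capable PCI devices:

# Informative sketch: list the SR-IOV capability of the NICs on a Linux host.
from pathlib import Path

for nic in sorted(Path("/sys/class/net").iterdir()):
    total = nic / "device" / "sriov_totalvfs"
    used = nic / "device" / "sriov_numvfs"
    if total.exists():
        print(f"{nic.name}: SR-IOV capable, "
              f"{used.read_text().strip()}/{total.read_text().strip()} VFs configured")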
Hypervisor support for high performance NFV VMs includes:
• exclusive allocation of whole CPU cores to VMs (see the pinning sketch after this list);
• direct memory mapped polled drivers for VMs to directly access the physical NICs using user mode
instructions requiring no 'context switching';
• direct memory mapped polled drivers for inter-VM communications, again using user mode instructions
requiring no 'context switching';
• vSwitch implementation as a high performance VM, again using direct memory mapping and user mode
instructions requiring no 'context switching'.
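As an informative sketch of the first item, exclusive allocation of CPU cores can be expressed as vCPU pinning through a hypervisor API; the example below uses the libvirt Python binding, and the domain name and core number are illustrative assumptions, not values defined by the present document:

# Informative sketch: pin a VM's vCPU to a dedicated physical core at runtime
# via the libvirt Python binding. In practice the pinning is often declared in
# the domain XML (<cputune>/<vcpupin>) instead of being applied at runtime.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vnfc")            # illustrative domain name

host_cpus = conn.getInfo()[2]                      # number of physical CPUs on the host
def cpumap(core):                                  # boolean map selecting a single core
    return tuple(i == core for i in range(host_cpus))

dom.pinVcpu(0, cpumap(2))                          # pin vCPU 0 to physical core 2
conn.close()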
The resulting hypervisor architecture is one of the primary foundations of the NFV infrastructure. The NFV hypervisor
architecture is shown in figure 5.
[Figure: the NFV hypervisor architecture: VMs on a hypervisor providing virtual machine management and API (Nf-Vi-H), sequential thread emulation, instruction policing/mapping and emulation, and a virtual switch (vSwitch), on a compute node with a NIC and multiple CPU cores.]
Figure 5
Figure 5 shows the NFV hypervisor architecture, which largely defines the overall NFV architecture. It is one of a
relatively small number of critical components which enable the objectives of NFV to be met. A possible interim and
non-preferred alternative, which can still provide some of the benefits of NFV, is to accept a significantly reduced
modularity of hardware resource, the module being the server itself, and then dedicate a whole server to a software appliance
module. This would also rely on the server to provide the means of remote install and management. While most servers
do provide such an interface, these interfaces are proprietary and are not the primary focus of orchestration and management
systems. However, such an arrangement could still offer significant advantages over custom hardware and also provide
a reasonable migration path to the full NFV architecture with NFV hypervisors.
5 External Interfaces of the Domain
This clause is a catalogue of the external interfaces of the domain. Given that the purpose and definition of the domains
include representing practical boundaries of supply, these are the key interfaces and the ones which need to achieve
interoperability of one form or another.
The list of interfaces follows directly from the first picture in the domain overview.
Given that these are the key interfaces, each follows a common list of sub-sections:
• Nature of the Interface - given the breadth of convergence the NFVI Hypervisor Domain WG is dealing with,
there is a very wide diversity of interface types and of ways in which interfaces are specified. This sub-section
clarifies the nature of the interface, which may be obvious to those from the immediate industry sector but may
well not be obvious to many others involved with NFV.
• Specifications in Current Widespread Use - a list of interfaces in actual use, which is likely to contain
interfaces that have an at least partly proprietary nature.
• Achieving Interoperability - the practical means used/needed to achieve interoperability.
• Recommendations - this could be recommending specific existing standards, recommending changes to
existing standards, recommending new standards work, recommending that pragmatic bi-lateral development
to interoperate actual implementations is good enough, etc. In the hypervisor document the interfaces are
described in the overview clause of the document.

Figure 6
5.1 Overview
The VI-HA-CSr is the interface between the hypervisor and the compute domain. This interface serves the purpose of
hypervisor control of the hardware. It enables the abstraction of the hardware, BIOS, drivers, I/O
(NICs), accelerators and memory.
For the creation of the information model, a specification is in place in the IETF [i.5]: http://tools.ietf.org/html/draft-ietf-opsawg-vmm-mib-00.
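The following minimal Python sketch (informative) shows how a VIM-side collector could retrieve such information-model data over SNMP using pysnmp; the agent address, community string and OID subtree are placeholders, since the object identifiers assigned by [i.5] are not reproduced here:

# Informative sketch: walk an SNMP subtree on a hypervisor host to retrieve
# information-model data such as the VM MIB of [i.5]. All values below are
# placeholders, not defined by the present document or by [i.5].
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

AGENT = ("hypervisor.example.net", 161)   # placeholder hypervisor SNMP agent
SUBTREE = "1.3.6.1.2.1"                   # placeholder: substitute the VM MIB subtree

for err_indication, err_status, _, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData("public"),          # placeholder community string
        UdpTransportTarget(AGENT),
        ContextData(),
        ObjectType(ObjectIdentity(SUBTREE)),
        lexicographicMode=False):         # stay inside the requested subtree
    if err_indication or err_status:
        break
    for oid, value in var_binds:
        print(oid.prettyPrint(), "=", value.prettyPrint())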
Ext/H.1: The hypervisor shall gather all relevant metrics from the compute domain and provide data to the VIM.
The VI-HA-Nr is the interface between the hypervisor and the network domain.
Ext/H.2: The hypervisor shall gather all relevant metrics from the networking domain and provide data to the VIM.
Ext/H.3: The hypervisor shall abstract the hardware and create virtual hardware, and provide an appropriate level of
separation between VMs and VNFs.
5.2 Hypervisor to VIM (Nf-Vi-H) Interface
The Nf-Vi-H is the interface between the hypervisor and the Virtualisation Infrastructure Manager (VIM). This
interface serves multiple purposes; each is described below:
1) The hypervisor sends monitoring information about the underlying infrastructure to the VIM. This is currently
done through various vendor specific packages, and it is a requirement of the VIM to utilize the current vendor
specific packages. There may be a gap in this interface with respect to a common standard API that would allow
the VIM to access various different hypervisor schemes and extend the requirements of the interface; a common
standard hypervisor monitoring API has yet to be defined and represents a gap.
There are software packages available to implement across different hypervisors. However, research and input
from across the NFV working groups are needed on the gaps with regard to a standard API, what information is
transferred (are all the metrics covered) and how the information is transferred (CIM, SNMP, etc.). A monitoring
sketch is given after this list.
2) The VIM is the sole hypervisor controller. All necessary commands, configurations, alerts, policies, responses
and updates go through this interface.
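As an informative sketch of purpose (1), the following Python example shows a hypervisor-side agent gathering per-VM monitoring data and reporting it northbound to the VIM over Nf-Vi-H. The libvirt calls are standard; the VIM endpoint URL and the JSON payload layout are hypothetical assumptions, since no common API is defined yet:

# Informative sketch: collect per-VM monitoring data via libvirt and push it
# to a (hypothetical) VIM collector endpoint.
import json
import urllib.request
import libvirt

VIM_ENDPOINT = "http://vim.example.net/nf-vi-h/metrics"    # hypothetical endpoint

conn = libvirt.open("qemu:///system")                      # local hypervisor
report = []
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    report.append({"vm": dom.name(), "state": state, "vcpus": vcpus,
                   "memory_kib": mem_kib, "cpu_time_ns": cpu_time_ns})
conn.close()

req = urllib.request.Request(VIM_ENDPOINT, data=json.dumps(report).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)                                # push the report to the VIM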
5.2.1 Nature of the Interface
The nature of this interface is an informational model. There are informational models supporting the data
communication between the virtualisation layer and the virtualisation infrastructure manager (VIM) in deployment
today. VMware, Citrix, Red Hat, Wind River Systems, Debian and CentOS all have a VIM. OpenStack could potentially
be used as the framework of a VIM that would utilize any VIM through a standard method.
Currently there are software products on the market that interoperate with the various VIMs in the marketplace.
As an example, one such product is HotLink: http://www.virtualizationpractice.com/hotlink-supervisor-vcenter-for-
hyper-v-kvm-and-xenserver-15369/.
It is recommended that there be a standard, or standard open source, VIM able to interwork with the multiple
commercial VIMs in deployment today, in order not to re-write or re-create current VIMs.
Below is a starting point for discussion with regard to virtualisation, BIOS, hypervisors, firmware, networking and
hardware.
Ultimately there are levels of metrics, from high level KQIs (how long does it take to start up the system, how long
does it take to delete a system, etc.) all the way down to the hardware or compute domain. Alternatively, starting from the
compute domain, hardware capabilities are exposed into the software domain via registers, BIOS, OS, IPMI and drivers,
up through the virtualisation domain, which runs algorithms whose results are sent to the VIM for further calculation and
from the VIM to the NFV Orchestrator for additional calculation to arrive at the evaluation of KQIs. Information models
are used along the way to gather and communicate these metrics, results and performance.
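As an informative sketch of that aggregation chain, the example below reduces raw per-VM instantiation timestamps, as a VIM might collect them from the hypervisor, to a high-level KQI such as "mean time to start up a VM"; the sample data is illustrative only:

# Informative sketch: derive a high-level KQI from low-level collected metrics.
from statistics import mean

# (vm_id, request_timestamp_s, running_timestamp_s) as collected by the VIM
instantiation_events = [
    ("vm-01", 100.0, 108.4),
    ("vm-02", 200.0, 206.9),
    ("vm-03", 300.0, 311.2),
]

startup_times = [running - requested for _, requested, running in instantiation_events]
print(f"Mean VM start-up time: {mean(startup_times):.1f} s "
      f"(worst case {max(startup_times):.1f} s)")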
The next clause gives examples of data contained in an information model, followed by a clause covering what the
hardware provides to software in order for the information model to get the data into features designed to calculate
SLA performance criteria and requirements.
Neither of these two following clauses is anywhere close to exhaustive; hence the need for an NFV work item to research
what is required above and beyond what is available today.
5.2.1.1 Example of MIB information
There are over 1 000 parameters/variables/metrics in use in Virtualisation/Cloud Service Assurance MIBs.
Hypervisors and cloud programs use IDs, BIOS, IPMI, PCI, I/O adapters/drivers, memory subsystems, etc. to get to all
of the items of current concern, including information for failover, live migration and placement of the VMs/applications.
It is not clear that there is a gap at this point. The hypervisor team has indicated that the exposure of the hardware
capabilities through what is in the documentation today (ID information, IPMI, drivers, BIOS, PCI, memory subsystems, etc.)
has not exposed any gaps. The hypervisor team is looking forward to working with the Metrics WI, SWA, SEC and MANO for
any gaps or requirements beyond what is available today.
The NFVINF hypervisor domain VIM is expected to leverage available managers such as CloudStack, vCenter and
OpenStack as packages. There are software packages available today that implement this scheme, e.g. HotLink
SuperVisor [i.10]:
http://www.virtualizationpractice.com/hotlink-supervisor-vcenter-for-hyper-v-kvm-and-xenserver-15369/
Below is a summary of some of the components currently in a MIB (informational model), showing some of the functions.
MIBs are generally broken up into 'functional' areas. Below are some examples.
MIB Functions/objects:
• Resources (CPU, Memory, Storage, Adapters, Resource pools, Clusters):
- Ex: Adapters: PCI ID, I/O, memory, bus, model, status, capabilities.
• Systems (Logs, NUMA, I/O, etc.).
• Events.
• VM management.
• Obsolete/Legacy (compatibility with older versions).
• Products (supported products (hw and sw)).
• Analytics (a minimal sketch follows this list):
- Checks the incoming metrics for abnormalities in real time, updates health scores, and generates alerts
when necessary.
- Collects metrics and computes derived metrics.
- Stores the collected metrics statistics (filesystem).
- Stores all other data collected, including objects, relationships, events, dynamic thresholds and alerts.
• Multicore Processors:
- Hyperthreading.
- CPU affinities.
- Power management.
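As an informative sketch of the Analytics function above, the example below checks each incoming metric sample against a threshold, updates a simple rolling health score and raises an alert when necessary; the threshold, window size and health formula are illustrative assumptions:

# Informative sketch: threshold checking, derived metrics and a health score.
from collections import deque

history = deque(maxlen=100)                # sliding window of recent samples

def ingest(sample_pct, alert_threshold=90.0):
    """Process one CPU-utilisation sample (percent); return (health, alert)."""
    history.append(sample_pct)
    baseline = sum(history) / len(history)             # derived metric: rolling mean
    health = max(0.0, 100.0 - max(0.0, sample_pct - baseline))
    alert = sample_pct >= alert_threshold
    return health, alert

for value in (42.0, 55.0, 97.5):
    health, alert = ingest(value)
    print(f"sample={value:5.1f}%  health={health:5.1f}  alert={alert}")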
Table 2 contains some details with regard to the VIM information that is gathered. These tables are not complete.
Research is required to determine the performance characteristics and the gaps of what is provided today and what is
needed.
Table 2: Functional objects and the data exposed to the VIM
• Main System (Base Server):
- System temperature exceeded normal operating range
- Individual and overall temperature sensor health status, including temperature readings
- System temperature has returned to normal operating range
- System manufacturer, model, BIOS version and date
- Server model, serial number, product number and universal unique identifier (UUID)
- System OS name, type, version number and description
- System up time
- Monitoring subsystems such as IPMI
- Sensor types
- One entry is created for each physical component
• Computer System:
- Consolidated health status
- Monitoring information from Drivers, BIOS, OS
• Processor (CPU) Subsystem:
- Individual processor number, core and thread number, speed, physical socket location and health status
- The number of physical cores in the system
- Individual processor cache size, line size, cache level and type, read and write policy and health status
- Individual and overall processor health status
- Individual processor chip model, manufacturer, version
- Individual processor model, speed, sockets, cores, logical processors
- Processor temperature thresholds and crossings
- Processor temperature has returned to or exceeded normal operating range

Table 3: Functional objects and the data exposed to the VIM
• Power Supply Subsystem:
- Individual power supply type, physical power supply location and health status
- Individual and overall power supply health status
- Power supply redundancy set, number of power supplies, associations with individual power supply members, and redundancy status
- Individual power supply module removal conditions and package type
- Power supply temperature exceeded normal operating range
- Power supply collection health status
- Power supply temperature returned to normal operating range
• Memory Subsystem:
- System memory capacity, starting and ending address, and health status
- Memory module has failed or is predicted to fail
- Individual memory module manufacturer, part number, serial number, removal conditions, data and total width, capacity, speed, type, position, form factor, bank label, location and health status
- Memory board error
- Individual memory board package type, removal conditions, hosting board, locked state
- Memory redundancy degraded
- Number of sockets, available memory size, total memory size, location and health status
- Memory recovered from degraded redundancy
- Individual memory module slot connector layout, gender and description, location, and health status
- Amount of physical memory present on the host
- Memory redundancy set type, load balance algorithm, operating speed, available and total memory size, current, target and available configurations, and redundancy status
- Amount of physical memory allocated to the service console
- Memory collection health status
- Amount of memory available to run virtual machines and allocated to the hypervisor

Table 4: Functional objects and the data exposed to the VIM
• Power system/sub-system:
- Power usage/capacity
- Redundant Power System Units

Some of the metrics are
...
