ETSI GS NFV-EVE 007 V3.1.1 (2017-03)
GROUP SPECIFICATION
Network Functions Virtualisation (NFV) Release 3;
NFV Evolution and Ecosystem;
Hardware Interoperability Requirements Specification
Disclaimer
The present document has been produced and approved by the Network Functions Virtualisation (NFV) ETSI Industry
Specification Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.
Reference
DGS/NFV-EVE007
Keywords
interoperability, NFV, NFVI, requirements
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° 7803/88
Important notice
The present document can be downloaded from:
http://www.etsi.org/standards-search
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the
print of the Portable Document Format (PDF) version kept on a specific network drive within ETSI Secretariat.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at
https://portal.etsi.org/TB/ETSIDeliverableStatus.aspx
If you find errors in the present document, please send your comment to one of the following services:
https://portal.etsi.org/People/CommiteeSupportStaff.aspx
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying
and microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.
© European Telecommunications Standards Institute 2017.
All rights reserved.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are Trade Marks of ETSI registered for the benefit of its Members.
3GPP™ and LTE™ are Trade Marks of ETSI registered for the benefit of its Members and
of the 3GPP Organizational Partners.
GSM® and the GSM logo are Trade Marks registered and owned by the GSM Association.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definitions and abbreviations
3.1 Definitions
3.2 Abbreviations
4 NFV Hardware Ecosystem
4.1 Introduction
4.2 Data Centres
4.3 Hardware Interoperability Environment
5 Hardware Interoperability Requirements
5.1 Racks/Frames
5.1.1 Dimension Requirements
5.1.2 Radiated emission requirements
5.1.3 Deployment requirements
5.1.4 Safety requirements
5.2 Processors and Storage
5.3 Power
5.3.1 Introduction
5.3.2 General Power Requirements
5.3.2.1 Introduction
5.3.2.2 Safety
5.3.2.3 Reliability
5.3.2.4 Total Power
5.3.2.5 Energy Efficiency
5.3.2.6 Facility Feeds
5.3.3 Requirements for Power Subsystem Architecture 1 (Traditional)
5.3.3.1 Power Distribution Unit Requirements
5.3.3.2 Compute/Storage/Network Node Power Supply Requirements
5.3.4 Requirements for Power Subsystem Architecture 2 (Rack Power Shelf)
5.3.4.1 Power Distribution Unit Requirements
5.3.4.2 Power Shelf Requirements
5.3.5 Local Energy Storage
5.4 Interconnections
5.4.1 Introduction
5.4.2 Requirements for compute and infrastructure network domain
5.5 Cooling
5.5.1 Introduction
5.5.2 General Cooling Requirements
5.5.3 Rack Level Cooling
5.5.4 Compute/Storage/Network Node-Level Cooling
5.6 Hardware Platform Management
5.6.1 Introduction
5.6.2 General Platform Management Requirements
5.6.3 Interface Protocol Requirements
5.6.4 Hardware Platform Representation Requirements
5.6.5 Logging Representation Requirements
5.7 Hardware Security Measures
5.8 Radiated Emissions and Electromagnetic Compliance
5.9 Climatic and Acoustic Considerations
5.10 Timing and Synchronization Issues
5.10.1 Background
5.10.2 Requirements
5.11 Reliability Criteria
Annex A: Authors & Contributors
History
Intellectual Property Rights
IPRs essential or potentially essential to the present document may have been declared to ETSI. The information
pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found
in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in
respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web
server (https://ipr.etsi.org/).
Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee
can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web
server) which are, or may be, or may become, essential to the present document.
Foreword
This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Network Functions
Virtualisation (NFV).
Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "need not", "will", "will not", "can" and
"cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of
provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
1 Scope
The present document develops a set of normative interoperability requirements for the Network Functions
Virtualisation (NFV) hardware ecosystem and the telecommunications physical environment to support NFV deployment.
It builds on the work originating in ETSI GS NFV 003 [i.3].
The present document focusses on the development of requirements to enable interoperability of equipment in the
telecommunications environment to support NFV deployment. The following areas are examined:
• Operations
• Environmental
• Mechanical
• Cabling
• Maintenance
• Security.
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found at
http://docbox.etsi.org/Reference.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
[1] ETSI ES 205 200-2-1 (V1.2.1) (2014-03): "Access, Terminals, Transmission and Multiplexing
(ATTM); Energy management; Global KPIs; Operational infrastructures; Part 2: Specific
requirements; Sub-part 1: Data centres".
[2] IEC 60297-3-105:2008: "Mechanical structures for electronic equipment - Dimensions of
mechanical structures of the 482,6 mm (19 in) series - Part 3-105: Dimensions and design aspects
for 1U high chassis".
[3] ETSI ETS 300 119 (all parts): "Equipment Engineering (EE); European telecommunication
standard for equipment practice".
[4] ETSI GS NFV 004 (V1.1.1) (2013-10): "Network Functions Virtualisation (NFV); Virtualisation
Requirements".
[5] IEEE Std 1588™-2008: "IEEE Standard for a Precision Clock Synchronization Protocol for
Networked Measurement and Control Systems".
[6] IEEE Std 802.1AS™-2011: "Timing and Synchronization for Time-Sensitive Applications in
Bridged Local Area Networks".
[7] NEBS™ GR-63: "NEBS™ Requirements: Physical Protection".
[8] ETSI GS NFV-SEC 009 (V1.1.1): "Network Functions Virtualisation (NFV); NFV Security;
Report on use cases and technical approaches for multi-layer host administration".
[9] ETSI GS NFV-SEC 012 (V3.1.1): "Network Functions Virtualisation (NFV) Release 3; Security;
System architecture specification for execution of sensitive NFV components".
[10] DMTF Specification DSP0266 V1.0.0 (2015): "Scalable Platform Management API
Specification".
[11] DMTF Specification DSP8010 V1.0.0 (2015): "Scalable Platforms Management API Schema".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or non-
specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI GS NFV 001 (V1.1.1) (2013-10): "Network Functions Virtualisation (NFV); Use Cases".
[i.2] ETSI GS NFV 002 (V1.2.1) (2014-12): "Network Functions Virtualisation (NFV); Architectural
Framework".
[i.3] ETSI GS NFV 003 (V1.2.1) (2014-12): "Network Functions Virtualisation (NFV); Terminology
for Main Concepts in NFV".
[i.5] ETSI GS NFV-EVE 003 (V1.1.1): "Network Functions Virtualisation (NFV); Ecosystem; Report
on NFVI Node Physical Architecture Guidelines for Multi-Vendor Environment".
[i.6] ETSI EN 300 386: "Telecommunication network equipment; ElectroMagnetic Compatibility
(EMC) requirements; Harmonised Standard covering the essential requirements of the Directive
2014/30/EU".
[i.7] IEC 60950-1: "Information technology equipment - Safety - Part 1: General requirements".
[i.8] ETSI EN 300 019-1-3: "Environmental Engineering (EE); Environmental conditions and
environmental tests for telecommunications equipment; Part 1-3: Classification of environmental
conditions; Stationary use at weatherprotected locations".
[i.9] ASHRAE, 3rd edition, 2012: "Thermal Guidelines for Data Processing Environments".
[i.10] ETSI GS NFV-REL 003 (V1.1.2) (2016-07): "Network Functions Virtualisation (NFV);
Reliability; Report on Models and Features for End-to-End Reliability".
3 Definitions and abbreviations
3.1 Definitions
For the purposes of the present document, the following terms and definitions apply:
delta: three-phase power connection in which the phases are connected in a triangle configuration
wye: three-phase power connection in which the phases are connected in a star configuration, with all phases joined at
a central "Neutral" point
3.2 Abbreviations
For the purposes of the present document, the abbreviations given in ETSI GS NFV 003 [i.3] and the following apply:
ASHRAE American Society of Heating, Refrigerating, and Air-Conditioning Engineers
BBU Battery Backup Unit
BMC Baseboard Management Controller
CDC Customer Data Centre
DC Direct Current
FRU Field Replaceable Unit
GNSS Global Navigation Satellite System
GPS Global Positioning System
HTTP HyperText Transfer Protocol
HSM Hardware Security Module
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
IP Internet Protocol
JSON JavaScript Object Notation
kW kilowatt
MANO MANagement and Orchestration
MiFID Markets in Financial Instruments Directive
NIC Network Interface Controller
NID Network Interface Device
NFV Network Functions Virtualisation
NFVI Network Functions Virtualisation Infrastructure
ODC Operator Data Centre
OTT Over The Top
PoP Point of Presence
SSL Secure Sockets Layer
TCP Transmission Control Protocol
VNF Virtualised Network Function
VNFC Virtualised Network Function Component
4 NFV Hardware Ecosystem
4.1 Introduction
The present document develops a set of normative interoperability requirements for the NFV hardware ecosystem and
telecommunications physical environment to support NFV deployment. The intent is to develop a reference that enables
compatibility between hardware equipment provided by different hardware vendors and suppliers. The environment in
which NFV hardware will be deployed is taken into consideration for developing this reference. Telecommunications
Service Providers currently have the following hardware office environments:
• Central Office
• Access Node
• Transport Node
• Data Centre
• Hybrids (e.g. Central Office partitioned into or converted to a Data Centre)
It is expected that the bulk of the NFV environment will be deployed in Data Centres; this evolution supports the move
towards a Cloud-based environment. The focus of the present document is primarily on Data Centre environments;
however, it also applies to the setup of an NFV Point of Presence (PoP) in any of the above environments.
4.2 Data Centres
A Data Centre [1] is defined as follows:
"structure, or group of structures, dedicated to the centralized accommodation, interconnection and operation of
information technology and network telecommunications equipment providing data storage, processing and transport
services together with all the facilities and infrastructures for power distribution and environmental control together
with the necessary levels of resilience and security required to provide the desired service availability."
According to ETSI ES 205 200-2-1 [1], data centres are divided into Operator Data Centres (ODCs) and Customer Data
Centres (CDCs). An ODC is a facility embedded within the core network and is therefore considered a service
provider's asset. A CDC, on the other hand, is a facility that is not directly connected to the core network but is
reachable through access networks; examples include a large branch office, a corporate headquarters, or a data centre
operated by an Over-The-Top (OTT) provider. A Data Centre is also considered a place where Network Functions
Virtualisation Infrastructure (NFVI) nodes would be installed [i.1] and [i.2].
Figure 1 shows how a central office, an Operator Data Centre, and a Customer Data Centre are mapped onto use case #2
of ETSI GS NFV 001 [i.1].
Figure 1: NFVI Node Profiles
4.3 Hardware Interoperability Environment
The main features of the NFV hardware ecosystem are depicted in figure 2.
[Figure 2, not reproduced here, shows the facility infrastructure (cableways, computer room air conditioner units,
power plant and power distribution, data centre infrastructure management software, environmental systems) enclosing
NFVI Node racks whose rack infrastructure (rack mechanics, rack power conversion and distribution, rack-level cooling
if it exists, rack-level security) hosts compute/storage/networking nodes and network nodes.]
Figure 2: Facility Infrastructure Overview
The main components of this ecosystem view are described as follows:
• Facility Infrastructure - the infrastructure (hardware/software/environmental/mechanical) provided by the PoP
site. All NFVI equipment within the PoP is subject to interfacing within the constraints of the facility
infrastructure. Elements of the facility infrastructure may include cableways, computer room air
conditioning units, power plant and power distribution, infrastructure management software and environmental
conditions.
• NFVI Node Rack Infrastructure - the rack and associated rack infrastructure that provide power, cooling,
physical security and rack-level hardware management to the compute, storage and networking nodes. Typical
elements of the NFVI node rack infrastructure may include rack mechanics, power conversion and power
distribution, cooling, and rack-level physical security measures (door locks, etc.). Multiple rack infrastructures
may exist within the NFVI PoP - one for each equipment rack deployed.
• Compute/Storage Nodes - the NFV nodes that provide compute and storage functions to the NFVI. Multiple
compute or storage nodes may exist per rack. It is assumed that these nodes draw power from the rack
infrastructure and operate within the environmental and mechanical constraints of the rack infrastructure.
• Network Nodes - the NFV nodes that provide networking functions to the NFVI. Multiple networking nodes
may exist per rack. Like compute and storage nodes, these nodes also draw power from the rack infrastructure
and reside within the environmental and mechanical constraints imposed by the rack. In addition, these nodes
provide network connectivity between network/compute/storage nodes both within and external to the rack.
The dashed line in figure 3 shows a proposed boundary for the scope of this document. The areas of interoperability that
cross the boundary are considered for specification. Additionally, functions within the boundary may also need to be
specified in order to provide interoperability. The four main areas of interoperability (numbered in figure 3) are:
1) Facility to Rack: This defines the interaction between the data centre facility and each individual NFVI rack.
Expected components of this interface include: facility power feeds, facility climate, facility size and weight
restrictions, radiated emissions and susceptibility, acoustics, physical security, and facility infrastructure
management.
2) Rack to Compute/Storage Node: This defines the interaction between the compute and storage nodes and the
rack infrastructure provided by the NFVI rack. Expected elements of this interface include: rack power,
rack-level forced air cooling (if it exists), mechanical dimensions for rack "slots", and weight per slot.
3) Compute/Storage Node to Network Node: This defines the communications path between the various nodes
within the NFVI rack. Expected elements of this interface include physical layer protocol, bandwidth,
connector, and cabling types.
4) Rack to Rack Network: This defines the communication path between network nodes within one rack and
network nodes within another rack in the NFVI PoP. Expected elements of this interface include physical
layer protocol, bandwidth, connector, and cabling types.
[Figure 3, not reproduced here, repeats the facility infrastructure view of figure 2 with a dashed boundary marking
the scope of the present document and the four numbered areas of interoperability listed above.]
Figure 3: Scope of the present document
Based on the above discussion, the areas of interoperability for which requirements are developed are summarized in
table 1. This table also lists applicable factors that need to be considered for the development of interoperability
requirements.
Table 1: Areas of Interoperability and Applicable Factors

Areas of Interoperability                          Applicable Factors
Racks/Frames                                       Physical Dimensions; Deployment Considerations; Safety Considerations
Processors and Storage                             -
Power                                              Distribution and Conversion; Subsystem Architectures; Local Energy Storage
Interconnections                                   Compute and Infrastructure Domain Interconnections
Cooling                                            General Cooling; Rack Level Cooling; Compute/Storage/Network Node Level Cooling
Hardware Platform Management                       -
Hardware Security Measures                         -
Radiated Emissions and Electromagnetic Compliance  -
Climatic and Acoustic Considerations               -
Timing and Synchronization                         -
Reliability                                        -
Lawful Intercept                                   -
5 Hardware Interoperability Requirements
5.1 Racks/Frames
5.1.1 Dimension Requirements
Racks/frames can be deployed at different sites (e.g. central office, data centre) and in different regions, and their
design varies with the adopted specifications. To ease interoperability with legacy equipment and to reduce the need
to rebuild deployment sites, the racks/frames used for NFVI Node deployment shall comply with the relevant regional
and global standards.
Requirements:
REQ 5.1.1.1: The dimensions of racks/frames shall comply with one of the following NFVI Node profiles:
• Profile 1: IEC 60297 specifications [2]
• Profile 2: ETSI ETS 300 119 specifications [3]
NOTE 1: Profile 1 is recommended for deployment in legacy central office or data centre environments. Profile 2 is
recommended for deployment in legacy access & transport scenarios.
NOTE 2: Dimension requirement samples of the referenced specifications are listed in table 2.
Table 2: Dimension requirement samples of the referenced specifications
Site Type Height Width Depth Ref standards
central office 2 200 mm 600 mm 800 mm IEC 60297 [2]
data centre 2 000 mm 600 mm 1 200 mm IEC 60297 [2]
access & transport nodes 2 200 mm 600 mm 300 mm or 600 mm ETSI ETS 300 119 [3]
NOTE 3: Due to cabling, in particular the number and bend radius of optical cables per rack, racks with greater width
may be advisable; this depends on the type of NFVI Node.
5.1.2 Radiated emission requirements
Requirements:
REQ 5.1.2.1: The racks/frames shall comply with the regional radiated emissions specifications for the types of
equipment hosted.
EXAMPLE: ETSI EN 300 386 [i.6] is an EMC requirement specification for racks/frames in Europe.
5.1.3 Deployment requirements
The racks/frames host the NFVI Node equipment (categorized into compute, storage, network and gateway nodes) to
support the execution of VNFs.
Requirements:
REQ 5.1.3.1: The racks/frames shall support a variety of NFVI Node scales.
REQ 5.1.3.2: The racks/frames shall support a mapping of an arbitrary number of NFVI Nodes onto one or more
physical racks of equipment (e.g. N:M mapping).
REQ 5.1.3.3: The racks/frames shall support geographical addressing in order to facilitate ease of maintenance and
possible optimizations based on locality of compute/storage/networking resources.
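NOTE: The following Python sketch is an informative illustration of one possible geographical addressing scheme
supporting REQ 5.1.3.3; all field names and the address format are hypothetical and are not mandated by the present
document.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RackLocation:
        """Hypothetical geographical address of a slot within an NFVI PoP."""
        site: str  # PoP or data centre identifier
        room: str  # equipment room
        row: str   # rack row
        rack: str  # rack/frame position within the row
        slot: int  # slot index within the rack (e.g. rack unit)

        def address(self) -> str:
            # Render a single sortable string, e.g. "ODC1/R2/A/07/12", so that
            # maintenance staff can locate equipment and schedulers can reason
            # about locality of compute/storage/networking resources.
            return f"{self.site}/{self.room}/{self.row}/{self.rack}/{self.slot:02d}"

    def same_rack(a: RackLocation, b: RackLocation) -> bool:
        # Locality test usable for placement optimizations.
        return (a.site, a.room, a.row, a.rack) == (b.site, b.room, b.row, b.rack)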
5.1.4 Safety requirements
Requirements:
REQ 5.1.4.1: The racks/frames shall comply with regional safety requirements for the types of equipment hosted.
EXAMPLE: IEC 60950-1 [i.7] is an international safety specification defined by the International
Electrotechnical Commission (IEC).
5.2 Processors and Storage
Processors and storage are essential components of NFVI nodes. Both play important roles in data processing and have
a large influence on NFVI node performance. To benefit from ongoing technology development, NFVI nodes are expected
to support the latest processor and storage products for service expansion or upgrade.
The key criteria for processors are performance and power consumption. Processor performance is typically measured by
core count, clock rate, and instructions per clock, and specific capabilities can improve it further. ETSI
GS NFV-EVE 003 [i.5] has studied some of these processor capabilities and lists recommendations. Besides performance,
power consumption is also an essential criterion for processors. While current Central Processing Units (CPUs) are
capable of handling most workloads, in some cases they are less power-efficient than Graphics Processing Units (GPUs)
or purpose-designed accelerators. It is expected that different forms of processors will be used for different types
of services to improve power efficiency.
The key criteria for storage are performance (I/O throughput), capacity, and scalability. Performance and capacity are
the key factors to consider when building the storage system. Some capabilities can improve storage system
performance; for example, ETSI GS NFV-EVE 003 [i.5] has studied the advantages offered by registers, cache and
memory. As data volumes grow rapidly, the storage system can be expanded; storage systems can be scaled and/or
maintained as appropriate.
5.3 Power
5.3.1 Introduction
This clause specifies requirements for the rack power architecture of the NFVI node. Two power architectures are
specified. The first consists of a rack-level power entry module that distributes facility power to each of the
compute/storage/network nodes within the NFVI equipment rack, where individual power supplies convert the power to
the appropriate DC formats used by the equipment. This architecture is consistent with traditional
telecommunications and enterprise Data Centre deployments. The second architecture incorporates a rack-level power
shelf for the primary conversion stage. This architecture might provide some benefits in system cost due to resource
pooling and represents an emerging architecture used by hyperscale cloud providers [i.8]. The two architectures are
shown in figure 4.
Figure 4: Two NFVI Node Rack-Level Power Architectures
5.3.2 General Power Requirements
5.3.2.1 Introduction
The requirements in this clause apply to all implementations of NFVI node rack power subsystems, regardless of the
particular rack power architecture.
5.3.2.2 Safety
REQ 5.3.2.2.1: The rack power subsystem should meet all regional/national safety requirements for the region in which
it is deployed.
5.3.2.3 Reliability
REQ 5.3.2.3.1: The rack power subsystem should be designed with no single points of failure in order to avoid possible
loss of availability of the NFVI node function.
REQ 5.3.2.3.2: Power failures should not impact more than a single compute/storage/networking node within the NFVI
rack.
REQ 5.3.2.3.3: The rack power subsystem should provide a means to report power system fault events to the hardware
platform management entity and/or Data Centre infrastructure management software.
5.3.2.4 Total Power
REQ 5.3.2.4.1: The rack power subsystem should be able to support loads of 10 kW per full size rack.
NOTE: This is a minimum value based on typical rack deployments at the time of writing of the present
document. Rack deployments capable of 40 kW or more are known to exist.
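EXAMPLE: As an informative worked calculation, at the nominal -48 VDC facility feed of REQ 5.3.2.6.1 a fully loaded
10 kW rack draws approximately I = P / V = 10 000 W / 48 V ≈ 208 A, and a 40 kW rack approximately 833 A; input
feeds and over-current protection need to be dimensioned accordingly.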
5.3.2.5 Energy Efficiency
REQ 5.3.2.5.1: The rack power subsystem should support a variety of energy efficiency levels in order to meet the
particular deployment needs.
NOTE: Higher energy efficiency is preferred; however, the highest efficiency levels might also require more
complex designs and specialized components. The economic benefits of a higher-efficiency power
subsystem need to be evaluated within the context of NFV PoP operational optimization. Having a
variety of energy efficiency options at the rack level will facilitate this optimization.
5.3.2.6 Facility Feeds
REQ 5.3.2.6.1: A configuration of the rack power subsystem shall support -48 VDC input facility feeds in order to
provide interoperability with existing central office infrastructures.
REQ 5.3.2.6.2: Configurations of the rack power subsystem should support the three-phase formats listed in table 3.
Table 3: Three-phase facility feed formats
Connection  Voltage  Frequency
WYE         208 V    50 Hz to 60 Hz
WYE         480 V    50 Hz to 60 Hz
WYE         400 V    50 Hz to 60 Hz
DELTA       208 V    50 Hz to 60 Hz
DELTA       240 V    50 Hz to 60 Hz
DELTA       230 V    50 Hz to 60 Hz
REQ 5.3.2.6.3: Configurations for single phase facility feeds may be supported.
REQ 5.3.2.6.4: Configurations for high voltage DC facility feeds may be supported.
REQ 5.3.2.6.5: Configurations of the rack power subsystem shall support redundant input power feeds.
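EXAMPLE: As an informative worked calculation, and assuming the voltages in table 3 are line-to-line values as is
conventional, the wye relation illustrated in clause 3.1 gives 400 V / √3 ≈ 230 V and 208 V / √3 ≈ 120 V
line-to-neutral for the 400 V and 208 V wye formats, matching common single-phase supply voltages in Europe and
North America respectively.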
5.3.3 Requirements for Power Subsystem Architecture 1 (Traditional)
5.3.3.1 Power Distribution Unit Requirements
REQ 5.3.3.1.1: The power distribution unit shall accept power from one or more facility feeds.
REQ 5.3.3.1.2: The power distribution unit shall provide power from the facility feeds to the compute/storage/network
nodes without regulation.
REQ 5.3.3.1.3: For three phase power inputs, the power distribution unit shall allow load balancing of power drawn
from each phase of the input power.
NOTE: This can be facilitated by providing multiple output sockets distributed evenly among the phases.
REQ 5.3.3.1.4: To facilitate high-power racks, multiple power distribution units may be used.
REQ 5.3.3.1.5: Power distribution unit outputs should provide over-current protection by circuit breakers or similar
method.
REQ 5.3.3.1.6: Power distribution units should be capable of measuring and remotely reporting the following
operational parameters:
• Input Current
• Input Voltage
• Output Current (each output)
• Circuit Breaker Condition (tripped, not tripped)
REQ 5.3.3.1.7: Power distribution units should support generation of alarm events with programmable limits for each of
the following conditions:
• Input Over Current
• Input Over/Under Voltage
• Output Over Current (each output)
• Circuit Breaker Trip
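NOTE: The following Python sketch is an informative illustration of how the measured parameters of REQ 5.3.3.1.6
could be evaluated against the programmable alarm limits of REQ 5.3.3.1.7; all names, types and threshold values are
hypothetical and no particular management interface is implied.

    from dataclasses import dataclass

    @dataclass
    class PduAlarmLimits:
        """Hypothetical programmable limits (see REQ 5.3.3.1.7)."""
        input_current_max_a: float = 32.0
        input_voltage_min_v: float = 200.0
        input_voltage_max_v: float = 250.0
        output_current_max_a: float = 16.0

    @dataclass
    class PduReading:
        """Hypothetical snapshot of the parameters of REQ 5.3.3.1.6."""
        input_current_a: float
        input_voltage_v: float
        output_currents_a: dict[str, float]  # one entry per output socket
        breaker_tripped: dict[str, bool]     # circuit breaker condition per output

    def alarm_events(reading: PduReading, limits: PduAlarmLimits) -> list[str]:
        """Return the list of alarm events raised by one reading."""
        events: list[str] = []
        if reading.input_current_a > limits.input_current_max_a:
            events.append("Input Over Current")
        if not limits.input_voltage_min_v <= reading.input_voltage_v <= limits.input_voltage_max_v:
            events.append("Input Over/Under Voltage")
        for output, amps in reading.output_currents_a.items():
            if amps > limits.output_current_max_a:
                events.append(f"Output Over Current ({output})")
        for output, tripped in reading.breaker_tripped.items():
            if tripped:
                events.append(f"Circuit Breaker Trip ({output})")
        return events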
REQ 5.3.3.1.8: Power distribution units should allow individual outputs to be powered up/down under software control.
REQ 5.3.3.1.9: Power distribution units should support programmable power sequencing of the outputs / soft start to
prevent large current spikes when rack power is applied.
REQ 5.3.3.1.10: The power distribution unit should be capable of providing the following information to the rack
management or Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
5.3.3.2 Compute/Storage/Network Node Power Supply Requirements
The function of the compute/storage/network node power supply is to convert power provided by the power distribution
unit into a format that can be used by the node. The primary purpose of the power supply is to rectify and regulate the
supplied power.
REQ 5.3.3.2.1: Compute/Storage/Network Nodes should accept one or more power supply units.
REQ 5.3.3.2.2: Compute/Storage/Network node power supply units should accept power from one and only one power
distribution unit feed.
REQ 5.3.3.2.3: Power supply redundancy may be achieved by installing more than one power supply unit within a
Compute/Storage/Network node.
REQ 5.3.3.2.4: Each power supply unit should be capable of measuring and remotely reporting the following
operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
REQ 5.3.3.2.5: Each power supply unit should support generation of alarm events with programmable limits for each of
the following conditions:
• Output Over Current for Each Output Voltage
• Output Over /Under Voltage for Each Output Voltage
• Output Loss of Regulation
REQ 5.3.3.2.6: Each power supply unit should be capable of providing the following information to the rack
management or Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
5.3.4 Requirements for Power Subsystem Architecture 2 (Rack Power Shelf)
5.3.4.1 Power Distribution Unit Requirements
REQ 5.3.4.1.1: The power distribution unit shall accept power from one or more facility feeds.
REQ 5.3.4.1.2: The power distribution unit shall provide power from the facility feeds to the rack-level power
shelf/shelves without regulation.
REQ 5.3.4.1.3: The power distribution unit should be capable of measuring and remotely reporting the following operational
parameters:
• Input Current
• Input Voltage
• Output Current (each output)
REQ 5.3.4.1.4: The power distribution unit should support generation of alarm events with programmable limits for
each of the following conditions:
• Input Over Current
• Input Over/Under Voltage
• Output Over Current (each output)
REQ 5.3.4.1.5: The power distribution unit should be capable of providing the following information to the rack
management or Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
5.3.4.2 Power Shelf Requirements
The function of the power shelf is to convert power provided by the power distribution unit into a format that can be
used by the compute/storage/network nodes. The primary purpose of the power supply is to rectify and regulate the
supplied power.
REQ 5.3.4.2.1: The power shelf shall accept one or more power supply units for both capacity and redundancy
purposes.
REQ 5.3.4.2.2: The power shelf may accept power from more than one power distribution unit feed.
REQ 5.3.4.2.3: More than one power shelf may be installed in an NFVI rack for capacity or redundancy purposes.
REQ 5.3.4.2.4: Output power shall be delivered from the power shelf to compute/storage/network nodes via bus bar or
bussed cables.
REQ 5.3.4.2.5: The power delivered from the power shelf shall be protected from over current by circuit breaker or
other similar safety device.
REQ 5.3.4.2.6: The power shelf should be capable of measuring and remotely reporting the following operational
parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
• Number of Power Supplies Installed in the Shelf
• Maximum Capacity of the Shelf Based on Installed Supplies
REQ 5.3.4.2.7: Each power supply unit should be capable of measuring and remotely reporting the following
operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
REQ 5.3.4.2.8: The power shelf should support generation of alarm events with programmable limits for each of the
following conditions:
• Output Over Current for Each Output Voltage
• Output Over/Under Voltage for Each Output Voltage
• Output Loss of Regulation
REQ 5.3.4.2.9: The power shelf should be capable of providing the following information to the rack management or
Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
REQ 5.3.4.2.10: The power supplies should be capable of providing the following information to the rack management
or Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
5.3.5 Local Energy Storage
Local energy storage is the practice of using batteries within the equipment rack in order to replace centralized UPS
systems. To do so, a Battery Backup Unit (BBU) is placed either in the individual compute/storage/network nodes or
within the rack power shelf. For some system deployments there could be economic, reliability, or safety advantages to
local energy storage.
Localized energy storage within NFVI nodes is not required; however, if it is present, the following requirements apply:
REQ 5.3.5.1: Localized energy storage should be capable of providing power from the time that facility power is lost
until generator (or other backup) power has stabilized.
NOTE: 15 to 90 seconds is a typical time for alternate power to stabilize.
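EXAMPLE: As an informative worked calculation, holding up a fully loaded 10 kW rack (REQ 5.3.2.4.1) for the 90 second
upper bound implies a usable stored energy of at least E = P × t = 10 000 W × 90 s = 900 kJ (250 Wh), before
conversion losses and battery ageing margins are taken into account.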
REQ 5.3.5.2: In racks with power shelves, BBU units should be placed within the power shelf po
...