ETSI GR NFV-TST 011 V1.1.1 (2019-03)
Network Functions Virtualisation (NFV); Testing; Test Domain and Description Language Recommendations
GROUP REPORT
Network Functions Virtualisation (NFV);
Testing;
Test Domain and Description Language Recommendations
Disclaimer
The present document has been produced and approved by the Network Functions Virtualisation (NFV) ETSI Industry
Specification Group (ISG) and represents the views of those members who participated in this ISG.
It does not necessarily represent the views of the entire ETSI membership.
Reference
DGR/NFV-TST011
Keywords
language, NFV, testing
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° 7803/88
Important notice
The present document can be downloaded from:
http://www.etsi.org/standards-search
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format at www.etsi.org/deliver.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at
https://portal.etsi.org/TB/ETSIDeliverableStatus.aspx
If you find errors in the present document, please send your comment to one of the following services:
https://portal.etsi.org/People/CommiteeSupportStaff.aspx
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying
and microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.
© ETSI 2019.
All rights reserved.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its Members.
3GPP™ and LTE™ are trademarks of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of the oneM2M Partners.
GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
Executive summary
Introduction
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definition of terms, symbols and abbreviations
3.1 Terms
3.2 Symbols
3.3 Abbreviations
4 Test Domain
4.1 Overview
4.2 Test Case Resources
4.3 Test Execution Flow
4.4 High-Level Functions
4.5 Test Case Data
4.6 Execution Segments
4.7 Test Environment
4.8 Test Suites & Traffic Mixes
5 Reuse Guidelines
5.1 Overview
5.2 Environment Decoupling
5.3 Resource API Decoupling
5.4 High-level Function Decoupling
5.5 Test Data Decoupling
6 Recommended Models
6.1 Overview
6.2 Test Case Model
6.3 Test Environment Model
6.4 Test Scenarios
6.5 Full Domain Model
7 Test DSL
7.1 Overview
7.2 Test DSL Concepts
7.3 Abstract Syntax meta-model
7.4 Dynamically Loaded Constraints
7.5 Test Case Header
7.5.0 Introduction
7.5.1 Test Case Identifier
7.5.2 Test Case Description
7.5.3 Custom Test Case Attributes
7.5.4 High-Level Functions
7.5.5 Test Case Data
7.6 Resource Declaration
7.7 Execution Flow
7.7.0 Introduction
7.7.1 Getters for Resource-Specific Data
7.7.2 Symbol Lookup
7.7.3 High-level Function Invocation
Annex A: JADL Example
Annex B: Authors & contributors
History
Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The information
pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found
in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in
respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web
server (https://ipr.etsi.org/).
Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee
can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web
server) which are, or may be, or may become, essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
Foreword
This Group Report (GR) has been produced by ETSI Industry Specification Group (ISG) Network Functions
Virtualisation (NFV).
Modal verbs terminology
In the present document "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be
interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Executive summary
The present document proposes a model of the NFV test domain and recommends requirements for a test Domain
Specific Language (DSL) to manipulate it. In the context of NFV, a network service is supplied by multiple vendors and
each vendor has its own test technology, interfaces into the system under test, and test languages (usually GPLs like Java®, Ruby®, Python®, etc.). In order to create a common test language, the test cases follow a standardized test case
model that the language can manipulate, and that can be implemented within individual test technologies. The model
includes shareable and reusable artefacts tied to the test domain: execution flow, data, abstract resources, environment,
etc.
Integration of multiple test technologies is only possible with a system that can accept contributions of test resources from
multiple parties. These contributions may include lab resources, test APIs, test data, high-level function libraries, test
execution platforms, etc. The test environment is then constructed dynamically from these contributions. To allow the
dynamic nature of the test environment, it is necessary for the test case to be decoupled from specific resource
contributions and express the test process in terms of resource abstractions. Mapping these abstractions to concrete
resources is the job of a Dynamic Resource Management (DRM) system. This is done by creating an environment
resource meta-model available to the test case developers at design time. The meta-model is then used for creation of
specific environment instance models at runtime. Each environment instance includes dynamic resource contributions
to which resource abstractions are mapped.
Introduction
With the advent of NFV, the industry is experiencing the following transformative challenges:
• Multiple contributions to a network service
• Open collaboration
• Shift away from dedicated resources (sharing of resources)
• Shift of integration responsibility from vendors to service providers (or their agents)
• Test cases/plans can be repeated multiple times, making reuse critical
To address these challenges and to encourage collaboration, the present document provides recommendations for NFV
test domain modelling and a Test DSL that does not force vendors/participants to change their test technology/language
and enables efficient utilization of resources. To enable reuse, the model also decouples test data and test environment
from the test case and uses dynamic allocation of test resources.
1 Scope
The present document proposes a model of the NFV test domain and recommends requirements for a test Domain
Specific Language (DSL) to manipulate it. The description includes an NFV test automation ecosystem that facilitates
interaction among NFV suppliers and operators, based on the DevOps principles.
The NFV test domain contains:
• System Under Test (SUT): Network Functions (NF), Network Functions Virtualisation Infrastructure (NFVI)
and network services.
• Test Resources: tools or instrumented NFs and NFVI elements that test cases can interface with to manipulate the SUT.
• Test Execution Flow: controlled and uncontrolled state transitions.
• Test case configuration data and parameters: test-resource-specific and non-test-resource-specific.
The present document explores the following attributes to enable efficient multi-supplier NFV interaction:
• Reusability of test plans, test cases and test resources.
• Abstraction of test data.
• Decoupling of test case from the test environment.
• Use of test resource abstractions in place of concrete resources.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI GS NFV-TST 001 (V1.1.1): "Network Functions Virtualisation (NFV); Pre-deployment
Testing; Report on Validation of NFV Environments and Services".
[i.2] ETSI GS NFV-MAN 001 (V1.1.1): "Network Functions Virtualisation (NFV); Management and
Orchestration".
[i.3] ETSI GS NFV-SOL 003 (V2.3.1): "Network Functions Virtualisation (NFV) Release 2; Protocols
and Data Models; RESTful protocols specification for the Or-Vnfm Reference Point".
[i.4] ETSI GS NFV-IFA 006: "Network Functions Virtualisation (NFV) Release 2; Management and
Orchestration; Vi-Vnfm reference point - Interface and Information Model Specification".
[i.5] ETSI GS NFV 003: "Network Functions Virtualisation (NFV); Terminology for Main Concepts in
NFV".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in ETSI GS NFV 003 [i.5] and the following apply:
execution engine: means by which a program can be executed on the target platform, using either of the two approaches of interpretation and compilation
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in ETSI GS NFV 003 [i.5] and the following apply:
DSL Domain Specific Language
GPL General Purpose Language
HLF High-Level Function
TEP Test Execution Platform
4 Test Domain
4.1 Overview
The NFV test domain is a set of artefacts and systems for testing NFV-based solutions. NFV introduced the concept of
"dynamically configurable and fully automated cloud environments" (https://www.etsi.org/technologies/nfv) for
network functions. The present document models the NFV test domain as a set of abstractions so that the same level of
flexibility is available in testing those network functions. Figure 1 is included to illustrate relationships among artefacts
discussed in clause 4; the NFV Test Domain model is described in more detail in clause 6. In addition, the present
document proposes requirements for a Domain-specific language (DSL) to manipulate that test domain. Using these
models and recommended requirements, suppliers and service providers are able to leverage different test technologies,
dynamically allocate test resources, and reuse test plans, test cases, and Test Execution Platforms (TEPs). In the present
document, a test case always refers to a computer program that can be executed by a test automation system.
Figure 1: NFV Test Domain Artefact Relationships
The NFV test domain comprises:
• System Under Test (SUT) [i.1]: Network Functions (NF), Network Functions Virtualisation
Infrastructure (NFVI), and network services
• Test Case Resources: tools or instrumented NFs and NFVI elements that test cases can interface with to manipulate the SUT
• Execution Flow: controlled and uncontrolled state transitions
• High-Level Functions
• Test case configuration data and parameters: test-resource-specific and non-test-resource-specific
• Execution Segments
• Test Environment
• Test Suites & Traffic Mixes
4.2 Test Case Resources
In order to be tested, the SUT exposes a set of interfaces over which test interactions happen. These interfaces vary in
the degree of complexity and may include entire protocol stacks. The test drivers communicate with the SUT by
sending and receiving encoded messages over one or more of these interfaces. This means that the implementation of
the interface is also present on the test side.
It is therefore necessary for the executing test case to create or otherwise acquire one or more objects that implement the
SUT interfaces and use them to send and receive messages to and from the SUT. These objects in turn expose their own
interfaces to the test case to allow the test to manipulate the message flow more easily. These objects will be referred to
as abstract resources. In essence an abstract resource is an instance of an SUT interface that provides its own API to the
test.
Abstract resources are test case-facing abstractions that utilize units of lab equipment, software, and/or data to do real
work. These units will be referred to as concrete resources. In the context of a shared lab, concrete resources are shared
among multiple users and are allocated to specific test cases for the duration of the test. This guarantees that the test
case gets exclusive access to concrete resources required for its execution. Some examples of concrete resources include
instances of instrumented or stubbed out MANO components, VNFs, VNFCs, etc. Availability of concrete resources is
generally limited and concurrently executing test cases contend to gain access to them.
The relationship between abstract and concrete resources is typically many-to-many, and the mapping between them is
described in clause 6.3. The test DSL is expected to enforce proper resource declaration, preclude any access to
concrete resources outside the resource management system, and provide the user with an intuitive way to declare and
manipulate abstract resources. Once an abstract resource is mapped to an allocated concrete resource, they form a single entity used by the test case to interact with the SUT. This entity will be called a test case resource, as illustrated in Figure 2.
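As a non-normative illustration (the DSL itself is discussed in clause 7 and Annex A), the Python-style sketch below shows how a test case might declare abstract resources against their APIs. The names NfvoResource, VimResource, send(), expect() and the env.declare() facade are assumptions for illustration only:

# Minimal sketch, not normative: abstract resource APIs and their
# declaration in a test case. All names are illustrative assumptions.
from abc import ABC, abstractmethod

class NfvoResource(ABC):
    """Hypothetical abstract API of an NFVO test resource."""
    @abstractmethod
    def send(self, message: str, **ies) -> None: ...
    @abstractmethod
    def expect(self, message: str, timeout_s: float = 30.0) -> dict: ...

class VimResource(ABC):
    """Hypothetical abstract API of a VIM test resource."""
    @abstractmethod
    def expect(self, message: str, timeout_s: float = 30.0) -> dict: ...

def declare_resources(env):
    # The test case names only abstractions; the resource management system
    # binds each declaration to an allocated concrete resource at run time.
    return {"nfvo": env.declare("nfvo", NfvoResource),
            "vim": env.declare("vim", VimResource)}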
4.3 Test Execution Flow
As the test case executes, every resource goes through a sequence of states, reflecting the SUT functionality being
tested. These sequences range in complexity from trivial to very complex. The test case may be interested in some but
not all of these states. For example, if a resource is running a protocol stack, unless protocol conformance is being
tested, most low-level messaging is of no interest to the test case. It may only be interested in the successful or
unsuccessful outcome of such messaging. The degree to which the test case has visibility of the resource state can vary
for the same type of resource depending on the test scenario. Consequently, it is necessary for a mechanism for different
levels of control over the resource state to be present. The test case should have the ability to "take over" the resource
state transition when necessary and let the resource run its own state machine at other times. The resource states
controlled by the test case are referred to as controlled states.
Controlled states of all test case resources at any given time form the state of the test case. The test case initiates state transitions by sending or receiving and verifying messages to and from the SUT on any of the test case resources. For example, if the SUT is a VNFM implementation and an Instantiate VNF request is sent from the NFVO resource, the test case can verify that a Grant VNF Lifecycle Operation request is received by the NFVO resource and, after acknowledging it with a Grant VNF Lifecycle Operation response, verify that the Allocate Resource request is received by the VIM
resource. Running a single state machine per test case for controlled states while letting individual resources run their
own state machines for uncontrolled states provides an intuitive and flexible framework for SUT interface interworking.
Figure 2: Test Case Resources and State Machine
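As a non-normative illustration, the VNFM example above might read as follows when expressed against the hypothetical send()/expect() API sketched in clause 4.2; the message names approximate the operations named in the text and are not tied to any specified protocol encoding:

# Minimal sketch, not normative: the controlled-state flow of the VNFM
# example. Each expect() call realizes a controlled state at which the
# failure criteria are evaluated (see below).
def test_vnf_instantiation(nfvo, vim):
    nfvo.send("InstantiateVnfRequest", vnfd_id="vnfd-001")
    grant = nfvo.expect("GrantVnfLifecycleOperationRequest")   # controlled state
    nfvo.send("GrantVnfLifecycleOperationResponse", grant_id=grant["id"])
    vim.expect("AllocateResourceRequest")                      # final controlled state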
Any controlled state can be passing or failing. Criteria for failing the test are evaluated by the test case at every
controlled state. If the failure criteria are satisfied, the test fails immediately. Otherwise it continues until it reaches a final state, at which point it passes. If a test case does not reach a final state, it gets killed by the TEP once sufficient time has passed to conclude that the test behaviour is abnormal. Regardless of the type of test case termination (passed, failed, crashed, or hung and killed by the TEP), the allocated concrete resources are returned to their initial state and released.
Since no assumption can be made about normal vs. abnormal test case termination, the responsibility of resource
clean-up and release cannot be assigned to the test case and should lie with individual resource implementations.
4.4 High-Level Functions
A test scenario may also require a series of controlled state transitions involving multiple resources whose details are
outside the main focus of the test. Continuing with the previous example, if the main focus of the test is verifying that the VNFM configures the newly created VNF with deployment-specific parameters, the details of the VNF instantiation
are not important to the test scenario and can be grouped into a single action. These actions will be referred to as
high-level functions. They capture frequently repeated state transition sequences and encapsulate them into invokable,
parameterizable units of functionality. High-level functions can also call other high-level functions. This provides a
flexible mechanism for control of granularity for different parts of the execution flow. High-level functions are reusable
across multiple test cases and activities.
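As a non-normative illustration, a high-level function wrapping the instantiation sequence of clause 4.3 might look like the following sketch; all function and message names are assumptions:

# Minimal sketch, not normative: a high-level function (HLF) encapsulates
# the controlled-state sequence of clause 4.3 into one parameterizable,
# reusable action; HLFs may in turn call other HLFs.
def instantiate_vnf(nfvo, vim, vnfd_id):
    """HLF grouping the VNF instantiation sequence into a single action."""
    nfvo.send("InstantiateVnfRequest", vnfd_id=vnfd_id)
    grant = nfvo.expect("GrantVnfLifecycleOperationRequest")
    nfvo.send("GrantVnfLifecycleOperationResponse", grant_id=grant["id"])
    return vim.expect("AllocateResourceRequest")

def instantiate_and_configure_vnf(nfvo, vim, vnfm, vnfd_id, params):
    """HLFs calling HLFs: instantiation details stay encapsulated so the
    caller can focus on the configuration behaviour under test."""
    instantiate_vnf(nfvo, vim, vnfd_id)   # nested HLF invocation
    vnfm.configure(params)                # illustrative API call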
4.5 Test Case Data
Many test cases utilize data that are not explicitly defined in the body of the test case. Separation of the execution flow
and the data that can parameterize the execution flow allows greater flexibility of test case design and reuse of the same
data sets by multiple tests.
Some data are resource-specific and come from test case resources. Since the test case only manipulates abstract
resources and has no prior knowledge of what concrete resource may be allocated to it, any attributes of the concrete
resources used in the test case are acquired at run time. For example, in order to verify that the VNFM under test sends
the correct VNF identifier in the alarm notification to the NFVO, it is necessary for the test case to know the VNF
identifier of the VNF generating the alarm. Until a specific VNF test resource instance is allocated to the test case, its
identifier is not known. The test case should therefore be able to obtain the identifier from the VNF test resource at
runtime to verify that the correct one is sent to the NFVO test resource.
Some data are not resource-specific and come from elsewhere. For example, default values for a set of protocol
messages can be specified outside the test case. This allows the test case writer to only provide non-default values for
message Information Elements (IEs). This also allows reuse of this data across a potentially very large number of test
cases using the same protocol.
Test case data should be easily customizable. In the protocol defaults example there may be global defaults, defaults
that a particular user team is using for their purposes, or even defaults for a particular test activity that further specialize
test case writers' defaults. This can be accomplished by creating progressively specialized test case data hierarchies with
every new child defining a nested scope within its parent's scope. Data lookup then follows the regular scoping rules.
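As a non-normative illustration, the sketch below shows both kinds of test case data; the getter name get_vnf_instance_id and the scope layout are assumptions, not a defined API:

# Minimal sketch, not normative. Non-resource-specific data forms nested
# scopes (activity overrides team, team overrides global), mirroring regular
# scoping rules; resource-specific data is read from the allocated resource
# at run time via a hypothetical getter.
from collections import ChainMap

global_defaults = {"sip.timer_t1_ms": 500}
team_defaults = {"sip.timer_t1_ms": 200}   # specializes the global value
activity_data = {}                         # innermost scope, empty here

def lookup(symbol, *scopes):
    """Innermost scope wins; KeyError signals an undefined symbol."""
    return ChainMap(*scopes)[symbol]

assert lookup("sip.timer_t1_ms", activity_data, team_defaults,
              global_defaults) == 200

def verify_alarm_source(vnf, nfvo):
    vnf_id = vnf.get_vnf_instance_id()        # known only after allocation
    alarm = nfvo.expect("AlarmNotification")  # see the example in the text
    assert alarm["vnfInstanceId"] == vnf_id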
4.6 Execution Segments
In some cases the test execution flow can be separated into individually executable segments. For example, many
control-plane test scenarios are designed to have segments of signalling separated by periods of inactivity. These
periods of inactivity represent voice or data transmission and are commonly referred to as hold time. Hold time may be
orders of magnitude longer than the active segments of the test case, and a very large number of test cases can be
holding at the same time. It is therefore important to have a mechanism for suspending execution of the test case and
releasing associated computing resources (not test case resources) for the duration of the hold time.
Unlike high-level functions, execution segments are defined only within the scope of an individual test execution flow. They are not shareable artefacts and cannot be invoked by the test case. They only provide the TEP with the ability to execute parts of the test case individually. High-level functions of various degrees of granularity can be invoked from
within execution segments.
Segmentation also provides the ability to run parts of the execution flow in a different order. A separate model
specifying desired combinations of the segments and their permutations can be defined and provided to the TEP. The
TEP can then execute the segments in the specified order. This can provide substantial time and effort savings
compared to writing a separate test case for every desired permutation.
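As a non-normative illustration, an execution flow split into segments with hold time and a separate ordering model might look like the following sketch; the segment names and the tep.suspend() facade are assumptions:

# Minimal sketch, not normative: named segments that the TEP (not the test
# case) executes individually, suspending the test case and freeing compute
# during hold time.
segments = {
    "call_setup":    lambda res: res["caller"].send("SetupRequest"),
    "call_teardown": lambda res: res["caller"].send("ReleaseRequest"),
}

# A separate ordering model: (segment name, hold time in seconds before it).
# Alternative orderings can be supplied without writing a new test case.
default_order = [("call_setup", 0.0), ("call_teardown", 1800.0)]

def run_flow(order, resources, tep):
    for name, hold_s in order:
        if hold_s:
            tep.suspend(hold_s)   # frees computing resources; test case
                                  # resources stay allocated (clause 4.6)
        segments[name](resources)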
4.7 Test Environment
Test Environment is defined in ETSI GS NFV-TST 001 [i.1], clause 4.2. The present document introduces the concepts
of Abstract and Concrete test environments that enable reuse. As with test case resources, environments can be either abstract or concrete. Abstract environments consist of abstract resources, and concrete environments consist of
concrete resources. Abstract resources are mapped to concrete resources by the resource management system.
Before concrete resources can be used in a test case (or a group of test cases within the same test activity), they are allocated to that test activity and provisioned. In addition to specifying the required abstract resources, the abstract environment definition also specifies Provisioning Data that is applied to the allocated concrete resources to complete concrete
environment instantiation.
4.8 Test Suites & Traffic Mixes
In a typical test scenario, test cases are not executed by themselves. They run as part of either a test suite or a traffic
mix.
Test suites aggregate test cases that verify a particular set of functional requirements. Test cases in a test suite are
executed once, and success and failure are considered on an individual test case basis. Test suites impose a sequential or
parallel mode of test case execution, or define flows that combine both. Test suites can also set context (e.g. test data)
for parts of such flows.
Traffic mixes aggregate test cases that verify non-functional requirements, such as performance, robustness, etc. Test
cases in a traffic mix execute repeatedly, each at its own rate. Together, they emulate real network traffic where
multiple activities happen at the same time. Success and failure are considered over the entire traffic mix execution, and a certain number of individual test case failures is typically expected. For example, a five 9's availability metric allows one out of 100 000 test cases to fail in a successful run.
Traffic mixes define relative frequencies of different network activities and the overall rate of network traffic. Both
activity distribution and the overall rate can change over time.
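As a non-normative illustration, a traffic mix definition might carry relative frequencies, an overall rate, and a failure budget, as in the following sketch; the field names are assumptions:

# Minimal sketch, not normative. With a five 9's availability target, at
# most 1 - 0.99999 = 1e-5 of attempts may fail (one in 100 000).
traffic_mix = {
    "rate_per_second": 50.0,      # overall attempt rate
    "distribution": {             # relative frequencies (sum to 1.0)
        "voice_call": 0.6,
        "data_session": 0.3,
        "handover": 0.1,
    },
    "max_failure_ratio": 1e-5,    # five 9's availability budget
}

def mix_passed(attempts: int, failures: int, mix=traffic_mix) -> bool:
    return failures / attempts <= mix["max_failure_ratio"]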
5 Reuse Guidelines
5.1 Overview
NFV solutions are composed of contributions from multiple suppliers. As such, test cases or entire test plans may be
repeated multiple times, making reuse critical. The NFV test domain model (as defined in the present document)
enables reusability through decoupling, abstraction and modularization.
5.2 Environment Decoupling
Decoupling the test environment from test cases is achieved by strict separation of abstract and concrete resources and
using a dynamic resource management system to map the abstract resource space to the concrete resource space.
The relationship between test cases and their environments is many-to-many, which means that multiple test cases with
the same resource requirements can execute on the same test environment and that the same test case can execute on
multiple environments.
The resource management system manages the available test resources and accepts contributions of resources from
various resource contributors. From these contributions it builds a dynamic model of the concrete resource space. The
model adapts dynamically to changes in resource contributions. This dynamic model is used by the resource
management system to find a concrete resource for an abstract resource request from the resource consumer.
This dynamic model is an instance of a meta-model that specifies resource abstractions and their relationships. This
meta-model is defined statically and describes a family of resource models. The resource abstractions specified by the
meta-model are visible to resource consumers. A resource consumer can request any resource abstraction without the
knowledge of the concrete resource space.
When a test activity needs a concrete environment to run on, an abstract environment is designed to describe it. This
abstract environment is defined in terms of specific resource abstractions provided by the meta-model. This abstract
environment is a reusable artefact that multiple test activities can use to build concrete environments.
Environment meta-models and abstract environment definitions are carefully designed by solution and automation
architects and managed by the test environment management service. The end user only deals with the namespace for
resource abstractions (like a pool of VIMs) they use in the test case.
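As a non-normative illustration, the relationship between the meta-model's abstractions, an abstract environment, and the DRM's resolution step might be sketched as follows; Pool, AbstractEnvironment and resolve() are illustrative names:

# Minimal sketch, not normative: a statically defined meta-model exposes
# resource abstractions (pools) to consumers; the DRM resolves an abstract
# environment against the dynamically contributed concrete resource space.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str                                    # e.g. "vim-pool", consumer-visible
    members: list = field(default_factory=list)  # concrete resources, contributed

@dataclass
class AbstractEnvironment:
    name: str
    requests: dict                               # abstract resource name -> pool name

def resolve(env: AbstractEnvironment, pools: dict) -> dict:
    """DRM step: pick a concrete resource for every abstract request
    (naive first-fit; a real DRM would also handle contention and locking)."""
    return {abstract: pools[pool_name].members[0]
            for abstract, pool_name in env.requests.items()}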
5.3 Resource API Decoupling
Test resources are contributed from multiple sources and, in the context of multi-organizational collaboration, different
test technologies come from different suppliers. The resource APIs are therefore decoupled from any resource
implementations and are defined outside of any specific test technology. Test cases written in a test DSL have no knowledge of the test resource implementations; as long as the APIs are defined outside of any specific resource contribution, test cases can be written against these APIs, compile-time error checking can be performed, and code assist/code
completion can be provided to the user.
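As a non-normative illustration, an externally defined resource API contract might be expressed as a structural interface, as sketched below; VimApi, SupplierAVim and allocate_compute are illustrative names:

# Minimal sketch, not normative: the resource API is defined outside any
# test technology, so test cases type-check against the contract while
# suppliers ship independent implementations.
from typing import Protocol

class VimApi(Protocol):
    def allocate_compute(self, flavour: str) -> str: ...

class SupplierAVim:
    """One supplier's implementation; the test case never imports it."""
    def allocate_compute(self, flavour: str) -> str:
        return f"supplier-a-vm-{flavour}"

def test_body(vim: VimApi) -> None:
    # Static checking and code assist work against VimApi alone.
    assert vim.allocate_compute("small")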
5.4 High-level Function Decoupling
High-level Functions (HLFs) provide a high degree of reuse and a single point of truth (written or fixed once - used
everywhere). Most existing test technologies will have some form of HLFs already implemented that may be leveraged.
Similarly to test resource contributions, this necessitates having an externally defined contract for using these HLFs
with which all implementations comply. It is the responsibility of individual execution engines to compile HLF API
calls into calls on specific HLF implementations.
5.5 Test Data Decoupling
As indicated in clause 4.5, the test domain definition includes a mechanism for separation of the test case execution
flow from the data used to parameterize the test.
Resource-specific data comes from the concrete resource allocated to the test case and is not known in advance to the test case designer. Non-resource-specific data is a part of the context for the test case execution and forms a hierarchy that can be easily customized. The customization may include progressive specialization from a global set of values down
to values for individual test activities. The data is dynamically looked up by the test case using fully qualified symbols
as keys. Each fully qualified name represents a path from the root of the data tree to the node holding the value.
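As a non-normative illustration of such a lookup:

# Minimal sketch, not normative: a fully qualified symbol names a path from
# the root of the test data tree to the node holding the value.
test_data = {"protocols": {"sip": {"timers": {"t1_ms": 500}}}}

def lookup(tree, fq_name):
    node = tree
    for part in fq_name.split("."):  # each name part descends one level
        node = node[part]            # a missing part is a run-time KeyError
    return node

assert lookup(test_data, "protocols.sip.timers.t1_ms") == 500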
The symbol lookup functionality is provided by the TEP and can have its own implementation. Test data is provided to
the TEP as metadata that can be converted into any TEP-specific format or integrated into already existing
functionality.
In order to use test case data, it is necessary for the test case designer to provide names for individual data elements. In
the case of resource-specific data, the problem of checking the validity of these names at compile time, and of providing code assist/code completion functionality to the user, is easily solved by making getters for the data elements that the resource exposes part of the resource API.
For non-resource-specific data this problem is more challenging. Since this type of data is a part of the context for the
test case execution, it is not known at design time. If a test case tries to look up a symbol that is not present in the data, the result is a run-time error. In addition, the namespace can be very large, and fully qualified symbol names can be long and prone to spelling errors. The problem can be rectified if a schema for the test data is made available to the test case at design time. Symbol names can then be validated against this schema, auto-proposed, and auto-completed.
6 Recommended Models
6.1 Overview
In order to create a common test language, the test cases follow a standardized model that the language can manipulate,
and that can be implemented within individual test technologies. The model includes shareable and reusable artefacts
tied to the test domain:
• execution flow;
• data;
• abstract environment definitions;
• etc.
6.2 Test Case Model
The diagram in Figure 3 shows test case elements and their relationships. The test case has a test script and uses test
data, high-level function libraries and segment ordering data. The test script in turn has resource declaration and
execution flow definition parts. The resource declaration section aggregates a set of abstract resources that realize their
respective APIs. The execution flow can call the abstract resource APIs. It can also invoke high-level functions. High-level functions can invoke other high-level functions and call abstract resource APIs. The execution flow uses
the test data model for dynamic symbol lookup. The execution flow aggregates execution segments that are ordered
according to the segment ordering data model.
Figure 3: Test Case Elements
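As a non-normative illustration, the elements of Figure 3 might be represented as plain data structures; the field names mirror the figure and carry no normative weight:

# Minimal sketch, not normative: the test case model of Figure 3.
from dataclasses import dataclass, field

@dataclass
class TestScript:
    resource_declarations: list = field(default_factory=list)  # abstract resources
    execution_flow: list = field(default_factory=list)         # execution segments

@dataclass
class TestCase:
    script: TestScript
    test_data: dict = field(default_factory=dict)         # dynamic symbol lookup
    hlf_libraries: list = field(default_factory=list)     # high-level functions
    segment_ordering: list = field(default_factory=list)  # segment ordering data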
6.3 Test Environment Model
The diagram in Figure 4 shows how test environments relate to test cases. Abstract test environments aggregate abstract
resources, which in turn are mapped to concrete resources. There are three major types of concrete resources: elements
of the SUT, test tools used to interact with the SUT and reservable data. An example of reservable data could be a
license key with a limited set of instances.
The link between the abstract and the concrete environments is provided by the Environment Meta-model. It models the
concrete resource space as a set of reservable entities and provides a number of abstractions (e.g. pools) that can be
referenced from the abstract environment definition. The meta-model captures domain knowledge about the concrete
resource space and considerably simplifies abstract environment definition.
Before concrete resources can be used in a test case - or a group of test cases within the same test activity - they are
provisioned (configured). The abstract environment definition specifies Provisioning Data that is applied to the allocated
concrete resources to complete concrete environment instantiation.
Figure 4: Test Environment in Relation to Test Cases
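As a non-normative illustration of concrete environment instantiation per Figure 4; the drm facade and the requests, provisioning_data and provision() names are assumptions:

# Minimal sketch, not normative: allocate a concrete resource for each
# abstraction, then apply the Provisioning Data named by the abstract
# environment definition.
def instantiate_environment(abstract_env, drm):
    concrete = {name: drm.allocate(abstraction)   # abstract -> concrete
                for name, abstraction in abstract_env.requests.items()}
    for name, resource in concrete.items():
        resource.provision(abstract_env.provisioning_data.get(name, {}))
    return concrete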
6.4 Test Scenarios
The diagram in Figure 5 shows test suites and traffic mixes in relation to the test cases. Test suites and traffic mixes
aggregate test cases and verify functional and non-functional requirements respectively. All requirements define a set of
attributes or keywords that the test cases are labelled with. Dynamic suites and traffic mixes are constructed from test
cases selected based on these labels. A natural constraint is that all test cases within the same test suite or traffic mix are
able to execute on the same test environment. Test cases with non-intersecting environment requirements cannot be part
of the same test suite or traffic mix.
Figure 5: Statically and Dynamically Defined Test Suites and Traffic Mixes
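As a non-normative illustration of dynamic suite construction per Figure 5; catalogue entries are assumed to be dicts carrying "labels" and "compatible_envs" sets:

# Minimal sketch, not normative: select test cases whose labels cover the
# requirement keywords, then enforce the constraint that all selected cases
# can execute on a common test environment.
def build_suite(catalogue, required_labels):
    selected = [tc for tc in catalogue if required_labels <= tc["labels"]]
    env_sets = [tc["compatible_envs"] for tc in selected]
    common = set.intersection(*env_sets) if env_sets else set()
    if not common:
        raise ValueError("selected test cases share no common environment")
    return selected, common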
6.5 Full Domain Model
The domain diagrams in Figures 3 through 5 combine to form the recommended test case domain model shown in Figure 6.
Figure 6: Recommended Test Case Domain Model
7 Test DSL
7.1 Overview
Vendors have their own test technologies, interfaces into the system under test, and test languages (usually GPLs like Java®, Ruby®, Python®, etc.). In order to create a common test language, it is necessary for the test cases to follow a
standardized test case model that the language can manipulate, and that can be implemented within individual test
technologies. As stated in clause 6, the model includes shareable and reusable artefacts tied to the test domain:
execution flow, data, abstract resources, environment, etc.
Integration of multiple test technologies is only possible with a system that can accept contributions of test resources from
multiple parties. These contributions may include lab resources, test APIs, test data, high-level function libraries, test
execution platforms, etc. The test environment is then constructed dynamically from various contributions. To allow the
dynamic nature of the test environment, the test cases are decoupled from specific resource contributions and express the
test process in terms of resource abstractions. Mapping these abstractions to concrete resources is the job of a dynamic
resource management system. This is done by creating an environment resource meta-model available to the test case
developers at design time. The meta-model is then used for creation of specific environment instance models at runtime.
Each environment instance includes dynamic resource contributions to which resource abstractions are mapped.
7.2 Test DSL Concepts
A test case is a computer program. This program has syntax and semantics. The semantics reflect the meaning of what
the program does and the syntax describes a particular representation of this meaning. Many different syntactic
representations may result in the same program ...