ISO/TS 37444:2023
TECHNICAL SPECIFICATION
ISO/TS 37444
First edition
2023-06
Electronic fee collection — Charging
performance framework
Perception de télépéage — Cadre de performance d'imputation
Reference number
ISO/TS 37444:2023(E)
© ISO 2023
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Symbols and abbreviated terms
5 Examination framework
5.1 General
5.2 Method for defining a specific examination framework
5.2.1 General
5.2.2 Selection of metrics to be evaluated
5.2.3 Definition of environmental conditions and performance requirements
5.2.4 Determination of required sample sizes
5.2.5 Selection of methods for generating charging input and reference data
5.2.6 Determination of test routes and trips
5.2.7 Definition of measurement time period
5.2.8 Documentation of the specific examination framework
5.3 Sources of data
5.4 Methods of generating charging input
5.4.1 General
5.4.2 Predefined routes (identifier: “PVP”)
5.4.3 Reference system (used in combination with identifiers: “PVR” and “UVR”)
5.4.4 Simulated OBE/FE (identifier: “SO”)
5.4.5 Dedicated OBE testing (identifier: “DO”)
6 Charging performance metrics
6.1 General
6.2 Metric identification
6.3 Charge report metrics
6.3.1 General
6.3.2 Metrics relevant for all schemes
6.3.3 Metrics only applicable to discrete schemes
6.3.4 Metrics applicable to continuous schemes
6.4 Toll declaration metrics
6.4.1 General
6.4.2 Metrics relevant for all schemes
6.4.3 Metrics only applicable to discrete schemes
6.4.4 Metrics applicable to continuous schemes
6.5 Billing details metrics
6.6 Payment claim metrics
6.7 Exception list metrics
6.8 User account metrics
6.9 End-to-end metrics
6.10 Applicability of metrics to scheme types
6.11 Charging metric selection tables
6.11.1 General
6.11.2 Discrete
6.11.3 Autonomous discrete
6.11.4 Autonomous continuous
7 Examination tests
7.1 General
7.2 Technology-independent tests
7.2.1 General
7.2.2 ET-CM-E2E-1 E2E — Correct charging rate
7.2.3 ET-CM-E2E-2 E2E — Overcharging rate
7.2.4 ET-CM-E2E-3 E2E — Undercharging rate
7.2.5 ET-CM-E2E-4 E2E — Late charging rate
7.2.6 ET-CM-UA-1 UA — Correct charging rate
7.2.7 ET-CM-UA-2 UA — Overcharging rate
7.2.8 ET-CM-UA-3 UA — Undercharging rate
7.2.9 ET-CM-UA-4 UA — Accurate application of payments and refunds
7.2.10 ET-CM-UA-5 UA — Accurate personalization of OBEs
7.2.11 ET-CM-EL-1 EL — Correct exception list generation rate
7.2.12 ET-CM-EL-2 EL — Incorrect exception list generation rate
7.2.13 ET-CM-PC-1 PC — Correct charging rate
7.2.14 ET-CM-PC-2 PC — Overcharging rate
7.2.15 ET-CM-PC-3 PC — Undercharging rate
7.2.16 ET-CM-PC-4 PC — Latency — TC
7.2.17 ET-CM-PC-5 PC — Late payment claims rate
7.2.18 ET-CM-PC-6 PC — Rejected payment claim rate
7.2.19 ET-CM-BD-1 BD — Correct charging rate
7.2.20 ET-CM-BD-2 BD — Overcharging rate
7.2.21 ET-CM-BD-3 BD — Undercharging rate
7.2.22 ET-CM-BD-4 BD — Incorrect charging rate
7.2.23 ET-CM-BD-5 BD — Latency — TC
7.2.24 ET-CM-BD-6 BD — Late billing details rate
7.2.25 ET-CM-BD-7 BD — Rejected billing details rate
7.2.26 ET-CM-BD-8 BD — Incorrectly rejected billing details rate
7.2.27 ET-CM-BD-9 BD — Inferred billing details rate
7.2.28 ET-CM-CR-1 CR — Usage evidence availability
7.2.29 ET-CM-CR-2 CR — Usage evidence integrity
7.2.30 ET-CM-CR-3 CR — Usage evidence time-to-first-fix
7.3 Technology-dependent tests
7.3.1 Autonomous discrete specific examination tests
7.3.2 Discrete — Optional toll declaration metrics
7.3.3 Autonomous continuous specific examination tests
Annex A (informative) Examination test documentation template
Annex B (informative) Examination framework considerations
Annex C (informative) Statistical considerations
Annex D (informative) Methods for reducing sample sizes during the evaluation phase
Annex E (informative) Examples of specific examination frameworks
Annex F (informative) Defining performance requirements
Bibliography
Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards
bodies (ISO member bodies). The work of preparing International Standards is normally carried out
through ISO technical committees. Each member body interested in a subject for which a technical
committee has been established has the right to be represented on that committee. International
organizations, governmental and non-governmental, in liaison with ISO, also take part in the work.
ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of
electrotechnical standardization.
The procedures used to develop this document and those intended for its further maintenance are
described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the
different types of ISO documents should be noted. This document was drafted in accordance with the
editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of
any patent rights identified during the development of the document will be in the Introduction and/or
on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see
www.iso.org/iso/foreword.html.
This document was prepared by Technical Committee ISO/TC 204, Intelligent transport systems, in
collaboration with the European Committee for Standardization (CEN) Technical Committee CEN/TC
278, Intelligent transport systems, in accordance with the Agreement on technical cooperation between
ISO and CEN (Vienna Agreement).
This first edition cancels and replaces the second editions ISO/TS 17444-1:2017 and ISO/TS 17444-2:2017,
which have been technically revised.
The main changes are as follows:
— the resulting document has been renumbered as ISO/TS 37444;
— various editorial changes have been made to improve the readability of the text;
— a technology-neutral definition of metrics and examination tests has been applied, which also includes
support for tolling systems based on automatic number plate recognition (ANPR) technology;
— terminology and references to other documents have been updated.
Any feedback or questions on this document should be directed to the user’s national standards body. A
complete listing of these bodies can be found at www.iso.org/members.html.
Introduction
0.1 General
Electronic tolling systems are complex distributed systems involving critical technology such as
dedicated short-range communication (DSRC), camera-based technology (e.g. automatic number plate
recognition, ANPR) and global navigation satellite systems (GNSS). These technologies are all subject
to a certain behaviour that can affect the computation of the charges. Thus, to protect the interests
of the different stakeholders involved, particularly toll service users (SUs) and toll chargers (TCs), it
is essential to define metrics that measure the performance of the system in terms of computation
of charges, and that ensure that the potential resulting errors are acceptable. These metrics will be
a useful tool for establishing requirements for the systems and for examining the system capabilities
during acceptance and throughout the operational life of the system.
In addition, to ensure the interoperability of different systems, it is necessary to agree on common
metrics for use and on the actual values that define the required acceptable performances. However,
these points are not covered in this document.
Instead, this document is defined as a toolbox standard of examination tests, with a method for defining
and documenting a specific examination framework to meet specific needs. The detailed choice of the set
of examination tests within an examination framework depends on the application and the respective
context. Conformance to this document means using the definitions and prescriptions laid out in this
document whenever the respective system aspects are subjected to performance measurements, rather
than using other definitions and examination methods.
0.2 Charging performance metrics
This document also defines a set of charging performance metrics with definitions, principles and
formulae, which together make up a reference framework for the establishment of requirements for
electronic fee collection (EFC) systems and the subsequent examination of charging performance.
These charging performance metrics are intended for use with any toll scheme, regardless of its
technical underpinnings, system architecture, tariff structure, geographical coverage or organizational
model. They are defined to treat technical details that can differ between technologies as a “black
box”. They focus solely on the outcome of the charging process, i.e. the amount charged in relation
to a pre-measured or theoretically correct amount, rather than on intermediate variables from various
components such as sensors (e.g. positioning accuracy, signal range or optical resolution). This approach
ensures comparable results for each metric in all relevant situations.
The metrics are designed to cover the information exchanged on the front-end (FE) interface and the
interoperability interfaces between toll service providers (TSPs) and TCs, as well as information on the
end-to-end level.
Metrics for the following information exchanges are defined:
— charge reports (including usage evidence);
— toll declarations;
— billing details and associated event data;
— payment claims on the level of user accounts;
— exception lists;
— end-to-end metrics which assess the overall performance of the charging process.
The proposed metrics are specifically intended to protect the interests of the actors in a toll system,
such as TSPs, TCs and SUs. They can be used to define requirements (e.g. for requests for proposals) and
for performance assessments.
Toll schemes take on various forms as identified in ISO 17573-1 and ISO 12855. In order to create a
uniform performance metric specification, toll schemes are grouped into two classes based on the
character of their primary charging variable:
— charging based on discrete events (charges associated with the fact that a vehicle crosses, or is
present within, a certain zone);
— charging based on a continuous measurement (duration or distance).
The following are examples of discrete (event-based) toll schemes.
— Single object charging: a road section, bypass, bridge, tunnel, mountain pass or even a ferry, charged
per passage.
EXAMPLE 1 Most tolled bridges belong to this category.
— Closed road charging: a fixed amount is charged for a certain combination of entry and exit on a
motorway or other closed road network.
EXAMPLE 2 Many of the motorways in Southern Europe belong to this category.
— Discrete road links charging: determined by use of specified road links, whether or not they are
used in their entirety.
EXAMPLE 3 Heavy goods vehicle (HGV) charge in Germany.
— Charging for cordon crossing: triggered by passing in or out through a cordon that encircles a city
core, for example.
EXAMPLE 4 Congestion and infrastructure charging schemes in Stockholm and Gothenburg (Sweden).
The following are examples of continuous toll schemes.
— Charging based on direct distance measurement: defined as an amount per km driven.
EXAMPLE 5 HGV charge in Switzerland and basic US vehicle miles-travelled toll system concepts.
— Charging based on direct distance measurement in different tariff zones or road types: defined
as an amount per km driven, with different tariffs applying in different zones or on different road
types. This is a widely discussed approach, also known as time-distance-place charging, and is
under consideration in European countries.
EXAMPLE 6 OReGO, the pilot programme in Oregon, North America.
— Time in use charge: determined by the accumulated time a vehicle has been in operation, or
alternatively, by the time the vehicle has been present inside a predefined zone.
In all of these toll schemes, tolls can additionally vary as a function of vehicle class characteristics (such
as trailer presence, number of axles, taxation class and operating function) and depending on time of
day or day of week, such that, for example, tariffs are higher in rush hour and lower on weekends.
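
As an informative sketch only, the following Python fragment illustrates how a tariff of this kind could depend on vehicle class and time of day. The class names, periods and rates used here are hypothetical and are not taken from this document or from any particular toll scheme.

# Hypothetical tariff lookup: the rate per charge object detection depends on
# vehicle class and on whether the detection falls in a peak period.
from datetime import datetime

TARIFF = {  # (vehicle class, period) -> rate in monetary units (illustrative values)
    ("HGV_4_AXLE", "peak"): 4.50,
    ("HGV_4_AXLE", "offpeak"): 3.00,
    ("CAR", "peak"): 1.20,
    ("CAR", "offpeak"): 0.80,
}

def period(ts: datetime) -> str:
    """Classify a timestamp as peak or off-peak; weekends are off-peak."""
    if ts.weekday() >= 5:
        return "offpeak"
    return "peak" if 7 <= ts.hour < 9 or 16 <= ts.hour < 19 else "offpeak"

def tariff(vehicle_class: str, ts: datetime) -> float:
    """Return the rate applicable to a single charge object detection."""
    return TARIFF[(vehicle_class, period(ts))]

# A four-axle HGV detected on a Monday at 08:30 is charged the peak rate.
print(tariff("HGV_4_AXLE", datetime(2023, 6, 5, 8, 30)))  # 4.5
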
With this degree of complexity, it is not surprising that attempts to evaluate and compare technical
solutions for SU charging have been made on an individual basis each time a procurement or study is
initiated, with only limited ability to reuse prior comparisons made by other testing entities.
The identification of different types of schemes as proposed in the ISO 17575 series and their grouping
into the two classes mentioned above is described in Table 1. Table 1 also identifies the examples
mentioned above.
Table 1 — Toll scheme designs grouped according to scheme categories

Examples | Scheme type | ISO 17575 series category
Single object charging | Discrete | Sectioned roads pricing
Closed road charging | Discrete | Sectioned roads pricing
Discrete road links charging | Discrete | Sectioned roads pricing
Charging for cordon crossing | Discrete | Cordon pricing
Time in use charge | Continuous | Area pricing — time
Cumulative distance charge | Continuous | Area pricing — distance
Charging for cumulative distance in different zones (or by road type) | Continuous | Area pricing — distance
0.3 Examination framework
The examination framework that is defined in this document is designed for measuring the metrics
defined in Clause 6. The general aim is to achieve a maximum comparability and reproducibility
of the results without restricting the technological choices in system design. Specific examination
frameworks can be defined for the evaluation and monitoring phases of a project due to the differences
in the availability of equipped vehicles.
a) Evaluation phase
The evaluation phase encompasses system evaluation and selection, as well as commissioning and
ramp-up during implementation. Important aspects of this phase are:
— relatively small sample sizes; and
— well-controlled behaviour of test vehicles.
b) Monitoring phase
After the system has gone into operation, its behaviour needs to be monitored for several reasons,
such as fine-tuning of the system performance and monitoring of service level agreements (SLAs) between
contractual partners (supplier, TC, TSP, etc.). In this phase, the following system aspects can be
expected:
— very large sample sizes are possible, but with unknown behaviour of the vehicles;
— in principle, all measurements from the implementation phase are also possible.
0.4 Reader's guide
To understand the content of this document, the reader should be aware of the methodology and
assumptions used to develop the examination framework; therefore, a suggested reading order is given
below.
a) Annex B provides details of the underlying considerations for developing the examination
framework.
b) Annex C provides background statistical information which will enable the reader to determine
sample sizes and confidence limits based on the defined performance requirements.
c) Clause 5 provides the definition of the examination framework for the evaluation of charging
performance.
d) Clause 6 provides definitions of charging metrics and their applicability to the scheme types
described above.
e) Clause 7 contains the toolbox of examination tests for the evaluation of charging performance for
the identified scheme types.
f) Annex A contains an example template for the documentation of examination tests and their
results.
g) Annex D contains methods which can be used to reduce the required sample sizes for metrics with
high and low probabilities during the evaluation phase.
h) Annex E provides examples of specific examination frameworks which have been developed in
accordance with the methodology in 5.2.
TECHNICAL SPECIFICATION ISO/TS 37444:2023(E)
Electronic fee collection — Charging performance
framework
1 Scope
This document defines the charging performance metrics to be used during the evaluation or on-
going monitoring of an electronic fee collection (EFC) system and the examination framework for the
measurement of these metrics.
It specifies a method for the specification and documentation of a specific examination framework
which can be used by the responsible entity to evaluate charging performance for a particular
information exchange interface or for overall charging performance within a toll scheme.
The following scheme types are within the scope of this document:
a) discrete schemes;
b) continuous schemes (autonomous type of systems).
This document defines measurements only on standardized interfaces.
This document defines metrics for the charging performance of EFC systems in terms of the level of
errors associated with charging computation.
This document describes a set of metrics with definitions, principles and formulations, which together
make up a reference framework for the establishment of requirements for EFC systems and the
subsequent examination of charging performance.
This document defines metrics for the following information exchanges:
— charge reports (including usage evidence);
— toll declarations;
— exception lists;
— billing details and associated event data;
— payment claims on the level of service user accounts;
— end-to-end metrics which assess the overall performance of the charging process.
These metrics focus solely on the outcome of the charging process, i.e. the amount charged in relation
to a pre-measured or theoretically correct amount, rather than on intermediate variables from various
components such as sensors (e.g. positioning accuracy, signal range or optical resolution). This approach
ensures comparable results for each metric in all relevant situations.
The following aspects are outside the scope of this document.
— Definition of specific numeric performance bounds, or average or worst-case error bounds in
percentage or monetary units.
— Specification of a common reference system which would be required for comparison of performance
between systems.
— Measurements on proprietary interfaces.
NOTE It is not possible to define standardized metrics on such system properties. Neither is it possible
to define metrics for parts of the charging processing chain which are considered to be the internal matter
of an interoperability partner, such as:
— equipment performance, e.g. for on-board equipment (OBE), roadside equipment (RSE) or data
centres, such as signal range, optical resolution or computing system availability;
— position performance metrics: the quality of data generated by position sensors is considered as
an internal aspect of the GNSS front end. It is masked by correction algorithms, filtering, inferring of data
and the robustness of the charge object recognition algorithms;
— the evaluation of the expected performance of a system based on modelling and measured data
from a trial at another location.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
ISO 12855:2022, Electronic fee collection — Information exchange between service provision and toll
charging
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https:// www .iso .org/ obp
— IEC Electropedia: available at https:// www .electropedia .org/
3.1
absolute charging error
difference between the measured charge (toll) value and the actual value as measured by a reference
system where a positive error means that the measurement exceeds the actual value
[SOURCE: ISO/TS 17573-2:2020, 3.1]
3.2
accepted charging error interval
interval of the relative charging error (3.22) that the toll charger (3.27) considers as acceptable, i.e. as
correct charging
[SOURCE: ISO/TS 17573-2:2020, 3.3]
3.3
average relative charging error
ratio between the sum of computed charges (measurement) associated to a set of vehicles during a
certain period of time and the actual charge due (reference) minus 1
[SOURCE: ISO/TS 17573-2:2020, 3.21]
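
As an informative illustration only (the notation is introduced here for convenience and is not taken from ISO/TS 17573-2:2020), the definitions in 3.1 and 3.3 can be written as follows, with \hat{c} denoting the measured (computed) charge and c_ref the corresponding reference (actual) charge due:

\[
  e_{\mathrm{abs}} = \hat{c} - c_{\mathrm{ref}}
  \qquad \text{(absolute charging error, 3.1)}
\]
\[
  \mathrm{ARCE} = \frac{\sum_{i=1}^{n} \hat{c}_i}{\sum_{i=1}^{n} c_{\mathrm{ref},i}} - 1
  \qquad \text{(average relative charging error over a set of } n \text{ charges, 3.3)}
\]
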
3.4
billing detail
information needed to determine or verify the amount due for the usage of a given service
[SOURCE: ISO/TS 17573-2:2020, 3.25]
3.5
charge object
geographic or road-related object for the use of which a charge is applied
[SOURCE: ISO/TS 17573-2:2020, 3.31]
3.6
charge object detection
event marking the usage of a charge object (3.5)
[SOURCE: ISO/TS 17573-2:2020, 3.32]
3.7
charge parameter change
event occurring within a tolling system, that is relevant for charge calculation, such as change of vehicle
category, but not for the detection of a charge object (3.5) itself
[SOURCE: ISO/TS 17573-2:2020, 3.34]
3.8
charge report
information containing road usage and related information originated at the front end (3.15)
[SOURCE: ISO/TS 17573-2:2020, 3.35]
3.9
charging performance metrics
specific calculations used to describe the charging performance of a system
[SOURCE: ISO/TS 17573-2:2020, 3.37]
3.10
continuous toll scheme
toll scheme where the charge is calculated based on the accumulation of continuously measured
parameter(s)
[SOURCE: ISO/TS 17573-2:2020, 3.50, modified — EXAMPLE removed.]
3.11
data analysis
parameter estimation and inference based on samples (3.24)
3.12
discrete toll scheme
toll scheme where the charge is calculated based on distinct events associated with the identification of
charge objects (3.5) such as crossing a cordon, passing a bridge and being present in an area
Note 1 to entry: Each event is associated with a certain charge.
[SOURCE: ISO/TS 17573-2:2020, 3.62, modified — Note 1 to entry added.]
3.13
evaluation
systematic process of determining how individuals, procedures, systems or programs have met
formally agreed objectives and requirements
[SOURCE: ISO/TS 17573-2:2020, 3.75]
3.14
false positive
event that was erroneously detected, but did not take place
3.15
front end
part of an EFC system which consists of on-board equipment (OBE) and possibly of a proxy where road
tolling information and usage data are collected and processed for delivery to the back end
[SOURCE: ISO/TS 17573-2:2020, 3.85]
3.16
interval estimation
calculation of lower and upper bounds for unknown parameters, assuring a predefined coverage
probability of the true value
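
The statistical background for interval estimation is given in Annex C, which is not reproduced in this sample. As an informative sketch only, the following Python fragment shows one common textbook method, the Wilson score interval, for obtaining lower and upper bounds with approximately 95 % coverage for an observed rate (e.g. a correct charging rate measured over a number of trips); this particular method is not prescribed by this document.

from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 gives ~95 % coverage)."""
    if n == 0:
        raise ValueError("sample size must be positive")
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Example: 990 correctly charged trips observed out of 1 000.
print(wilson_interval(990, 1000))  # approximately (0.982, 0.995)
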
3.17
missed recognition
usage of a charge object (3.5) that is not recorded by the system
3.18
monitoring
collection and assessment of status data for a process or a system
Note 1 to entry: This can be used to observe metrics during operation.
[SOURCE: ISO/TS 17573-2:2020, 3.120, modified — Note 1 to entry added.]
3.19
overcharging
situation where the calculated charge is above the accepted charging error interval (3.2)
[SOURCE: ISO/TS 17573-2:2020, 3.130]
3.20
payment claim
statement made available to the payer by the payee to justify the amount due
Note 1 to entry: The statement can include the concluded billing detail (3.4).
[SOURCE: ISO/TS 17573-2:2020, 3.133]
3.21
population
totality of items under consideration
[SOURCE: ISO/TS 17573-2:2020, 3.142]
3.22
relative charging error
ratio between the absolute charging error (3.1) and the reference value
[SOURCE: ISO/TS 17573-2:2020, 3.154]
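
As an informative illustration only (using the notation introduced after 3.3, with CELB and CEUB as abbreviated in Clause 4), the relative charging error and its relation to the accepted charging error interval (3.2) can be written as:

\[
  e_{\mathrm{rel}} = \frac{e_{\mathrm{abs}}}{c_{\mathrm{ref}}} = \frac{\hat{c} - c_{\mathrm{ref}}}{c_{\mathrm{ref}}}
\]
\[
  \text{correct charging: } \mathrm{CELB} \le e_{\mathrm{rel}} \le \mathrm{CEUB}; \qquad
  \text{overcharging (3.19): } e_{\mathrm{rel}} > \mathrm{CEUB}; \qquad
  \text{undercharging (3.32): } e_{\mathrm{rel}} < \mathrm{CELB}
\]
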
3.23
representative trip
trip (3.31) that is of a distance larger than a defined threshold and so is to be considered by the related
metrics
3.24
sample
subset of a population (3.21) made up of one or more of its individual parts
[SOURCE: ISO/TS 17573-2:2020, 3.164]
3.25
specific examination framework
particular instance of a set of examination tests defined by an entity to determine the performance of
specific selected charging metrics during either evaluation (3.13) and/or monitoring (3.18)
3.26
successful charging
situation where the user has been correctly charged according to the rules of the system
[SOURCE: ISO/TS 17573-2:2020, 3.177]
3.27
toll charger
entity which levies toll for the use of vehicles in a toll domain
[SOURCE: ISO/TS 17573-2:2020, 3.194]
3.28
toll declaration
statement to declare the usage of a given toll service to a toll charger (3.27)
[SOURCE: ISO/TS 17573-2:2020, 3.199]
3.29
toll service provider
entity providing toll services in one or more toll domains
[SOURCE: ISO/TS 17573-2:2020, 3.206]
3.30
toll service user
customer of a toll service provider (3.29), i.e. one liable for toll, owner of the vehicle, fleet operator or
driver depending on the context
[SOURCE: ISO/TS 17573-2:2020, 3.207]
3.31
trip
part of the space-time trajectory of a vehicle within a toll domain
[SOURCE: ISO/TS 17573-2:2020, 3.220]
3.32
undercharging
situation where the calculated charge is below the accepted charging error interval (3.2)
[SOURCE: ISO/TS 17573-2:2020, 3.225]
3.33
user account
centrally or on-board stored transport-related service rights of the user in relation to a service provider
[SOURCE: ISO/TS 17573-2:2020, 3.228]
3.34
user complaint
complaints from users related to a specific service provision
[SOURCE: ISO/TS 17573-2:2020, 3.229]
4 Symbols and abbreviated terms
ANPR automatic number plate recognition
ARCE average relative charging error
BD billing details
CCA compliance checking detections using ANPR systems
CCD compliance checking detections using DSRC systems
CCR continuous charge report
CCTV closed-circuit television
CELB charging error interval lower bound
CEUB charging error interval upper bound
CI charging input
CM charging metric
CR charge report
CTD continuous toll declaration
DCR discrete charge report
DO dedicated OBE testing
DSRC dedicated short-range communications
DTD discrete toll declaration
E2E end-to-end
EETS European electronic toll service
EFC electronic fee collection
FE front end
GBPT GNSS-based positioning terminals
GNSS global navigation satellite system
GPP GNSS path post processing
IS independent reference system
ITS intelligent transport systems
KPI key performance indicator
LPN licence plate number
MBDD maximum billing details delay
MPCD maximum payment claim delay
MTDD maximum toll declaration delay
MUSD maximum user statement delay
OBE on-board equipment
PC payment claim
PV probe vehicle
RCE relative charging error
REE relative evidence error
RSE roadside equipment
SLA service level agreement
SO simulated OBE/FE
SU service user
TC toll charger
TC-BE toll charger back end
TD toll declaration
TSP toll service provider
TSP-BE toll service provider back end
TSP-FE toll service provider front end
UA user account
UVR service user vehicle
5 Examination framework
5.1 General
This clause:
— defines the process that should be followed to define a specific examination framework for a
particular purpose (5.2);
— provides a definition of the sources of data that can be used by the examination tests to calculate the
charging metrics (5.3);
— provides the definitions of the methods of generating charging input referenced in the examination
tests defined in Clause 7 (5.4).
5.2 Method for defining a specific examination framework
5.2.1 General
Figure 1 provides an overview of the process that should be followed to define a specific instance of
an examination framework for the evaluation of charging metrics for the roles of TSP and/or TC in a
particular toll scheme. Further details are provided in 5.2.2 to 5.2.8.
Figure 1 — Method for defining a specific examination framework
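
As an informative sketch only, the outcome of the steps in 5.2.2 to 5.2.8 could be recorded in a structure along the following lines, which then forms the documentation of the specific examination framework (5.2.8). The Python field names and the example values are hypothetical and are not defined by this document.

from dataclasses import dataclass, field

@dataclass
class MetricSelection:
    metric_id: str                 # metric identifier selected per 6.2 and the tables in 6.11
    environment: str               # "representative" or "challenging" (5.2.3)
    performance_requirement: str   # requirement to be met for this metric (5.2.3)
    sample_size: int               # required sample size determined per 5.2.4
    charging_input_method: str     # e.g. "PVP", "PVR", "UVR", "SO" or "DO" (5.4)

@dataclass
class SpecificExaminationFramework:
    phase: str                               # "evaluation" or "monitoring"
    role: str                                # "TSP" and/or "TC"
    metrics: list[MetricSelection] = field(default_factory=list)
    test_routes_and_trips: str = ""          # documented per 5.2.6
    measurement_time_period: str = ""        # documented per 5.2.7

# Example entry with purely hypothetical values:
sef = SpecificExaminationFramework(phase="evaluation", role="TC")
sef.metrics.append(MetricSelection("ET-CM-E2E-1", "representative",
                                   "at least 99 % correct charging", 1000, "PVP"))
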
5.2.2 Selection of metrics to be evaluated
The entity responsible for the definition of the specific examination framework shall determine the
metrics to be measured in the phases of evaluation and monitoring for the roles of TSP and/or TC using
the applicable tables in 6.11.
5.2.3 Definition of environmental conditions and performance requirements
The entity responsible for the definition of the specific examination framework shall determine the
environmental conditions (representative or challenging) and associated performance requirements to
be met for each metric selected in 5.2.2.
NOTE 1 Assessment of charging metrics in a representative environment allows performance in the
operational environment to be assessed. However, care is to be taken to ensure that the charging data input/
selection of representative trips is comparable to that expected for the operational environment.
The choice of representative environmental conditions will, in practice, result in a multidimensional
parameter space (e.g. air moisture, topography, electromagnetic environment, etc.). It is important to
choose these parameters and their values with care to ensure that tests are performed in fully realistic
sets of conditions (or at least the most probable ones) while keeping the number of necessary tests to a
minimum.
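
As an informative sketch only (the parameter names and values are hypothetical), the following Python fragment shows how quickly the number of candidate test conditions grows when environmental parameters are combined, which is why the parameters and their values need to be chosen with care:

from itertools import product

conditions = {
    "topography": ["flat", "mountainous", "urban canyon"],
    "weather": ["dry", "heavy rain"],
    "traffic": ["free flow", "congested"],
}

# Full factorial combination of the example parameters above.
combinations = list(product(*conditions.values()))
print(len(combinations))  # 12 candidate test conditions from only 3 parameters
for combo in combinations[:3]:
    print(dict(zip(conditions.keys(), combo)))
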
NOTE 2 Assessments of charging metrics in a challenging environment are typically used to determine
behaviour for worst-case scenarios in the operational environment. Due to the nonlinear dependence of system
performance on the environmental conditions, it is difficult to transpose measured performance levels to those
in operational systems.
The environmental conditions and associated performance requirements to be met for each metric
selected should be documented in each examination test within the specific examination framework.
In cases where comparative testing is chosen (e.g. a new population of OBE is introduced into an
existing tolling system), the influence of the environmental conditions on the comparison results could
be reduced if the tests were performed in parallel. In this case, both populations are exposed to the
same conditions. Nonetheless, it is still necessary to perform the step described in 5.2.3.
NOTE 3 This is important to ensure that the comparative test is performed under all relevant conditions; it also
helps to pinpoint dependencies of performance differences on issues with robustness to certain environmental
conditions, i.e. one population of equipment being more sensitive to certain environmental conditions than the
other.
5.2.4 Determination of required sample sizes
Based on the performance requirements set for each metric selected in 5.2.3, the entity responsible
for the definition of the specific examination framework shall determine the samp
...







