Space - Use of GNSS-based positioning for road Intelligent Transport Systems (ITS) - Metrics and Performance levels detailed definition

This document constitutes the main deliverable from WP1.1 of the GP-START project. It is devoted to a thorough review of the metrics defined in EN 16803-1 and proposes a performance classification for GNSS-based positioning terminals designed for road applications. It will serve as one of the inputs to the elaboration of prEN 16803-2:2019 and prEN 16803-3:2019.
This document should serve as a starting point for discussion within CEN/CENELEC/JTC 5/WG1 on a consolidated set of performance metrics and associated classification logic. The proposals and conclusions appearing in this document are therefore only preliminary.

Detaillierte Definition von Metriken und Leistungsstufen

Espace - Utilisation de la localisation basée sur les GNSS pour les systèmes de transport routiers intelligents - Définition détaillée des mesures et niveaux de performance

Vesolje - Uporaba sistemov globalne satelitske navigacije (GNSS) za ugotavljanje položaja pri inteligentnih transportnih sistemih (ITS) v cestnem prometu - Podrobna opredelitev meritev in ravni uspešnosti

General Information

Status
Published
Public Enquiry End Date
11-Dec-2019
Publication Date
30-Mar-2020
Technical Committee
Current Stage
6060 - National Implementation/Publication (Adopted Project)
Start Date
12-Mar-2020
Due Date
17-May-2020
Completion Date
31-Mar-2020
Technical report
SIST-TP CEN/TR 17448:2020 - BARVE
English language
41 pages

Standards Content (Sample)


SLOVENIAN STANDARD
01-May-2020
Vesolje - Uporaba sistemov globalne satelitske navigacije (GNSS) za ugotavljanje
položaja pri inteligentnih transportnih sistemih (ITS) v cestnem prometu -
Podrobna opredelitev meritev in ravni uspešnosti
Space - Use of GNSS-based positioning for road Intelligent Transport Systems (ITS) -
Metrics and Performance levels detailed definition
Detaillierte Definition von Metriken und Leistungsstufen
Espace - Utilisation de la localisation basée sur les GNSS pour les systèmes de
transport routiers intelligents - Définition détaillée des mesures et niveaux de
performance
This Slovenian standard is identical to: CEN/TR 17448:2020
ICS:
03.220.20 Road transport
33.060.30 Radio relay and fixed satellite communications systems
35.240.60 IT applications in transport
2003-01. Slovenski inštitut za standardizacijo. Reproduction in whole or in part of this standard is not permitted.

TECHNICAL REPORT
CEN/TR 17448
RAPPORT TECHNIQUE
TECHNISCHER BERICHT
March 2020
ICS 03.220.20; 33.060.30; 35.240.60

English version
Space - Use of GNSS-based positioning for road Intelligent
Transport Systems (ITS) - Metrics and Performance levels
detailed definition
Espace - Utilisation de la localisation basée sur les GNSS pour les systèmes de transport routiers intelligents - Définition détaillée des mesures et niveaux de performance
Detaillierte Definition von Metriken und Leistungsstufen
This Technical Report was approved by CEN on 13 January 2020. It has been drawn up by the Technical Committee
CEN/CLC/JTC 5.
CEN and CENELEC members are the national standards bodies and national electrotechnical committees of Austria, Belgium,
Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy,
Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Romania, Serbia,
Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey and United Kingdom.

CEN-CENELEC Management Centre:
Rue de la Science 23, B-1040 Brussels
© 2020 CEN/CENELEC All rights of exploitation in any form and by any means reserved worldwide for CEN national Members and for CENELEC Members.
Ref. No. CEN/TR 17448:2020 E
Contents Page
European foreword . 3
1 Scope . 4
2 Normative references . 4
3 Terms and definitions . 4
4 List of acronyms . 4
5 Review of EN 16803-1 Performance Metrics . 5
5.1 Potential Improvements of unstable definitions . 5
5.2 Completion With Additional Metrics . 13
5.3 Justification of the choice of percentiles . 16
6 GBPT Performance Classification . 22
6.1 General . 22
6.2 Classification logic . 24
6.3 Identification of Performance Classes . 26
6.4 Indicative Performance Figures For Main Categories Of Road Applications . 31
7 Conclusions and Recommendations . 31
7.1 Purpose . 31
7.2 Improvements of Existing Definitions . 31
7.3 Removal of Existing Definitions . 33
7.4 Inclusion of New Definitions . 33
7.5 Choice of Percentiles . 33
7.6 Performance Classification Logic . 33
7.7 Performance Classes . 34
Annex A (normative) Performance metrics as per EN 16803-1 . 36
Bibliography. 41

European foreword
This document (CEN/TR 17448:2020) has been prepared by Technical Committee CEN/CLC/JTC 5 “Space”,
the secretariat of which is held by DIN.
Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. CEN shall not be held responsible for identifying any or all such patent rights.
1 Scope
This document constitutes the main deliverable from WP1.1 of the GP-START project. It is devoted to a
thorough review of the metrics defined in EN 16803-1 and proposes a performance classification for
GNSS-based positioning terminals designed for road applications. It will serve as one of the inputs
to the elaboration of prEN 16803-2:2019 and prEN 16803-3:2019.
This document should serve as a starting point for discussion within CEN/CENELEC/JTC 5/WG1 on a
consolidated set of performance metrics and associated classification logic. The proposals and
conclusions appearing in this document are therefore only preliminary.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
EN 16803-1:2016, Space - Use of GNSS-based positioning for road Intelligent Transport Systems (ITS) -
Part 1: Definitions and system engineering procedures for the establishment and assessment of
performances
3 Terms and definitions
For the purposes of this document, the terms and definitions given in EN 16803-1 apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at http://www.iso.org/obp
— IEC Electropedia: available at http://www.electropedia.org/
4 List of acronyms
ADAS Advanced Driver Assistance Systems
CAN Controller Area Network
CDF Cumulative Distribution Function
CEN Comité Européen de Normalisation — (European Committee for Standardization)
CENELEC Comité Européen de Normalisation Électrotechnique — (European Committee for Electrotechnical Standardization)
ECEF Earth Centred Earth Fixed
ETSI European Telecommunications Standards Institute
GBPT GNSS-Based Positioning Terminal
GNSS Global Navigation Satellite Systems
HPA Horizontal Position Accuracy
HPL Horizontal Protection Level
IMU Inertial Measurement Unit
ITS Intelligent Transport Systems
KOM Kick-Off Meeting
MEMS Micro Electro-Mechanical Systems
NMEA National Marine Electronics Association
PPP Precise Point Positioning
RTCA Radio Technical Commission for Aeronautics
RTK Real Time Kinematics
SPP Standard Point Positioning
TTFF Time To First Fix
5 Review of EN 16803-1 Performance Metrics
5.1 Potential Improvements of unstable definitions
5.1.1 Position accuracy metrics
5.1.1.1 Vectors vs their Norms
One thing that draws immediate attention when reviewing the metrics is some degree of ambiguity in
some of the definitions. For instance, the first Accuracy metric (EN 16803-1:2016, Table 1) refers to the
“3D position error”, which has not been explicitly defined anywhere in the document:
3D Position Accuracy is defined as the set of three statistical values given by the 50th, 75th and 95th
percentiles of the cumulative distribution of 3D position errors.
There is some discussion in EN 16803-1:2016, 3.2.1 regarding vector and scalar quantities, but no
explicit definition of the 3D position error is proposed. The position error (without the “3D” adjective)
is defined in EN 16803-1:2016, 4.3 as follows:
Position error: is the difference between the true position and the position provided by the positioning
terminal. It shall be understood as a vector expressed in some convenient local reference frame (e.g. local
horizontal frame).
This definition explicitly states that the position error shall be understood as a vector quantity. Then,
the use of the expression “3D position error” in the definition of the metric seems to emphasize the
vector character of the position error, which may be misleading since the metric actually refers to the norm of the position error vector, which is a scalar quantity.
The same concern can be raised about the horizontal position error. It is therefore recommended to
include explanations on the meaning of expressions such as “3D position error” and “horizontal position
error” and “horizontal position error”, making it clear that they refer to norms of vectors rather than vectors. Note that footnote 5 in EN 16803-1:2016, A.2.1 contains such a clarification for the case of the horizontal
position error, but a footnote in an annex may not be the best place for it (besides, the expression “it is
recalled” seems to indicate that the definition was written in some other, more prominent place within
the document and later removed).
NOTE The norm of a vector is not uniquely defined. To overcome this problem, it could be further specified
that the norm of interest is the Euclidean norm (square root of the sum of squared coordinates) of the vector when
expressed in a linear (and orthonormal) coordinate system. Suppose, for instance, that the position is expressed
in geodetic coordinates (latitude, longitude and height) and the position error is expressed as a latitude error, a
longitude error and a height error. The square root of the sum of the squares of these 3 quantities has no physical
meaning, and is not what is meant in the above proposed definition. It could be worth including this sort of consideration in the standard.
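As an informative illustration of this point (not part of EN 16803-1; function and variable names are purely illustrative), the following Python sketch maps a geodetic position error to a local East/North/Up frame under a small-error, spherical-Earth approximation before taking Euclidean norms:

```python
import math

def position_error_norms(lat_true_deg, lon_true_deg, h_true,
                         lat_est_deg, lon_est_deg, h_est):
    """Map a geodetic position error to a local East/North/Up frame
    (small-error, spherical-Earth approximation) and return the
    horizontal and 3D error norms in metres."""
    R = 6371000.0  # mean Earth radius (approximation)
    lat0 = math.radians(lat_true_deg)
    d_lat = math.radians(lat_est_deg - lat_true_deg)
    d_lon = math.radians(lon_est_deg - lon_true_deg)
    e = R * math.cos(lat0) * d_lon   # East error (m)
    n = R * d_lat                    # North error (m)
    u = h_est - h_true               # Up error (m)
    return math.hypot(e, n), math.sqrt(e * e + n * n + u * u)
```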
A related remark (although not concerned with performance metrics) is on the identification of the
GBPT outputs made in EN 16803-1:2016, 4.2, which may require some review and perhaps include
attitude parameters (e.g. heading) or make some additional considerations on the reference frame used
to represent position and velocity (e.g. horizontal velocity could be represented in polar coordinates as
a pair consisting of speed and heading).
5.1.1.2 Along Track and Cross Track Components
Another potential issue that has been detected is the fact that the expressions “along track” and “cross track” are undefined, leaving the definitions of “along track” and “cross track” position accuracy somewhat ambiguous. It is recommended to include the definitions of these terms somewhere in the document, especially considering that there is no general agreement as to their meanings. Note that these terms have their roots in aeronautics and astronautics, and have been widely used to describe the motion of space vehicles, such as artificial satellites, especially when in orbit around the Earth. Each satellite is assigned a body-centred orthogonal reference frame with axes pointing:
— in the satellite’s direction of motion;
— in the direction orthogonal to the orbital plane;
— in the direction orthogonal to the previous 2.
However, since most orbits are nearly circular, the third direction is roughly pointing to the centre of
the Earth, and in some cases, this is how the third axis is defined, implying a slight misalignment of the
first with respect to the satellite’s direction of motion. Besides, the direction of motion is not well defined
unless the satellite’s trajectory is referred to an external (not body-centred) reference frame, such as
one with origin at the centre of the Earth. Depending on how this external frame is chosen (e.g. an inertial
frame vs one which rotates with the Earth), the satellite’s direction of motion may be different.
In road applications the situation is also somewhat complicated. It may seem natural to define the along
track direction as the one parallel to the vehicle’s velocity vector, but caution shall be taken as to the
reference frame used to define the vehicle’s motion. A natural choice would be an Earth-centred, Earth-
fixed (ECEF) frame, such as WGS84. Of course, when the vehicle is standing still, the along track direction
is not well defined using the velocity vector (which in this case is the null vector), but still the last along
track direction computed before the vehicle stopped could be used (besides, there is no actual “track”
when the vehicle is not moving, so the along track and cross track errors may not make much sense in
that case either). However, there is still the problem of defining the cross-track direction, and now there
is no such thing as an orbital plane. Among all directions orthogonal to the along track axis, a natural
choice seems to be the one lying on the horizontal plane (well defined unless the vehicle’s motion is
purely vertical, which is an extremely unlikely situation in road applications). Another natural option
seems to be the one lying on the local road plane, which may differ from the horizontal plane due to road
banking. This second option may be of interest when an inertial measurement unit (IMU) is involved in
the navigation process, as the local road plane is nearly fixed with respect to the IMU axes. However, the
first option seems better for most implementations as it does not require any prior knowledge of the
road geometry or of the vehicle’s attitude. There’s yet a third option to be considered in which the cross-
track direction is the one defined by the normal acceleration vector, but this has an important drawback,
namely that the normal acceleration is nearly zero when in low-dynamics situations (such as driving
along a nearly straight road or a highway). Hence the first option continues to seem the most convenient
one. With this in mind, the following definition is proposed:
Along track and cross track components are coordinates in a reference frame whose definition is based on the vehicle’s true velocity vector $\vec{v}$ (relative to some ECEF reference frame) and the local upward unit vector $\vec{\eta}$. Namely, the said reference frame is defined by the following three orthogonal unit vectors: $\vec{\tau} = \vec{v} / \lVert \vec{v} \rVert$, $\vec{n} = \vec{\eta} \times \vec{v} / \lVert \vec{\eta} \times \vec{v} \rVert$ and $\vec{b} = \vec{\tau} \times \vec{n}$. The along track and cross track components of a vector $\vec{\varepsilon}$ attached to the user’s position (such as the position error vector) are then defined as the scalar products $\vec{\varepsilon} \cdot \vec{\tau}$ and $\vec{\varepsilon} \cdot \vec{n}$, respectively.
NOTE 1 The vector $\vec{n}$ as defined above corresponds to the first of the 3 options previously discussed: it is orthogonal to the along track direction (given by $\vec{v}$) and lies on the horizontal plane (as it is orthogonal to $\vec{\eta}$).
NOTE 2 The notation used to define the reference frame is commonly used to denote the so-called Frenet trihedron, although the reference frame defined above and the Frenet trihedron are not exactly the same (rather, the Frenet trihedron would correspond to the third option, which has been readily discarded).
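As an informative illustration of the proposed definition (not part of the standard; names are illustrative and all vectors are assumed to be expressed in the same ECEF frame), the following sketch computes the along track and cross track components of an error vector:

```python
import numpy as np

def along_cross_track_components(error_vec, true_velocity, up_unit):
    """Project an error vector onto the along track and cross track axes
    defined by the true velocity vector and the local upward unit vector."""
    v = np.asarray(true_velocity, dtype=float)
    eta = np.asarray(up_unit, dtype=float)
    eps = np.asarray(error_vec, dtype=float)

    tau = v / np.linalg.norm(v)              # along track unit vector (undefined if v = 0)
    n = np.cross(eta, v)
    n = n / np.linalg.norm(n)                # cross track unit vector, lying on the horizontal plane
    b = np.cross(tau, n)                     # completes the triad (not needed for the two components)

    return float(eps @ tau), float(eps @ n)  # along track and cross track components
```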
5.1.2 Velocity Accuracy Metrics
The same considerations made in 5.1.1 with regard to position accuracy metrics can be directly applied
to velocity accuracy metrics, in particular those regarding the ambiguity in the use of expressions such
as 3D or horizontal, along track and cross track, etc. Also, identical recommendations are made and
analogous rewordings are proposed.
In addition, it has been pointed out that 3D and horizontal velocity accuracy metrics may not be relevant
and could be deleted. It has also been pointed out that there may be some redundancy between 3D
velocity accuracy and speed accuracy. At this point it is worth discussing the difference between both.
Suppose that the true velocity vector (expressed in some orthonormal coordinate system, such as the local horizontal system with coordinates along the East, North and up axes) is $\vec{v}_t = (1, 0, 0)$ and that the estimated velocity is $\vec{v}_e = (-1, 0, 0)$. Then the speed error would be $\lVert \vec{v}_e \rVert - \lVert \vec{v}_t \rVert = 1 - 1 = 0$, whereas the 3D velocity error (in norm) would be $\lVert \vec{v}_e - \vec{v}_t \rVert = \lVert (-2, 0, 0) \rVert = 2$, thus illustrating the difference between both concepts. The underlying idea is that the norm of the error is not the same thing as the error of the
norm, which is a consequence of the Triangle Inequality illustrated in Figure 1. Speed accuracy refers to
the error of the norm, whereas 3D velocity accuracy refers to the norm of the error, so this shows that
3D Velocity and Speed metrics proposed in EN 16803-1 are not redundant.

Figure 1 — Triangle Inequality
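The distinction can be checked numerically; the following sketch simply reproduces the example above (values purely illustrative):

```python
import numpy as np

v_true = np.array([1.0, 0.0, 0.0])   # true velocity (East, North, Up)
v_est = np.array([-1.0, 0.0, 0.0])   # estimated velocity

speed_error = np.linalg.norm(v_est) - np.linalg.norm(v_true)  # error of the norm -> 0.0
velocity_error_3d = np.linalg.norm(v_est - v_true)            # norm of the error -> 2.0
```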
The question remains as to whether all of them (and in particular those of 3D and horizontal velocity accuracy) are of relevance. Relevance is hard to assess, and rather subjective. Many of the metrics proposed in EN 16803-1 could be questioned in terms of their relevance (it seems difficult to think of an application which makes use of the East Velocity Protection Level, to give an example). However, they have been included for completeness. Whether or not they should be deleted is a relevant discussion that goes beyond the scope of this study, but we can envisage the following:
— the 3D Velocity Accuracy metric could be removed;
— the Horizontal Velocity Accuracy metric could be transformed into the Horizontal Speed Accuracy metric (then moved to the “Speed” section of the table, where it would then make sense to distinguish between 3D and Horizontal), with the horizontal speed understood as the norm of the horizontal projection of the velocity vector.
5.1.3 Integrity Metrics
5.1.3.1 General
Similar issues have been detected as with accuracy metrics. Namely:
— 3D/Horizontal Protection Levels have not been explicitly defined (neither for position nor for
velocity). However, if 3D and horizontal errors are properly defined as norms of the corresponding
vectors (following the recommendations made in 5.1.1), then expressions such as 3D and Horizontal
Protection Levels can be assumed to be self-explanatory without the need of explicit definitions;
— 3D/Horizontal Integrity Risk definitions contain references to such things as 3D/Horizontal
position (or velocity) errors, which are undefined. This would be solved by implementing the
recommendations stated in 5.1.1;
— along track, cross track, etc. are undefined. This would be solved by implementing the
recommendations stated in 5.1.1;
— there is a 3D Velocity Protection Level metric, but there is no Speed Protection Level Metric, which
is probably more interesting. In this regard it is proposed to turn 3D and Horizontal Velocity
Protection Level metrics into 3D and Horizontal Speed Protection Level metrics, and then move
them to a new “Speed” section within the table (much in the same way as in the Accuracy Metrics
table).
5.1.3.2 Percentile Computation Procedure
It is stated in EN 16803-1:2016, A.2.1 that Accuracy metrics shall not take into account those epochs in
which the output of interest (e.g. horizontal position) is not provided by the GBPT. However, in
EN 16803-1:2016, A.2.2.2 it is said that Protection Level performance metrics shall include those epochs
in which there is no protection level, which shall be understood as if the protection level was infinite.
These two approaches are exactly opposite and it seems contradictory to adopt one of them for accuracy
and the other one for protection levels. We briefly discuss both approaches here using an example and
show a few of their advantages and drawbacks. The example addresses the case of protection level
percentile computation, but the same ideas apply to error percentiles (and hence to Accuracy metrics).
Suppose that 10 % of the time there is no position output or no associated protection level and that the CDF of the protection levels is computed taking into account all epochs. Then the maximum value of the protection level, which would normally correspond to the 100th percentile, will rather correspond to the 90th percentile (protection levels smaller than or equal to the maximum have been obtained only 90 % of the time, since there is another 10 % without protection levels). Actually, this way of reasoning shows that the whole CDF plot shrinks by a factor 0,9 (with respect to the one computed using only epochs with a protection level) along the ordinate axis. This is illustrated in Figure 2. As a result, all percentiles up to the 90th yield higher values. In particular, the 95th percentile is undefined.

Figure 2 — CDF considering all epochs or only those with Protection Levels
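The following informative sketch (names illustrative; epochs without a protection level are assumed to be marked with NaN) shows how the two computation approaches lead to different percentile values:

```python
import numpy as np

def pl_percentiles(protection_levels, percentiles=(50, 75, 95), use_all_epochs=True):
    """Protection level percentiles, either over all epochs (a missing PL is
    treated as infinite) or only over epochs with a PL output (NaN = no PL)."""
    pl = np.asarray(protection_levels, dtype=float)
    if use_all_epochs:
        pl = np.where(np.isnan(pl), np.inf, pl)   # missing PL understood as infinite
    else:
        pl = pl[~np.isnan(pl)]                    # discard epochs without a PL
    return {p: float(np.percentile(pl, p)) for p in percentiles}
```

With 10 % of the epochs lacking a protection level, the all-epochs variant returns an infinite 95th percentile, which corresponds to the undefined percentile discussed above, while the lower percentiles come out higher than in the valid-epochs-only variant.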
Each approach presents different advantages and drawbacks. If all epochs are considered, the CDF can
be used to assess the Protection Level size and Protection Level Availability in a single plot (note that
Protection Level Availability metrics have not been defined in EN 16803-1:2016, see 5.2.1). Likewise, in
the case of error percentiles, Accuracy and Availability could be assessed in one single plot, which may
be seen as an advantage. As a drawback, both metrics (accuracy and availability) get coupled,
complicating validation, certification and comparison of different solutions. Besides, some of the percentiles defining the accuracy metric (such as the 95th in the preceding example) may not be computable. Benefits of both approaches are summarized in Table 1.
Table 1 — Benefits of both percentile computation approaches
Protection Level Performance:
— Use all epochs: PL size and availability are assessed in a single plot; PL availability can be assessed without the need of additional availability metrics.
— Only epochs with a valid output: PL size and availability are decoupled; all PL size percentiles are well-defined.
Accuracy:
— Use all epochs: Accuracy and availability are assessed in a single plot; availability can be assessed without the need of additional availability metrics.
— Only epochs with a valid output: Accuracy and availability are decoupled; all error percentiles are well-defined.
Regardless of the final decision, it is recommended to emphasize in the document the approach to be
taken in each case, perhaps in a more prominent place than Annex A (where such explanations are
currently placed), in order to avoid misunderstandings. This is especially encouraged if it is decided to
keep using different approaches for Accuracy and Integrity metrics.
As to the question of how a “valid” output is to be understood (in order to filter out epochs without a valid
output when that is the selected approach), the proposed answer is to consider an output valid when no
flag indicates otherwise. For instance, assuming that the NMEA standard is used to output the data, any
value of the “fix quality” flag in a GGA sentence other than 0 would indicate a valid position.
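As an informative illustration of this validity criterion (illustrative code, not part of any standard; checksum verification omitted for brevity), the sketch below extracts the fix quality field from a GGA sentence:

```python
def is_valid_gga_position(sentence: str) -> bool:
    """Return True if the 'fix quality' field of a GGA sentence is non-zero."""
    fields = sentence.split('*')[0].split(',')       # drop the checksum, split the fields
    if len(fields) < 7 or not fields[0].endswith('GGA'):
        return False
    quality = fields[6]                              # 7th field: fix quality (0 = invalid)
    return quality.isdigit() and int(quality) != 0
```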
5.1.3.3 Integrity Risk Computation Procedure
Similar to what has been pointed out previously, a minor concern can be raised about Integrity Risk
metrics, which refer to probabilities whose computation procedure has not been clearly specified. For
the sake of clarity let us focus on position (rather than velocity) Integrity Risk metrics, although all that is said here can be applied to any of the Integrity Risk metrics. As previously, we are faced with the
decision whether to consider all epochs or those with a valid position and an associated Protection Level.
When no position and no PL exist, that could be counted as a “safe” epoch (one with no integrity event
taking place). The same could be said of an epoch with a position and without a PL. Therefore, those
epochs could either be discarded or considered in the IR computation as safe epochs. If they are
discarded, the results will be slightly worse (by a factor 1/0,9 ≈ 1,1, taking the example of 5.1.3.2) than if they are taken into consideration. However, the impact in the case of the Integrity Risk is somewhat smaller, as a 1,1 factor (taking again the example of 5.1.3.2) applied to an IR figure which is already very small (e.g. 1E-6) does not make much of a difference, especially considering that such
small figures can hardly be measured to a high degree of accuracy. Nonetheless, it is recommended to
specify whether only epochs with a valid position and PL are to be considered in the computation of
the Integrity Risk.
Although either approach can be acceptable, we are inclined in this case to consider only epochs with
valid outputs, which is a slightly more conservative approach than the other one (as it yields slightly
worse integrity risk figures).
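An informative sketch of this convention follows (names illustrative; each epoch record is assumed to carry the position error and the protection level, with None marking a missing output):

```python
def integrity_risk(epochs, use_only_valid_epochs=True):
    """Fraction of epochs whose position error exceeds its protection level.

    Each epoch is an (error, protection_level) pair; None marks a missing output."""
    if use_only_valid_epochs:
        epochs = [(e, pl) for e, pl in epochs if e is not None and pl is not None]
    if not epochs:
        return float('nan')
    events = sum(1 for e, pl in epochs
                 if e is not None and pl is not None and e > pl)   # integrity events
    return events / len(epochs)
```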
5.1.4 Availability Metrics
Two availability metrics have been defined in EN 16803-1, both identical except that one refers to
position and the other one to velocity/speed. In both cases Availability is defined in terms of the
existence of the output of interest (either position or velocity/speed) but nothing is said as to the criteria
that an output shall meet to be acceptable and thus be taken into account when computing Availability
figures. Since no criteria are specified, it can be interpreted as if all outputs are acceptable, which would
allow for unwanted situations such as a GNSS-standalone GBPT which keeps outputting (at its nominal
rate) the last known position after a complete loss of GNSS signal (e.g. inside a tunnel) and those outputs
being added to the availability count.
In order to avoid this kind of situations it is proposed to reword these metrics specifying that only valid
outputs are to be taken into account. For example, Position Availability could be reworded as follows:
Position Availability (T) is the percentage of operating time intervals of length T during which the
positioning terminal provides at least one valid position output
Note that this wording differs from the original one only in the addition of the word “valid”. This should be accompanied by an explanation somewhere in the document as to what is meant by a
“valid” output. In line with what was said at the end of 5.1.3.2, this could be understood as an output that
is accompanied by a flag (similar to the “fix quality” flag that can be found in the GGA sentence of the
NMEA protocol) indicating that the output is healthy (or not indicating otherwise). Similar criteria could
be followed in the case of velocity and/or speed.
In order to make things simple, it could be established that one single flag be used to indicate the health
status of all provided outputs (position, velocity, speed or protection levels if they are also provided)
and that a value of such flag indicating good health shall be understood as all outputs being okay.
Otherwise, maybe the existing standard protocols (such as NMEA) would have to be modified to include
health status flags for the different outputs (not only for the position). However, for terminals which provide protection levels the NMEA protocol may already be insufficient, so the protocol may need some evolution anyway.
It may also be worth specifying how the time intervals of length T are to be handled when computing Availability (T) figures, namely whether such time intervals shall be contiguous or may overlap and, if they may, what overlap is allowed. If the above definition is taken strictly, each
possible time interval of length T should be considered, which implies a sliding T-window whose offset
takes values in a continuum (rather than a discrete set of possible displacements). The computation of
the Availability (T) figure would therefore imply some integral calculus, not difficult conceptually but
rather cumbersome. However, when T is small compared to the time length of the data set at hand (a
desirable situation for the sake of statistical significance), the Availability (T) figure will show low
sensitivity to allowed overlapping, yielding similar values in all cases, from the continuous case to the
contiguous one (the latter obviously being computationally much simpler than the former). Whatever approach is taken, it is important that everybody takes the same one, and hence it should be specified in the
standard. For the reasons just explained it is proposed to use contiguous T-windows for Availability (T)
computation, and to specify that the first T interval shall start at the first epoch of the data set under
consideration.
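An informative sketch of the proposed contiguous-window convention follows (names illustrative; the input is assumed to be the list of epoch times at which a valid output was produced):

```python
def availability(valid_epoch_times, dataset_start, dataset_end, window_length):
    """Percentage of contiguous windows of length T that contain at least one
    valid output; the first window starts at the beginning of the data set."""
    n_windows = int((dataset_end - dataset_start) // window_length)
    if n_windows == 0:
        return float('nan')
    hit = [False] * n_windows
    for t in valid_epoch_times:
        k = int((t - dataset_start) // window_length)
        if 0 <= k < n_windows:          # epochs beyond the last complete window are ignored
            hit[k] = True
    return 100.0 * sum(hit) / n_windows
```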
5.1.5 Timing Metrics
5.1.5.1 General
It has been pointed out that the Timestamp Resolution metric might not really be a metric, but rather a
feature of the GBPT. It is, certainly, a GBPT feature, and can be easily observed by a simple inspection of
the GBPT output. The reason to include it in the list of performance metrics seems to be the fact that it
may have an impact on performance.
For instance, when a GBPT combines GNSS with an IMU (or any sensor which delivers data at a high
rate, such as car odometers), the synchronisation of the data coming from the different sensors is usually
a delicate matter. Matching the GNSS fix (or set of raw measurements) with the right piece of data from
the sensor’s stream is critical for a smooth and accurate functioning of the navigation system. A poor
timestamp resolution of the GNSS output may therefore degrade navigation performance.
The above example addresses the importance of the GNSS timestamp resolution, but it can be argued
that the metric does not refer to the GNSS timestamp but to the overall timestamp delivered by the GBPT
after obtaining its navigation solution (which in the above example would be a GNSS/IMU hybrid
solution). The fact that the GBPT timestamp has poor resolution does not imply that the GNSS timestamp
used internally by the GBPT to combine the GNSS and IMU data is also coarse. However, the application layer that uses the GBPT position as input may also require high resolution and/or accuracy of the
timestamp (e.g. suppose it is to be used by a collision avoidance system).
It shall be noted that, following this line of reasoning, other features of the receiver may need to be
included as metrics, including:
— position output resolution (e.g. 1 m could be enough for road tolling but insufficient for autonomous
driving);
— velocity/speed output resolution;
— protection level resolution;
— output latency (e.g. a road tolling application may wait for a long time to get the GBPT output, but a
safety-critical one may not);
— output rate (e.g. autonomous driving may require higher output rate than road tolling).
On the other hand, Output Latency and Output Rate stability have been said to be not relevant. Again,
deciding about the relevance of a metric is no easy task. There is no technical specification known to the
authors of this document in which these parameters are specifically addressed, but it could very well be
the case that such specifications exist within the automotive industry, especially for systems involving
data exchange through the CAN bus. Unfortunately, CAN communications are far from standard, and
specifications are kept secret by manufacturers. However, output rate and latency stability may be
important when several sensors shall be synchronised (e.g. in hybrid navigation systems). Sensor
outputs are subject to delays which are usually calibrated. When these delays are not stable over time,
synchronisation issues can occur which have a direct impact on performance. Therefore, it is not clear
to the authors that these metrics should be removed. However, it could be proposed to take them out until their relevance can be confirmed (e.g. through consultation with manufacturers).
It has also been pointed out that timestamp accuracy, which was not included as a metric in EN 16803-1, may be relevant. One GP-START partner has reported problems processing PVT data obtained with a COTS mass-market receiver which delivered wrongly timestamped PVT data. However, it is agreed not to include timestamp accuracy as a metric based on the following considerations:
— timestamp inaccuracy will manifest itself as navigation inaccuracy as long as the vehicle moves;
— it is agreed that inaccurate or corrupted timestamping may lead to hybridization problems as far as
other sensors are concerned, but this will result in poor navigation accuracy of the hybridized
system;
— if timestamping errors do not result in navigation errors (e.g. in a stationary receiver), then they are
not relevant for ITS;
— along the lines of the previous point, timing applications, which may require accurate timestamping,
are not considered among ITS applications.
This decision could be revised if evidence is shown as to the common need of accurate timestamping at
ITS application level as a separate requirement, independent from positioning performance.
5.1.5.2 Time To First Fix
During GP-START internal review activities, some partners have pointed out that with regard to TTFF
metrics:
— rather than using 3 CDF values as proposed in EN 16803-1, simple average values along with
dispersion measures (e.g. standard deviation) may suffice for most purposes;
— references to cold, warm and hot start concepts should be avoided as they are not sufficiently well
defined. Instead, it is proposed to refer to the amount of time the GBPT has been inactive prior to
the start procedure, considering several different values (e.g. 3 values) for the length of this
inactivity period.
These recommendations are supported by all partners as well as by the customer, as they contribute to eliminating ambiguity and to simplifying concepts and procedures where additional complexity does not seem to be necessary. Therefore, in line with the above recommendations, the following TTFF metric definitions are proposed to replace the ones in EN 16803-1 (see Table A.4):
— Long-term TTFF is the average value, which shall be accompanied by the corresponding standard
deviation, obtained from 30 consecutive TTFF measurements taken immediately after a GBPT
inactivity period of one week.
— Mid-term TTFF is the average value, which shall be accompanied by the corresponding standard
deviation, obtained from 30 consecutive TTFF measurements taken immediately after a GBPT
inactivity period of one day.
— Short-term TTFF is the average value, which shall be accompanied by the corresponding standard
deviation, obtained from 30 consecutive TTFF measurements taken immediately after a GBPT
inactivity period of 1 min.
The definitions above shall be accompanied by the following explanatory notes:
— by consecutive it is meant that they are obtained by switching on the GBPT at consecutive seconds,
thus covering all possibilities within the length of a GPS navigation frame. This one second
displacement can be achieved by using a recorded signal and replaying it from different points in the
data set. This may pose, however, some other issues (e.g. coherency and/or synchronisation with
assistance services or additional sensors if they are used). It is noted that the start time within the
second (e.g. in the range of the millisecond) may also have an impact but it is considered of second
order.
— the 30 s period is chosen based on the GPS L1 C/A navigation data structure. Different values may need to
be considered for other constellations and/or signals (e.g. Galileo I/NAV on E1B may require up to
60 s). Still, the average and standard deviation could be estimated based on 30 consecutive TTFF
measurements (e.g. spaced 2 s in the case of Galileo I/NAV).
— it is to be understood that inactivity of the GBPT implies that no GNSS signals are being received
(even less stored in internal memory). However, this does not preclude the GBPT receiving
assistance data or any kind of information or aiding during startup. This may need to be taken into
account during TTFF testing.
— as with most GBPT performance metrics, TTFF may be influenced by the environmental conditions. TTFF performance figures for different environment types may be needed in order to properly characterize the GBPT (just as happens with other metrics, such as position accuracy).
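An informative sketch of the aggregation step follows (names illustrative; the acquisition of the individual TTFF measurements is outside the scope of the sketch):

```python
import statistics

def ttff_metric(ttff_measurements_s):
    """Average and standard deviation (s) of a set of TTFF measurements,
    e.g. 30 measurements started at consecutive seconds."""
    if len(ttff_measurements_s) < 2:
        raise ValueError("at least two measurements are needed")
    return statistics.mean(ttff_measurements_s), statistics.stdev(ttff_measurements_s)
```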
5.1.6 Additional Considerations
Consideration should be given to simplifying most of the metrics (including Accuracy, Integrity, Availability and Continuity) by turning them into generic concepts without explicit mention of the output parameter they refer to, also allowing for the possibility that they refer to a group formed by several output parameters.
This approach would simplify the standard’s logic and at the same time cover a wider range of
possibilities. Consider, for instance, that for the navigation service to be considered available, it may be
necessary to have not only a valid position output but also speed or protection levels, so an appropriate
definition of availability for some applications may require that not just a specific output be available,
but a group of them. Likewise, Integrity could require that several different parameters are kept within
their respective tolerances (for instance, in aviation, integrity requires that both the vertical and
horizontal errors be bounded by their corresponding protection levels simultaneously).
This would be complemented by a close review of 4.2 of the standard as suggested in 5.1.1.1, perhaps
including attitude parameters (e.g. heading) and/or making some additional considerations on the
coordinate system used to represent position and velocity (e.g. horizontal velocity could be represented
in polar coordinates as a pair consisting of speed and heading).
5.2 Completion With Additional Metrics
5.2.1 Protection Level Availability Metrics
Protection Levels were excluded deliberately from availability metrics based on the legitimate
considerations that:
— availability in the GNSS framework is usually understood as the existence of a position fix,
regardless of its integrity;
— protection level performance metrics implicitly include protection level availability since they are
computed based on all epochs and not only on those at which protection level outputs exist.
However, a number of reasons exist to take a different approach:
— availability can also be understood (and is indeed understood in some contexts) as availability of
the whole system or service of interest, including protection levels if they are part of it;
— since the Availability concept is already split into several metrics, including additional metrics
addressing protection level availability would not imply a big change to the current philosophy, nor
would it change the interpretation of existing ones. It is not necessary to reword the existing metrics
so that they refer to position (or velocity, speed, etc.) and its associated protection level, but just to
include new, independent metrics which refer to the existence of protection level outputs.
NOTE The new metrics should require the existence of both the protection level and the output it refers to (position, velocity, etc.) in order to be meaningful. But existing position (velocity, etc.) availability metrics would remain untouched and keep their current meanings (were it not for the changes proposed in 5.1.4).
— if protection level availability metrics are defined, the computation approach for protection level
percentiles could be changed to be the same used for error percentiles (i.e. using only “valid” epochs,
rather than all epochs). That would make metrics more consistent. It is desirable that percentiles
are computed in the same way whether they refer to errors or to protection levels, as it not only
avoids confusion, but also allows comparison of accuracy and protection level percentiles, which is
otherwise impossible;
— by computing protection levels using only “valid” epochs we would be decoupling protection level
performance from its availability, thus being consistent with the general approach of keeping
metrics as decoupled as possible.
Based on these considerations we recommend including protection level availability metrics, which could be based on the following sample wording:
Horizontal Position Protection Level Availability (T) is the percentage of operating time intervals of length
T during which the positioning terminal provides at least one valid pair of outputs constituted by a
horizontal position and its associated protection level.
For the above sample wording we have taken horizontal position protection levels, but analogous
statements are proposed for any other type of output and component thereof (e.g. horizontal velocity).
5.2.2 Continuity Metrics
Integrity metrics were included in EN 16803-1 based primarily on the needs of liability-critical
applications (such as road user charging). Now we are seeing an increasing interest in the use of GNSS
positioning for safety-critical applications such as ADAS or autonomous driving. As of now it is not clear
to what extent GNSS positioning will play a safety-critical role in those applications: it is possible that
the safety aspects are handled independently of the GNSS subsystem. For instance, detection of obstacles
and collision avoidance measures have a very weak dependence (if any) on GNSS positioning. At first
sight it seems that the most critical functions with regard to safety are more related to relative
positioning (with respect to nearby objects) than to absolute positioning. But there are reasons to
believe that GNSS navigation will also be affected by safety requirements. For instance, speed limit
supervision may be based on a speed limit database whose use requires some knowledge of the absolute
position.
When safety is at stake, continuity of the (positioning) service may become critical. A disruption to the
provision of position, velocity, speed
...
