Urban search and rescue - Guideline for the application of a test method for innovative technologies to detect victims in debris

This document specifies requirements and recommendations on the set-up of a field test and a test methodology for Urban Search and Rescue (USaR) equipment for the detection of victims under debris. A realistic field test is described for gathering the information needed to test, for example, a Soft Miniaturized Underground Robot (SMURF) or drones equipped with specialized sensors; this includes, for example, the preparation of debris cones made of different materials. Furthermore, a performance test method for each component and for the complete USaR system is described. The purpose of the test method is to specify the apparatuses, procedures and performance metrics necessary to quantitatively measure a search and rescue kit’s abilities.
This document is intended to be used by Urban Search and Rescue (USaR) equipment manufacturers and developers. It is not primarily intended to be used by first responders, although the user community benefits from the relevant guidelines being put in place.
This document discusses and provides guidelines on the following questions:
—   How to set up a test field for an innovative USaR kit?
—   What should be tested?
—   How should the testing be conducted?
—   Who should conduct the testing?
—   What is the minimum set of specifications for the technological tools?

Städtische Suche und Rettung - Leitfaden für die Anwendung eines Prüfverfahrens für innovative Technologien zur Erkennung von Opfern in Trümmern

Recherche et sauvetage en milieu urbain - Directive pour l’application d’une méthode pour tester les technologies innovantes de détection des victimes dans les débris

Iskanje in reševanje v mestih - Smernice za uporabo preskusne metode za inovativne tehnologije za odkrivanje žrtev v ruševinah


General Information

Status
Published
Publication Date
13-Dec-2022
Technical Committee
Current Stage
6060 - National Implementation/Publication (Adopted Project)
Start Date
07-Dec-2022
Due Date
11-Feb-2023
Completion Date
14-Dec-2022

Buy Standard

Standardization document
CWA 17947:2023
English language
25 pages
Technical report
TP CWA 17947:2023
English language
25 pages

Standards Content (Sample)

SLOVENSKI STANDARD
SIST CWA 17947:2023
01 February 2023
Iskanje in reševanje v mestih - Smernice za uporabo preskusne metode za
inovativne tehnologije za odkrivanje žrtev v ruševinah
Urban search and rescue - Guideline for the application of a test method for innovative
technologies to detect victims in debris
Städtische Suche und Rettung - Leitfaden für die Anwendung eines Prüfverfahrens für
innovative Technologien zur Erkennung von Opfern in Trümmern
Recherche et sauvetage en milieu urbain - Directive pour l’application d’une méthode
pour tester les technologies innovantes de détection des victimes dans les débris
This Slovenian standard is identical to: CWA 17947:2022
ICS: 13.200 Accident and disaster control (Preprečevanje nesreč in katastrof)
SIST CWA 17947:2023 en,fr,de
2003-01. Slovenski inštitut za standardizacijo (Slovenian Institute for Standardization). Reproduction of this standard, in whole or in part, is not permitted.

CEN WORKSHOP AGREEMENT
CWA 17947
November 2022

ICS 13.200
English version


Urban search and rescue - Guideline for the application of
a test method for innovative technologies to detect victims
in debris
This CEN Workshop Agreement has been drafted and approved by a Workshop of representatives of interested parties, the
constitution of which is indicated in the foreword of this Workshop Agreement.

The formal process followed by the Workshop in the development of this Workshop Agreement has been endorsed by the
National Members of CEN but neither the National Members of CEN nor the CEN-CENELEC Management Centre can be held
accountable for the technical content of this CEN Workshop Agreement or possible conflicts with standards or legislation.

This CEN Workshop Agreement can in no way be held as being an official standard developed by CEN and its Members.

This CEN Workshop Agreement is publicly available as a reference document from the CEN Members National Standard Bodies.

CEN members are the national standards bodies of Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France,
Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North
Macedonia, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and United Kingdom.



EUROPEAN COMMITTEE FOR STANDARDIZATION
COMITÉ EUROPÉEN DE NORMALISATION

EUROPÄISCHES KOMITEE FÜR NORMUNG

CEN-CENELEC Management Centre: Rue de la Science 23, B-1040 Brussels
© 2022 CEN All rights of exploitation in any form and by any means reserved worldwide for CEN national Members.


Ref. No.: CWA 17947:2022 E

Contents

European foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Test procedures for Urban Search and Rescue (USaR) equipment
4.1 General
4.2 Select technology to be tested
4.3 Roles and tasks in collaborative and field tests
4.4 Identify and define evaluation criteria
4.5 Define test scenario and use case
4.6 Documentation of the evaluation tests
5 Testing evaluation methodology development
5.1 General
5.2 Factors for choosing the evaluation methodology
5.2.1 Verification process
5.2.2 Validation process
5.2.3 Collaboration lab test or field test
5.3 Evaluation methodology
5.3.1 Collaborative lab test evaluation
5.3.2 Field test evaluation
5.3.3 Integration test evaluation
5.4 Key Performance Indicators
6 Tools and technologies
6.1 General
6.2 Levels of USaR team capacities
6.3 Checklist for selecting technical solutions
6.4 Categorisation of a typical USaR toolkit at present
6.5 Categories of novel tools and technologies candidates eligible for the USaR toolkit
6.6 Mapping of ASR levels with novel tools and technologies
Bibliography

European foreword
This CEN Workshop Agreement (CWA 17947:2022) has been developed in accordance with CEN-CENELEC Guide 29 “CEN/CENELEC Workshop Agreements – A rapid way to standardization” and with the relevant provisions of CEN/CENELEC Internal Regulations – Part 2. It was approved by a Workshop of representatives of interested parties on 2022-11-04, the constitution of which was supported by CEN following the public call for participation made on 2021-10-29. However, this CEN Workshop Agreement does not necessarily reflect the views of all stakeholders who may have an interest in its subject matter.
The final text of CWA 17947:2022 was submitted to CEN for publication on 2022-11-10.
Results incorporated in this CEN Workshop Agreement received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement number 832790 (CURSOR).
The following organizations and individuals developed and approved this CEN Workshop Agreement:
— ASTRIAL GmbH/ Evangelos Sdongos (Chairperson)
— Centre for Research and Technology Hellas (CERTH)/ Anastasios Dimou
— Commissariat à L’Energie Atomique et aux Energies Alternatives (CEA)/ Emmanuel Scorsone
— Defence Research and Development Canada (DRDC)/ Gerry Doucette
— Entente pour la Forêt Méditerranéenne (Valabre)/ Nathalie Bozabalian
— German Federal Agency for Technical Relief (THW)/ Tiina Ristmäe (Vice-Chairperson)
— Institute of Communication and Computer Systems (ICCS)/ Dimitra Dionysiou, Panagiotis Michalis
— International Security Competence Centre GmbH (ISCC)/ Friedrich Steinhäuser
— Netherlands Institute for Public Safety (NIPV)/ Theo Uffink
— Public Safety Community Europe (PSCE)/ Anthony Lamaudiere
— SINTEF/ Giacarlo Marafioti
— Tohoku University/ Satoshi Tadokoro
— University of Manchester/ Krishna Persaud
— Vicomtech/ Harbil Arregui
Attention is drawn to the possibility that some elements of this document may be subject to patent rights.
CEN and CENELEC policy on patent rights is described in CEN/CENELEC Guide 8 “Guidelines for
Implementation of the Common IPR Policy on Patent”. CEN shall not be held responsible for identifying
any or all such patent rights.
Although the Workshop parties have made every effort to ensure the reliability and accuracy of technical
and non-technical descriptions, the Workshop is not able to guarantee, explicitly or implicitly, the
correctness of this document. Anyone who applies this CEN Workshop Agreement shall be aware that
neither the Workshop, nor CEN, can be held liable for damages or losses of any kind whatsoever. The use
of this CEN Workshop Agreement does not relieve users of their responsibility for their own actions, and
they apply this document at their own risk. The CEN Workshop Agreement should not be construed as
legal advice authoritatively endorsed by CEN.
Introduction
In the face of natural or man-made disasters, search and rescue teams and other first responders such as police, medical units, civil protection or volunteers race against the clock to locate survivors within the critical 72-hour timeframe (Golden Hours), facing challenges such as unstable structures, hazardous environments and insufficient situational awareness – all resulting in lengthy search and rescue processes. In order to speed up the detection of survivors trapped in collapsed buildings and to improve working conditions for the first responders, the EU-funded research project CURSOR designed an innovative Search and Rescue Kit (CURSOR USaR Kit) based on drones, miniaturized robotic equipment, advanced sensors and incident management applications. The overarching aim of CURSOR is to develop a USaR kit that is easy and fast to deploy, reducing the time needed to detect and locate trapped victims in disaster areas. To make sure that these solutions meet the needs of the first responders in the field, the system was tested throughout the whole development process by first responders of the CURSOR consortium as well as by external practitioners (e.g. INSARAG secretariat, Regione Liguria, USaR NL, Bavarian Red Cross, Japan NRIFD). Several lab and small-scale field trials were conducted.
Against this background, the consortium identified the standardisation potential for this CEN Workshop Agreement, which describes a field test and the associated methodology for assessing the use of innovative technologies such as the USaR kit.
In this document, the following verbal forms are used:
— “shall” indicates a requirement,
— “should” indicates a recommendation,
— “may” indicates a permission,
— “can” indicates a possibility or capability.
1 Scope
This document specifies requirements and recommendations on the set-up of a field test and a test methodology for Urban Search and Rescue (USaR) equipment for the detection of victims under debris. A realistic field test is described for gathering the information needed to test, for example, a Soft Miniaturized Underground Robot (SMURF) or drones equipped with specialized sensors; this includes, for example, the preparation of debris cones made of different materials. Furthermore, a performance test method for each component and for the complete USaR system is described. The purpose of the test method is to specify the apparatuses, procedures and performance metrics necessary to quantitatively measure a search and rescue kit’s abilities.
This document is intended to be used by Urban Search and Rescue (USaR) equipment manufacturers and developers. It is not primarily intended to be used by first responders, although the user community benefits from the relevant guidelines being put in place.
This document discusses and provides guidelines on the following questions:
— How to set up a test field for an innovative USaR kit?
— What should be tested?
— How should the testing be conducted?
— Who should conduct the testing?
— What is the minimum set of specifications for the technological tools?
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
field test
test that is performed in near real-life conditions in collaboration between solution provider and end user
3.2
use case
intended use of a technology within an application
3.3
collaborative lab test
test that is performed in a laboratory-controlled environment in collaboration between solution provider
and end user
3.4
end user
person or group of persons that ultimately uses the evaluated technology, e.g. a first or second responder
3.5
search and rescue
use of specialised personnel and equipment to locate people in distress or in danger and remove them
from a place of actual or potential danger to a place of relative safety
Note 1 to entry: Urban search and rescue refers to scenarios in metropolitan areas.
[SOURCE: EN 17173:2020-09, definition 3.548, modified – added note]
3.6
personal protective equipment
special device or appliance designed to be worn or held by an individual for protection against one or
more health and safety hazards
[SOURCE: IEC 82079-1:2012, definition 3.27]
3.7
integration test
type of testing in which the different units, modules or components of a solution/technology are tested
as a combined entity
3.8
sniffer
device with the inherent capability to detect and analyse a variety of chemical substances
4 Test procedures for Urban Search and Rescue (USaR) equipment
4.1 General
The fundamental question Urban Search and Rescue (USaR) operators, industry solution providers and
interested stakeholders are trying to answer is: To what extent does the technology solution under
consideration address capability gaps articulated by the end users?
This assessment involves an iterative exchange of information between the solution provider and end
user on the instrument or device under consideration.
NOTE From the perspective of the end user, the INSARAG guidelines [1] will be a familiar way to help frame
the various roles, responsibilities, detailed operating procedures, and doctrine such as the ‘INSARAG marking and
signalling system’ during actual USaR operation.
For their part, the end users should articulate and cite any standards or other objective measures of performance that they perceive to be relevant to how the offered solutions may perform in the USaR environment. The procedure for any lab or field test is potentially complex, requiring resource-intensive planning, implementation and follow-up activities.
This document positions end users to measure the capabilities necessary to perform the operational tasks they have defined. Standardised test approaches encourage evaluations of the performance of USaR technologies in a realistic environment.
This clause is structured as follows:
— Select technology to be tested
— Identify test environment (lab or field)
— Identify and define evaluation criteria
— Define test scenario (e.g. earthquake, floods) and use case (detailed description of the test set-up)
— Define documentation
4.2 Select technology to be tested
The first step is determining and selecting the technologies for the evaluation test.
Who determines the technologies for testing depends on the evaluation test objective and intended
audience of the results.
If the test takes place for commercialisation purposes, the solution provider determines the specific technologies and functionalities to be tested.
EXAMPLE The technology to be tested is a ground robot and the functionality to be tested is its mobility.
4.3 Roles and tasks in collaborative and field tests
The following table defines roles and tasks during the test that assesses whether a technology solution under consideration addresses capability gaps articulated by end users.
Table 1 — Roles and tasks in collaborative lab tests and field tests
Solution provider
— Tasks in collaborative lab tests: Provides the location and the technology. Demonstrates the solution. Explains the functionalities. Actively supports the test coordinator with test preparations.
— Tasks in field tests: Provides the solution. Explains the testing purpose. Provides the basic training for the end user. Actively supports the test coordinator with test preparations.
End user
— Tasks in collaborative lab tests: Observes the technology demonstration or participates hands-on if applicable. Provides feedback about the test based on the provided evaluation method. Actively supports the test coordinator with test preparations.
— Tasks in field tests: Hosts the test. Defines the requirements, scenario and use case. Sets up the testing site. Makes sure that suitable end user profiles are considered when choosing the test participants (e.g. for drone tests, certified pilots shall be chosen). Conducts the hands-on testing. Provides feedback about the test based on the provided evaluation method. Actively supports the test coordinator with test preparations.
Test coordinator*
— Tasks in collaborative lab tests: Coordinates the preparations and communication between solution provider and end user. Informs the participants about the agenda and test aims. Provides all the relevant templates and forms for the test evaluation, together with the end user and solution provider. Coordinates the evaluation.
— Tasks in field tests: Coordinates the preparations and communication between solution provider and end user. Informs the participants about the agenda and test aims. Provides all the relevant templates and forms for the test evaluation, together with the end user and solution provider. Coordinates the evaluation.
Observers
— Tasks in collaborative lab tests: Observes the test. Provides feedback, if required.
— Tasks in field tests: Observes the test. Provides feedback.
* In some countries (e.g. United States or Canada) there are third-party organisations that are able to take over the test organisation and implementation completely. They also have facilities that provide the necessary structures for field testing.
Collaborative lab tests take place on the solution provider’s premises and serve the purpose of early feedback from the end user. Collaborative lab tests are in most cases technology demonstrations, but if the maturity of the technology allows, end users can also test them hands-on.
The solution provider demonstrates the technology and explains the development and functionalities during the collaborative lab tests. End users’ feedback shall be collected and documented.
Field tests usually take place at emergency forces’ exercise sites, which require the use of personal protective equipment (PPE). Every test shall have a dedicated safety officer, who instructs the participants before entering the testing site and monitors the safety conditions throughout the test. If necessary, the test shall be stopped to make sure that the testing ground is safe for all the participants. Special attention shall be given to safety when unmanned aerial vehicles are tested. The safety protocol shall be agreed upon between the test partners before the field test, considering the nature of the test and the technologies tested.
4.4 Identify and define evaluation criteria
The identification and definition of evaluation criteria is a critical task of the end users. Criteria can be
categorised into:
— functional (e.g. mobility, usability, deployability etc.), and
— non-functional requirements (e.g. affordability, maintenance etc.).
Followed by identifying the operational requirements.
Each evaluation criterion has to be prioritised and weighted.
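As an informative illustration only, the following minimal Python sketch shows one possible way to record prioritised, weighted criteria and compute a weighted score; the criteria, weights and scores are hypothetical examples, not values prescribed by this document.

# Criterion name -> (category, weight); the weights express the agreed
# priorities and are assumed here to sum to 1.0.
criteria = {
    "mobility":      ("functional", 0.30),
    "usability":     ("functional", 0.25),
    "deployability": ("functional", 0.20),
    "affordability": ("non-functional", 0.15),
    "maintenance":   ("non-functional", 0.10),
}

# Scores awarded during the test, e.g. on a 0-5 scale (hypothetical values).
scores = {
    "mobility": 4,
    "usability": 3,
    "deployability": 5,
    "affordability": 2,
    "maintenance": 3,
}

assert abs(sum(w for _, w in criteria.values()) - 1.0) < 1e-9

weighted_total = sum(weight * scores[name]
                     for name, (_, weight) in criteria.items())
print(f"Weighted score: {weighted_total:.2f} / 5")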
NOTE Supporting material for defining the requirements can be found on the International Forum to Advance
First Responders Innovation (IFAFRI) webpage [2]. IFAFRI has defined ten first responder capability gaps and those
gap descriptions also include requirements for the technology considered in the respective gap.
In addition to functional and non-functional requirements, it may be relevant to consider regulatory authorities that may have a role in approving the use of a solution in their respective jurisdictions. These authorities may be separate from the intended customers themselves. Some jurisdictions may insist that equipment, devices, or apparatus designed for a particular part of the fire-fighting domain comply with national standards.
EXAMPLE National Fire Protection Association (NFPA) standards.
These standards or codes may be voluntary or prescribed in laws, regulations or local procurement rules.
EXAMPLE A fire service or regulatory authority may make it obligatory that thermal imagers comply with
NFPA 1801 Standard on Thermal Imagers for the Fire Service. It is then necessary to design scenarios and use cases
in which the equipment will be used by the responder evaluators in the assessment.
4.5 Define test scenario and use case
Based on the technologies chosen, test aims and requirements identified, the test scenario and use cases
are designed.
8

---------------------- Page: 10 ----------------------
SIST CWA 17947:2023
CWA 17947:2022(E)
The test scenario shall indicate in what kind of disaster the equipment will be used (e.g. earthquake, floods).
The use case should specify the concrete application of the technology (e.g. type of building, building materials, time of day, duration).
Use cases provide a more detailed description of the test set-up. Given the risks and hazards present in a USaR operating environment, the vantage point(s) or positioning of the end user in the response environment should be specified. For instance, some end users will be in situ, some will operate from a safe stand-off vantage point, and other consumers of the solution’s information may be located at command and control or partner vantage points.
NOTE For USaR technology tests it is useful to consider the INSARAG Guidelines, which determine the process flow during a deployment. In addition to the activities of end users during a deployment, the INSARAG Guidelines may illuminate the possible roles of logistics, information technology support, and communications personnel during use case testing. The mission has been divided into five Assessment, Search and Rescue (ASR) levels; each level can be considered as one use case.
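As an informative illustration, a scenario and use case could be captured as structured data along the lines of the following Python sketch; the field names and values are hypothetical assumptions, not requirements of this document.

# Hypothetical scenario and use case description following 4.5.
scenario = {
    "disaster_type": "earthquake",               # kind of disaster
    "location": "emergency forces exercise site",
}

use_case = {
    "technology": "drone with specialized sensors",
    "building_type": "collapsed residential building",
    "building_materials": ["concrete", "brick"],
    "time_of_day": "night",
    "duration_minutes": 45,
    # Vantage point(s) of the end users in the response environment:
    "end_user_positions": ["in situ", "safe stand-off", "command and control"],
    "asr_level": 2,                              # one ASR level per use case
}

print(f"Scenario: {scenario['disaster_type']}, "
      f"use case at ASR level {use_case['asr_level']}")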
4.6 Documentation of the evaluation tests
Evaluation tests shall be documented so that the data collected is captured and provides input for further research and development. The reports typically provide an overview of the tests conducted and present results as well as weighted scores. The test report should differentiate the results based on the test nature (verification or validation). Validation tests are used to confirm solution provider claims, which are of interest to end users making acquisition or operational decisions.
Table 2 — Example of test documentation
Test procedure:
Test ID:
Functionality to be tested:
Required test environment:
Overview of the test procedure:
No. | Requirement description | Pass/Fail/Undefined | Verification | Validation | Comments
Date of execution:
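For test teams that capture results electronically, the template of Table 2 could be represented, for example, by the following Python sketch; the class and field names and the example entry are hypothetical, not part of this document.

from dataclasses import dataclass, field

@dataclass
class RequirementResult:
    number: int
    description: str
    result: str          # "Pass", "Fail" or "Undefined"
    verification: bool   # result obtained in a verification test?
    validation: bool     # result obtained in a validation test?
    comments: str = ""

@dataclass
class TestRecord:
    procedure: str
    test_id: str
    functionality: str
    environment: str     # e.g. "collaborative lab" or "field"
    overview: str
    date_of_execution: str
    results: list = field(default_factory=list)

record = TestRecord(
    procedure="Mobility test on a debris cone",
    test_id="FT-001",
    functionality="Ground robot mobility",
    environment="field",
    overview="Robot traverses a debris cone of mixed materials.",
    date_of_execution="2022-09-15",
)
record.results.append(RequirementResult(
    number=1,
    description="Robot crosses 10 m of rubble without assistance",
    result="Pass",
    verification=True,
    validation=False,
))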
5 Testing evaluation methodology development
5.1 General
Designing and developing a technology involves regular testing and evaluation to make sure that the requirements and quality standards are satisfied. Test evaluation is a process that critically examines the progress of the technology development and the achievements made towards the set objectives. It involves collecting and analysing information and data about the characteristics of a certain technology and its performance at different development stages. This evaluation methodology aims to measure the fulfilment of the user requirements, but could also be adapted to evaluate the achievement of the technical requirements.
Evaluation can be used to create a commercial and product advantage. It is recommended to include the evaluation activities early in the technology design cycle. Identifying the system limits and capabilities early helps to plan resources and make informed decisions.
The following questions help to plan the test evaluation methodology:
— Will the end user choose to use (or buy) the technology (keyword: value)?
— Can the end user figure out how to use the technology (user-friendliness; keyword: usability)?
— Can the solution provider build the system (keyword: feasibility)?
— Is this solution viable for further exploitation (keyword: exploitation potential)?
5.2 Factors for choosing the evaluation methodology
The solution provider chooses the test subjects and aim. The solution provider also decides whether it is a:
— platform test (e.g. ground robot with all the integrated sensors),
— sub-component test (e.g. sniffer which will be later integrated into the robot), or
— standalone technology (e.g. geophones).
These decisions are based on the development progress: in what stage and at what maturity level the technology is, and what is needed to enter the next phase. A minimal way to record the chosen scope is sketched below.
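As an informative illustration, the three test scopes above could be represented as a simple enumeration, as in the following Python sketch; the names are hypothetical.

from enum import Enum

class TestScope(Enum):
    PLATFORM = "platform test"            # e.g. ground robot with integrated sensors
    SUB_COMPONENT = "sub-component test"  # e.g. sniffer later integrated into the robot
    STANDALONE = "standalone technology"  # e.g. geophones

chosen_scope = TestScope.SUB_COMPONENT
print(f"Test scope: {chosen_scope.value}")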
The test organiser decides what the scope and aim of the test are. This shall be clear and communicated to all the test stakeholders (technology provider, end users, test coordinator, observers). Evaluation testing in the context of this document requires (hands-on) testing conducted jointly by the end user and solution provider.
Evaluation testing is a broader term combining both processes of verification and validation.
5.2.1 Verification process
The purpose of the verification process is to provide objective evidence that a system or a system element fulfils its specified requirements and characteristics [3]. Verification results indicate whether a product, service, or system complies with a regulation, requirement, specification, or imposed condition.
EXAMPLE In the testing context, if the technology requirement is to provide a video from a remote area, then through verification it has to be concluded whether the video is provided or not.
Verification testing can be performed at different stages in a product life cycle and aims to show that a
system or component is built according to its specifications, which are closely related to its technical
requirements. Questions to be asked are for example: Are we building the system right? Does the system
do what it has to do?
Verification tests usually evaluate intermediary products and are generally considered an internal process to facilitate failure analysis, accomplished by test personnel in a controlled environment. Verification tests involve reviews, tests, simulations, calculations and/or inspections in order to investigate whether the given results match the expected ones (investigating the reasons for deviations and deciding about the acceptance of deviations); they are conducted by technology providers during the development phase.
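As an informative illustration of the EXAMPLE above, a verification check for the requirement "a video from a remote area is provided" could look like the following Python sketch; receive_video_frames() is a hypothetical stand-in for the interface of the system under test, and the pass criterion is an assumed one.

def receive_video_frames(timeout_s: float) -> list:
    """Hypothetical stub: return the frames received within timeout_s."""
    return []  # the real system under test would supply frames here

def verify_video_requirement() -> bool:
    """Verification: is the specified requirement fulfilled (yes/no)?"""
    frames = receive_video_frames(timeout_s=10.0)
    return len(frames) > 0  # fulfilled if a video stream arrives

print("Requirement fulfilled:", verify_video_requirement())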
5.2.2 Validation process
The purpose of the validation process is to provide objective evidence that the system, when in use, fulfils
its business or mission objectives and stakeholder requirements, achieving its intended use in its
intended operational environment [3]. Validation testing checks whether a product, service, or system meets the needs of the customer and other identified stakeholders.
Validation testing is conducted under realistic conditions (or simulated ne
...

