Emergency Urban Search

Nikola Dourtchev, Mark Witkowski, James Mardell and Robert Spence
Department of Electrical and Electronic Engineering, Imperial College London, United Kingdom

Email: ndourtchev@gmail.com

http://dx.doi.org/10.61618/NYZQ3334

Abstract

The use of a video camera to support search immediately raises the question: for target location,
what is the most effective way of presenting video camera output to a human observer (‘spotter’)? We
examine three presentation modes: (a) unprocessed video output; (b) ‘static visual presentation’
(SVP), in which a series of static views of the search area can be examined in turn while keeping up
with drone movement; and (c) a novel mode called ‘Live SVP’ (LSVP), in which the locations sequentially
captured by a camera are presented discretely in real time, thereby preserving any movement, such
as a person waving to attract attention.
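The distinction between the three modes can be sketched as follows. This is an illustrative sketch only, not the authors’ implementation; the function names, the dwell and stride parameters, and the display stub are all assumptions for illustration.

```python
import time

Frame = object  # placeholder type for a captured camera image


def display(frame: Frame) -> None:
    """Stand-in for whatever actually renders a frame to the spotter."""
    pass


def present_video(frames, fps=25):
    # (a) Unprocessed video: every frame is shown as it arrives.
    for frame in frames:
        display(frame)
        time.sleep(1.0 / fps)


def present_svp(frames, stride=50, dwell=2.0):
    # (b) SVP: one static view per location, held long enough to be
    # examined while the drone moves on to the next location.
    for frame in frames[::stride]:
        display(frame)
        time.sleep(dwell)


def present_lsvp(frames, stride=50, dwell=2.0, fps=25):
    # (c) LSVP: each location is still presented discretely, but as a
    # short live clip rather than a frozen still, so any target
    # movement (e.g. a person waving) is preserved.
    clip_len = int(dwell * fps)
    for start in range(0, max(len(frames) - clip_len, 1), stride):
        for frame in frames[start:start + clip_len]:
            display(frame)
            time.sleep(1.0 / fps)
```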


The dynamics of aerial video were modelled using game development software. The resulting videos
were used to support realistic search exercises with human participants. The task attempted by each
participant was the identification of lost school children in the simulated environment, using one of the
presentation modes described above.


It was found that the new LSVP viewing mode was superior, among the modes tested, for moving targets in a
low-distraction environment. Another principal finding was that the density of distractors (i.e., non-target
objects) had a significant influence on the success of target identification.


KEY WORDS: Search, Rescue, Drone, Video, Target, Presentation
