Multi-source Multimodal Data and Deep Learning for Disaster Response: A Systematic Review

Date
2022-01
Authors
Prasanna R
Stock K
Doyle EEH
Publisher
Springer Nature
Rights
© The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd 2021
Abstract
Mechanisms for sharing information in a disaster situation have changed drastically due to new technological innovations throughout the world. The use of social media applications and collaborative technologies for information sharing has become increasingly popular. With these advancements, the amount of data collected grows daily across different modalities, such as text, audio, video, and images. However, to date, practical Disaster Response (DR) activities have mostly depended on textual information, such as situation reports and email content, and the benefit of other media is often not realised. Deep Learning (DL) algorithms have recently demonstrated promising results in extracting knowledge from multiple modalities of data, but the use of DL approaches for DR tasks has thus far mostly been pursued in an academic context. This paper conducts a systematic review of 83 articles to identify the successes, current and future challenges, and opportunities in using DL for DR tasks. Our analysis is centred around the components of learning, a set of aspects that govern the application of Machine Learning (ML) to a given problem domain. A flowchart and guidance for future research are developed as an outcome of the analysis to ensure the benefits of DL for DR activities are utilised.
Keywords
Deep learning, Disaster management, Disaster response, Literature review
Citation
SN Comput Sci, 2022, 3(1), pp. 92 - ?