Multi-source multimodal deep learning to improve situation awareness: an application of emergency traffic management: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Emergency Management at Massey University, Wellington, New Zealand

Date
2023
Publisher
Massey University
Rights
The Author
Abstract
Traditionally, disaster management has placed great emphasis on institutional warning systems, and people have been treated as victims rather than active participants. However, with the evolution of communication technology, the general public today contributes significantly to disaster management tasks, challenging traditional hierarchies in information distribution and acquisition. With mobile phones and Social Media (SM) platforms being widely used, people at disaster scenes act as non-technical sensors that provide contextual information in multiple modalities (e.g., text, image, audio and video) through these content-sharing applications. Research has shown that the general public has extensively used SM applications during disasters to report injuries or deaths, damage to infrastructure and utilities, cautions, evacuation needs, and missing or trapped people. Disaster responders depend significantly on data for their Situation Awareness (SA), the dynamic understanding of "the big picture" in space and time for decision-making. However, despite the benefits, processing SM data for disaster response brings multiple challenges. The most significant is that SM data contain rumours and fake or false information, so responding agencies have concerns about utilising SM for disaster response. As a result, a high volume of important, real-time data that could support responders' SA goes unused. In addition to SM, many other data sources produce information during disasters, including CCTV monitoring, emergency call centres, and online news. The data from these sources come in multiple modalities such as text, images, video, audio and metadata.
To date, researchers have investigated how such data can be automatically processed for disaster response using machine learning and deep learning approaches on a single source or single modality of data; only a few have investigated the use of multiple sources and modalities. Furthermore, there is currently no real-time system designed and tested for real-world scenarios that improves responder SA while cross-validating and exploiting SM data. This doctoral project, written in a "PhD-thesis-with-publication" format, addresses this gap by investigating the use of SM data for disaster response while improving reliability through validating data from multiple sources in real time. The research was guided by Design Science Research (DSR), which studies the creation of artefacts to solve practical problems of general interest. An artefact, a software prototype that integrates multi-source multimodal data for disaster response, was developed following the 5-stage design science method framework proposed by Johannesson et al. [175] as the roadmap for design, development and evaluation. First, the initial research problem was clearly stated and positioned, and its root causes were identified. During this stage, the problem area was narrowed from all disaster types to emergency traffic management, considering the real-time nature of the problem and the data available for the artefact's design, development and evaluation. Second, the requirements for the software artefact were captured through interviews with stakeholders from a number of disaster and emergency management and transport and traffic agencies in New Zealand; domain knowledge and experimental information were also captured by analysing the academic literature. Third, the artefact was designed and developed. The final stages focused on the demonstration and evaluation of the artefact.
The outcomes of this doctoral research demonstrate the potential of using validated SM data to enhance responders' SA. Furthermore, the research explored appropriate ways to fuse text, visual and voice data in real time to provide a comprehensive picture for disaster responders. Data integration was achieved through multiple components. First, methodologies and algorithms were developed to estimate traffic flow from CCTV images and footage by counting vehicle objects. These outcomes extend previous work by annotating a large New Zealand-based vehicle dataset for object detection and developing an algorithm that counts vehicles by vehicle class and movement direction. Second, a novel deep learning architecture is proposed for making short-term traffic flow predictions using weather data. Previous research has mostly used only traffic data for traffic flow prediction; this research goes beyond that work by exploiting the correlation between traffic flow and weather conditions. Third, an event extraction system is proposed to extract event templates from online news and SM text data, answering What (semantic), Where (spatial) and When (temporal) questions. This doctoral project therefore provides several contributions to the body of knowledge in deep learning and disaster research. In addition, an important practical outcome is an extensible event extraction system, applicable to any disaster, capable of generating event templates by integrating text and visual formats from online news and SM data to assist disaster responders' SA.
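The vehicle-counting component described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the function name, the tuple-based detection format, and the counting-line abstraction are assumptions for illustration, and the upstream step (a deep learning object detector running over CCTV frames) is assumed and not shown.

```python
from collections import Counter

def count_vehicles(detections):
    """Aggregate tracked detections into flow counts by (class, direction).

    detections: iterable of (vehicle_class, direction) tuples, one per
    tracked vehicle crossing a virtual counting line in the CCTV footage.
    Returns a Counter mapping (class, direction) pairs to counts, which
    can serve as a simple traffic flow estimate per observation window.
    """
    return Counter(detections)

# Example: four tracked vehicles crossing the counting line in one window.
flows = count_vehicles([
    ("car", "northbound"),
    ("car", "northbound"),
    ("truck", "southbound"),
    ("car", "southbound"),
])
print(flows[("car", "northbound")])  # 2
```

In a real pipeline the per-class, per-direction counts would be emitted per time window (e.g. per minute) to form the traffic flow time series used downstream for prediction.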
Keywords
Traffic congestion, Management, Situational awareness, Social media, Data Processing, Emergency management, Information services, Deep learning (Machine learning)