Research Activity Highlights
We have several areas of ongoing research. These include (a) a reliable and timely data collection architecture over intermittent networks, (b) spatial tagging and alerting mechanisms, and (c) scalable storage, retrieval, and presentation of multimodal streaming data. We have made progress in each of these areas, and early results are already being incorporated into the working system prototype.
(a) SAFIREnet: architecture for reliable and timely data collection over intermittent networks
SAFIREnet incorporates two novel technologies. The first is the use of multi-access networking to combine various link technologies into an integrated communication pipeline. The second is the use of data mules, which exploit the mobility of network nodes carried by firefighters to transparently move data through the incident site when connectivity to infrastructure is intermittent or unavailable. Together, these technologies allow sensor data to flow from firefighters inside the structure to the incident commander in a reliable and timely fashion, with no impact on the firefighters' ability to perform their normal activities. We have also designed a geo-messaging service that enables mobile users (firefighters) to send information to others who are currently at, or will later appear at, a specific location or user-defined region (e.g., "area clear," "potential chemical release in vicinity"). Our solution draws on concepts from disruption-tolerant networking to build a flexible service that leverages the intermittent ad-hoc connectivity between users. Through intelligent protocol design, we are able to achieve high levels of reliability and storage/transmission efficiency.
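The combination of data mules and geo-addressed messages described above can be illustrated with a small store-and-forward sketch. This is a minimal illustration, not the SAFIREnet protocol itself: the names (`GeoMessage`, `Node`, the epidemic-style `encounter` exchange) are hypothetical, and a production system would add acknowledgments, message expiry, and buffer management.

```python
import math
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GeoMessage:
    """A message addressed to a circular region rather than to a node ID."""
    msg_id: str
    text: str
    center: tuple   # (x, y) of the target region
    radius: float   # region radius, in meters

@dataclass
class Node:
    """A firefighter's device acting as a data mule."""
    name: str
    pos: tuple
    buffer: dict = field(default_factory=dict)    # msg_id -> GeoMessage
    delivered: list = field(default_factory=list)

    def send(self, msg: GeoMessage):
        self.buffer[msg.msg_id] = msg

    def encounter(self, other: "Node"):
        """Opportunistic exchange on contact: each side copies messages it lacks."""
        for msg in list(self.buffer.values()):
            other.buffer.setdefault(msg.msg_id, msg)
        for msg in list(other.buffer.values()):
            self.buffer.setdefault(msg.msg_id, msg)

    def move_to(self, pos: tuple):
        self.pos = pos
        # Deliver any carried message whose target region now contains this node.
        for msg_id, msg in list(self.buffer.items()):
            if math.dist(self.pos, msg.center) <= msg.radius:
                self.delivered.append(msg)
                del self.buffer[msg_id]

a = Node("ff_a", (0, 0))
b = Node("ff_b", (5, 0))
a.send(GeoMessage("m1", "area clear", center=(100, 100), radius=10.0))
a.encounter(b)          # b now carries a copy of m1
b.move_to((95, 100))    # b enters the target region, so m1 is delivered
print([m.text for m in b.delivered])  # -> ['area clear']
```

Note that node `a` keeps its copy after the encounter; redundant copies are what let delivery succeed despite intermittent connectivity, at the cost of the storage/transmission overhead the protocol design must manage.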
(b) Sensor Processing: Localization & Speech Analysis
We have developed a technology-agnostic localization framework that allows multiple location technologies to be combined to provide the level of localization accuracy the task requires. The framework is designed to maintain the necessary level of localization both outdoors (through GPS) and indoors (through a combination of technologies ranging from scene analysis using WiFi fingerprinting to signal-strength trilateration). The framework will enable each of these technologies to be incorporated into a powerful location-awareness system. The SAFIRE speech analysis system captures and analyzes speech and other acoustic signals. This allows triggers/alerts to be created on abnormal acoustic patterns (e.g., too long a period of silence), on the presence or absence of speech, on emotional features, and on ambient sounds. Novel semantic techniques are used to boost the accuracy of the automatic speech recognition (ASR) and thereby ensure high-quality alerts. Triggers can be merged with other sensor inputs, such as location, to provide a very powerful sensing capability. In addition, we are launching a study to explore the role of speech itself in localization: the main idea is to extract from speech information pertinent to localization (i.e., spatial cues such as nearness to landmarks mentioned by the firefighters) that can help reduce uncertainty in location.
(c) SAFIREstreams:
SAFIREstreams is a middleware technology for multimodal stream management that allows rapid development of multi-sensor applications for situation monitoring and awareness. It provides a separation between sensors and the semantic abstraction (or meaning) of the data, so that situation-monitoring applications can be written at the semantic level, relieving the application writer from the complexity of programming heterogeneous sensors. Raw sensor streams are converted into semantic event streams that can be used to implement triggers, alerts, and visualization. SAFIREstreams also offers mechanisms to archive both sensor and event data and to implement triggers or alerts over these streams. This has enabled us to build the spatial tagging and alerting mechanisms that are key capabilities of the FICB. For example, an alert raised through speech, when coupled with SAFIRE localization technology, allows the system to automatically annotate the map with an appropriate status icon at the speaker's location. This annotation can be made even if the firefighter does not know his exact position or cannot describe it accurately.
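The separation between raw sensors and semantic events, and the speech-plus-location trigger example above, can be sketched as follows. This is a simplified illustration under assumed names (`Event`, `EventBus`, `speech_alert_trigger`), not the SAFIREstreams API; the real middleware would handle heterogeneous sensor drivers, persistence, and distribution.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    """A semantic event abstracted from a raw sensor stream."""
    kind: str       # e.g. "location", "speech_alert"
    source: str     # firefighter ID
    payload: dict

class EventBus:
    """Minimal semantic event stream with an archive and trigger callbacks."""
    def __init__(self):
        self.archive: List[Event] = []
        self.triggers: List[Callable[[Event], None]] = []
        self.last_location = {}   # source -> (x, y), maintained from location events

    def publish(self, ev: Event):
        self.archive.append(ev)   # archive every event for later retrieval
        if ev.kind == "location":
            self.last_location[ev.source] = ev.payload["pos"]
        for trig in self.triggers:
            trig(ev)

map_annotations = []  # stand-in for the FICB map display

def speech_alert_trigger(bus: EventBus):
    """On a speech alert, annotate the map at the speaker's last known location."""
    def trig(ev: Event):
        if ev.kind == "speech_alert":
            pos = bus.last_location.get(ev.source)
            if pos is not None:
                map_annotations.append({"pos": pos, "icon": ev.payload["status"]})
    return trig

bus = EventBus()
bus.triggers.append(speech_alert_trigger(bus))
bus.publish(Event("location", "ff_7", {"pos": (12.0, 4.5)}))
bus.publish(Event("speech_alert", "ff_7", {"status": "area_clear"}))
print(map_annotations)  # -> [{'pos': (12.0, 4.5), 'icon': 'area_clear'}]
```

The trigger never asks the firefighter where he is: the annotation position comes from the localization event stream, which is how the map can be tagged correctly even when the speaker cannot describe his own position.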