Multimodal Deep Learning for Mobile and Wearable Sensing

 

SPEAKER

Valentin Radu

AFFILIATION

Research Associate, University of Edinburgh, School of Informatics


Abstract: An increasing number of devices around us come equipped with a variety of sensors and enough computational power to be considered smart devices (smartphones, smartwatches, smart toothbrushes, etc.). However, they typically perform each observation with a single dedicated sensor, e.g., the accelerometer to count steps or the barometer to detect changes in elevation. The potential to combine data from multiple sensor sources remains underutilised: these systems miss out on the improvement in detection accuracy that can be achieved only by fusing the diverse perspectives of multiple sensors, which is not an easy task. In this work we propose to use deep learning for a graceful integration of diverse sensing modalities. In our proposed solution we dedicate a neural network structure to extracting features from each sensing modality, followed by additional joint neural layers that perform the class detection. We show that this approach generalises well across a number of detection tasks specific to mobile and wearable devices, while operating within their energy budget.
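
To make the described architecture concrete, the sketch below shows one possible reading of it: a small per-modality subnetwork extracts features from each sensor stream, and joint layers fuse them for classification. This is a minimal illustration, not the speaker's actual code; the choice of PyTorch, the two modalities (accelerometer and gyroscope), and all layer sizes and class counts are assumptions made for the example.

# Minimal sketch of modality-specific feature extractors followed by joint
# classification layers, as described in the abstract (assumed details:
# PyTorch, accelerometer + gyroscope inputs, layer sizes, 6 classes).
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, accel_dim=3, gyro_dim=3, hidden=32, num_classes=6):
        super().__init__()
        # One feature extractor per sensing modality
        self.accel_branch = nn.Sequential(nn.Linear(accel_dim, hidden), nn.ReLU())
        self.gyro_branch = nn.Sequential(nn.Linear(gyro_dim, hidden), nn.ReLU())
        # Joint layers operate on the concatenated modality features
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, accel, gyro):
        fused = torch.cat([self.accel_branch(accel), self.gyro_branch(gyro)], dim=-1)
        return self.joint(fused)  # class logits

# Example: classify a batch of 8 sensor frames (random data for illustration)
model = MultimodalNet()
logits = model(torch.randn(8, 3), torch.randn(8, 3))  # shape (8, num_classes)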


Bio: Valentin Radu is a Research Associate at the University of Edinburgh, School of Informatics, where he develops machine learning tools to accelerate the execution of deep learning models on mobile and embedded systems. He holds a PhD in mobile systems from the University of Edinburgh.

