A Robust User Interface for IoT using Context-Aware Bayesian Fusion

Jian Wu, Reese Grimsley, Roozbeh Jafari¹

  • ¹Texas A&M University

Details

09:20 - 09:35 | Wed 7 Mar | Antilles CD | WeAT1.3

Session: BSN Session #5 – Machine Learning and Signal Processing for BSN

Abstract

As the Internet of Things (IoT) continues to expand into our daily lives, consumers face a growing catalogue of smart devices that promise to boost the intelligence of their homes. Currently, each device ships with its own proprietary user interface (UI), so the user must manage a separate application per device, creating a cumbersome app environment. Clearly, a single UI that can control all of these devices would be preferable. Such an interface should be accessible through natural forms of communication, for example speech, body language, and facial expressions. In this paper, we propose a framework for a multimodal UI built on a flexible, slotted command ontology and decision-level Bayesian fusion. Our case study explores command recognition for device control with a wearable system operated via speech and gestures, using a wrist-mounted inertial measurement unit (IMU) for hand gesture recognition. We achieve an accuracy of 94.82% on a set of 17 commands.
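The fusion step named in the abstract can be illustrated in a few lines. The sketch below is not the authors' implementation; it assumes a naive-Bayes (conditional-independence) combination of per-modality command posteriors over a shared command set, and all names (fuse_posteriors, speech, gesture) are hypothetical.

    import numpy as np

    def fuse_posteriors(posteriors, prior):
        # Decision-level Bayesian fusion under a conditional-independence
        # assumption: P(c | x_1..x_M) is proportional to
        # P(c) * prod_m [ P(c | x_m) / P(c) ].
        posteriors = [np.asarray(p, dtype=float) for p in posteriors]
        prior = np.asarray(prior, dtype=float)
        log_fused = np.log(prior) + sum(np.log(p) - np.log(prior)
                                        for p in posteriors)
        fused = np.exp(log_fused - log_fused.max())  # stabilize before normalizing
        return fused / fused.sum()

    # Toy example over three commands ("lights on", "lights off", "lock door"):
    # speech is confident, the IMU gesture classifier is ambiguous.
    speech  = [0.70, 0.20, 0.10]
    gesture = [0.40, 0.45, 0.15]
    prior   = [1/3, 1/3, 1/3]
    print(fuse_posteriors([speech, gesture], prior))  # fused posterior, sums to 1

With a uniform prior this reduces to multiplying the per-modality posteriors and renormalizing, so the more confident modality (speech, in the toy example) dominates the fused decision, which is the intended behavior of decision-level fusion.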