Towards Context-aware Implicit Multimodal Human-Computer Interaction (IWT623)

(i) Problem Statement
Over the last decade, we have seen a clear trend towards smart environments and living spaces where information processing is embedded into everyday objects. This introduces new challenges for human-computer interaction: how should we design implicit interaction for these sensor-based systems? The intelligent interpretation and adaptation of human-machine interaction is addressed by the rapidly emerging research field of Human-Computer Intelligent Interaction (HCII). This future form of HCII deals with the processing of implicit multimodal input, and we need solutions to process multimodal input signals in a context-dependent manner. Furthermore, we can only move towards HCII if we find methods for efficient context reasoning over longer time periods of multimodal behavioural cues.
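To make the problem of temporal context reasoning concrete, the following minimal Python sketch maintains a sliding time window over timestamped behavioural cues; the Cue and CueWindow names are purely illustrative assumptions and not part of the project.

from dataclasses import dataclass
from collections import deque

@dataclass
class Cue:
    modality: str     # e.g. "gaze", "speech" or "touch"
    value: str        # symbolic interpretation of the raw sensor signal
    timestamp: float  # seconds

class CueWindow:
    """Keep only the cues observed during the last `span` seconds."""

    def __init__(self, span: float):
        self.span = span
        self._cues = deque()

    def add(self, cue: Cue) -> None:
        self._cues.append(cue)
        # Evict cues that have fallen out of the temporal window.
        while self._cues and cue.timestamp - self._cues[0].timestamp > self.span:
            self._cues.popleft()

    def cues(self, modality: str) -> list:
        return [c for c in self._cues if c.modality == modality]

window = CueWindow(span=5.0)
window.add(Cue("gaze", "lamp", timestamp=12.0))
window.add(Cue("speech", "on", timestamp=14.5))  # both cues now co-occur in the window

A bounded window like this keeps reasoning efficient because each new cue triggers only local evictions; reasoning over the longer time periods targeted here would additionally require summarising past behaviour.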
(ii) Methods and Technologies
To take a step towards implicit multimodal human-computer interaction, we plan to develop an appropriate conceptual context model to represent the various multimodal behavioural cues and sensor data. Moreover, we will investigate solutions for efficient reasoning over large amounts of multimodal behavioural cues. The data managed by the context model will be interpreted by means of a powerful multimodal-oriented declarative rule language. We plan to build on existing work on a multimodal interaction framework carried out in the promoter's lab and add the necessary components for context reasoning. Last but not least, we are going to design and develop an authoring tool for defining context rules as well as the corresponding actions to be executed in a given context, in order to support the design of implicit human-computer interaction. We will empirically evaluate our conceptual context model as well as the authoring tool by implementing applications in cooperation with researchers from other domains.
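To give a flavour of what a declarative context rule could look like, the following Python sketch pairs a named condition over a simple dictionary-based context with an action to execute; the register_rule and evaluate helpers are hypothetical illustrations, not the actual rule language to be developed in this project.

from typing import Callable

Context = dict   # e.g. {"gaze": "lamp", "speech": "on"}

RULES = []       # registered (name, condition, action) triples

def register_rule(name: str,
                  condition: Callable[[Context], bool],
                  action: Callable[[Context], None]) -> None:
    RULES.append((name, condition, action))

def evaluate(ctx: Context) -> None:
    # Execute the action of every rule whose condition holds in this context.
    for name, condition, action in RULES:
        if condition(ctx):
            action(ctx)

# Example rule: gazing at the lamp while uttering "on" implicitly
# switches the lamp on, without an explicit command being issued.
register_rule(
    "lamp-on",
    lambda ctx: ctx.get("gaze") == "lamp" and ctx.get("speech") == "on",
    lambda ctx: print("turning lamp on"),
)

evaluate({"gaze": "lamp", "speech": "on"})  # prints: turning lamp on

Expressing such condition-action pairs declaratively, rather than hard-coding them, is what would allow an authoring tool to let domain experts define context rules without programming.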
(iii) Scientific Context
This project will be embedded in the research on multi-touch and multimodal interaction frameworks that is currently carried out in the Web & Information Systems Engineering (WISE) lab at the Vrije Universiteit Brussel (VUB) and is strongly related to the promoter's research vision on cross-media information spaces. The proposed project on context-aware implicit multimodal human-computer interaction further fits into the lab's long-term strategy towards intelligent offices and smart living spaces (Office of the Future). Within the department, there may be further collaboration with the Software Languages Lab (SOFT) on complex event processing and with the Computational Modeling Lab (COMO) on incorporating machine learning components in our evaluation step.
Date: 1 Jan 2013 → 31 Dec 2016
Keywords: Databases, Programming languages, Mobile Computing, Artificial Intelligence, Web systems, Software agents
Disciplines: Development of bioinformatics software, tools and databases; Applied mathematics in specific fields