
Project

ASSIST: Assistive Speech Interface for Smart Technologies

Designing a Spoken Language Understanding (SLU) system for Command-and-Control (CaC) applications is challenging. Components like Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) are often language- and application-dependent. Even with substantial design effort, users often still need to know exactly what to say for the system to do what they want. We propose an end-to-end SLU system that maps speech directly to semantics and that the user can train with demonstrations. The user teaches the system a new command by uttering the command and then demonstrating its meaning through an alternative interface. From the demonstration we extract a representation of the demonstrated task, and the system learns the mapping from the spoken command to that task. Because teaching requires effort from the user, it is crucial that the system learns quickly.
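The teach-by-demonstration loop described above can be sketched as follows. This is a minimal illustration, not the project's actual end-to-end model: the class name, the bag-of-words overlap scoring, and the dictionary task representations are all hypothetical stand-ins, and plain text utterances stand in for acoustic input.

```python
# Hypothetical sketch of learning a command -> task mapping from demonstrations.
# A real end-to-end SLU system would map speech directly to semantics; here,
# text utterances and word-overlap matching stand in for that pipeline.
from collections import Counter

class DemoTrainedSLU:
    """Maps spoken commands to demonstrated tasks via word-overlap matching."""

    def __init__(self):
        self.examples = []  # (word-count vector, task representation) pairs

    def teach(self, utterance, demonstrated_task):
        # The user utters a command and then demonstrates its meaning through
        # an alternative interface; we store the pair so the mapping can be
        # learned from only a few examples.
        self.examples.append((Counter(utterance.lower().split()),
                              demonstrated_task))

    def interpret(self, utterance):
        # Pick the taught command whose wording overlaps most with the input.
        words = Counter(utterance.lower().split())
        score = lambda example: sum((words & example[0]).values())
        best = max(self.examples, key=score)
        return best[1] if score(best) > 0 else None

slu = DemoTrainedSLU()
slu.teach("turn on the light", {"device": "light", "action": "on"})
slu.teach("close the curtains", {"device": "curtains", "action": "close"})
print(slu.interpret("please turn the light on"))
```

The point of the sketch is the interaction pattern, not the matching method: each demonstration yields a task representation paired with the user's own wording, so the vocabulary is defined by the user rather than by the system designer.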

Date: 18 Aug 2014 → 22 Feb 2019
Keywords: Assistive technology, Self-learning, ASSIST, Speech interface
Disciplines: Modelling, Multimedia processing, Biological system engineering, Signal processing
Project type: PhD project