
Project

Explicit and Implicit Tensor Decomposition-based Algorithms and Applications

Various real-life data such as time series and multi-sensor recordings can be represented by vectors and matrices, which are one-way and two-way arrays of numerical values, respectively. Valuable information can be extracted from these measured data matrices by means of matrix factorizations in a broad range of applications within signal processing, data mining, and machine learning. While matrix-based methods are powerful and well-known tools for various applications, they are inherently limited in the number of modes of variation they can capture, making them ill-suited to tackle multi-way data without loss of information. Higher-order tensors are a natural extension of vectors (first-order) and matrices (second-order) to multi-way arrays of numerical values, which have become ubiquitous in signal processing and data mining applications. By leveraging the powerful utilities offered by tensor decompositions, such as compression and uniqueness properties, we can extract more information from multi-way data than is possible by using only matrix tools.
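The vector/matrix/tensor hierarchy described above can be sketched in a few lines of NumPy. The data dimensions (8 sensors, 100 time samples, 5 trials) are hypothetical, chosen only to illustrate how the same measurements become a one-, two-, or three-way array:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multi-sensor recording: 8 sensors, 100 time samples,
# collected over 5 trials, stored as one long vector of values.
data = rng.standard_normal(8 * 100 * 5)

vector = data                      # first order: one-way array
matrix = data.reshape(8, 500)      # second order: sensors x (time, trials)
tensor = data.reshape(8, 100, 5)   # third order: sensors x time x trials

print(vector.ndim, matrix.ndim, tensor.ndim)   # 1 2 3
```

Note that the matrix view is forced to merge the time and trial modes into one, whereas the third-order tensor keeps all three modes of variation separate.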

While higher-order tensors allow us to properly accommodate multiple modes of variation in data, tensor problems are often large-scale because the number of entries in a tensor increases exponentially with the tensor order. This curse of dimensionality can, however, be alleviated or even broken by various techniques such as representing the tensor by an approximate but compact tensor model. While a pessimist only sees the curse, an optimist sees a significant opportunity for the compact representation of large-scale data vectors: by representing a large-scale vector (first-order) using a compact (higher-order) tensor model, the number of parameters needed to represent the underlying vector decreases exponentially in the order of the tensor representation. The key assumption underlying this blessing of dimensionality is that the data can be described by far fewer parameters than the actual number of samples, which is often the case in large-scale applications.
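A minimal sketch of this blessing of dimensionality, under the assumption of an exponential signal (a signal class the matrix/tensor literature commonly uses for such illustrations, not a claim about the thesis's specific data): a length-1024 vector v[k] = z**k becomes an exactly rank-1 tensor after reshaping ("tensorization"), so 20 factor entries suffice instead of 1024 samples.

```python
import numpy as np

# Hypothetical exponential signal: v[k] = z**k for k = 0, ..., 1023.
z = 0.99
v = z ** np.arange(2**10)

# Tensorize: reshape the vector into a 10th-order 2 x 2 x ... x 2 tensor.
T = v.reshape([2] * 10)

# Rank-1 factors: along mode d the signal varies as z**(i_d * 2**(9 - d)),
# because the flat index k has binary expansion k = sum_d i_d * 2**(9 - d).
factors = [np.array([1.0, z ** 2 ** (9 - d)]) for d in range(10)]

# Rebuild the full tensor from the 10 factor vectors: only 20 stored
# parameters represent all 1024 entries.
R = factors[0]
for a in factors[1:]:
    R = np.multiply.outer(R, a)

print(np.max(np.abs(T - R)))   # ~1e-16: exact up to rounding
```

In general the data is only approximately low-rank, and the compact model trades a small approximation error for the exponential reduction in parameters.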

In this thesis, we leverage this blessing of dimensionality for blind source separation and (blind) system identification, enabling us to tackle large-scale applications through explicit and implicit tensor decomposition-based methods. While explicit decompositions decompose a tensor that is known a priori, implicit decompositions decompose a tensor that is only known implicitly. We present a single-step framework for a particular type of implicit tensor decomposition, consisting of optimization-based and algebraic algorithms as well as generic uniqueness results. By properly exploiting additional structure in specific applications, we can significantly reduce the computational complexity of our optimization-based method. Our approach for large-scale instantaneous blind source separation and (blind) system identification enables various applications such as direction-of-arrival estimation in large-scale arrays and neural spike sorting in high-density recordings. Furthermore, we link implicit tensor decompositions to multilinear systems of equations, which are a generalization of linear systems, allowing us to propose a novel tensor-based classification scheme that we use for face recognition and irregular heartbeat classification with excellent performance.
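To make the notion of a multilinear system of equations concrete, here is a minimal sketch of its simplest (bilinear) instance, A @ kron(x, y) = b, in which the unknowns enter as a product rather than linearly. The dimensions and the two-step solution below are an illustration of a standard algebraic approach (linear solve followed by a rank-1 factorization), not the thesis's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bilinear system A @ kron(x, y) = b with unknowns
# x (length 3) and y (length 4): the simplest multilinear
# generalization of the linear system A @ x = b.
A = rng.standard_normal((20, 12))
x_true = rng.standard_normal(3)
y_true = rng.standard_normal(4)
b = A @ np.kron(x_true, y_true)

# Step 1: solve an ordinary linear least-squares problem for the
# combined unknown z = kron(x, y).
z, *_ = np.linalg.lstsq(A, b, rcond=None)

# Step 2: kron(x, y) reshaped to 3 x 4 is the rank-1 matrix
# outer(x, y), so a truncated SVD recovers x and y up to the
# inherent scaling ambiguity (x, y) -> (c*x, y/c).
U, s, Vt = np.linalg.svd(z.reshape(3, 4))
x = np.sqrt(s[0]) * U[:, 0]
y = np.sqrt(s[0]) * Vt[0]

print(np.linalg.norm(A @ np.kron(x, y) - b))   # ~ machine precision
```

The scaling ambiguity mirrors the uniqueness questions mentioned above: the product kron(x, y) is recovered exactly, while the individual factors are unique only up to such trivial indeterminacies.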


Date: 2 Sep 2014 → 10 Sep 2019
Keywords: higher-order tensor, multilinear algebra, numerical linear algebra, optimization, blind source separation, blind system identification, pattern recognition
Disciplines: Signal processing
Project type: PhD project