Project

Representation Learning with Restricted Kernel Machines

In this thesis, we extend Restricted Kernel Machines (RKMs) to combine the advantages of classical mathematical methods with deep learning models, addressing problems in several domains: generative modeling, disentangled representation learning, robust representation learning, and time-series forecasting. Our work builds on the framework of multi-view kernel principal component analysis (kernel PCA) and extends it to learn a common representation from multiple modalities, such as images and text, and to detect outliers while learning a robust representation. We propose several objective functions that promote disentangled representation learning and derive the evidence lower bound for the proposed model, showing its connections with variational autoencoders. Additionally, we extend the framework to time-series forecasting by adding a regularization term to the objective that captures the correlation between the current and past latent variables. The optimality conditions of this objective lead to a multi-view kernel PCA problem, which is used to explicitly add one-step-ahead information during training in a static regression setting. Finally, we experimentally validate and benchmark all proposed models on standard publicly available datasets, demonstrating their effectiveness on a range of real-world problems.
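The computational core of this framework is multi-view kernel PCA: each view's kernel matrix is centered in feature space, the centered kernels are combined, and the leading eigenvectors of the result serve as a shared latent representation. The following is a minimal NumPy sketch of that building block only; the RBF kernel, the gamma parameter, and the additive combination of per-view kernels are illustrative assumptions, not the thesis's exact formulation, and the learnable (neural-network) feature maps used in RKMs are omitted.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def multiview_kernel_pca(views, n_components=2, gamma=1.0):
    """Shared latent codes for n paired samples observed in several views.

    Each view's kernel matrix is double-centered and the centered kernels
    are summed; the leading eigenvectors of the sum give a common
    low-dimensional representation across the views.
    """
    n = views[0].shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    K = sum(H @ rbf_kernel(V, V, gamma) @ H for V in views)
    vals, vecs = np.linalg.eigh(K)                # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest ones
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Usage: a shared 2-D code for hypothetical paired image/text features.
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(100, 64))  # placeholder image features
txt_feats = rng.normal(size=(100, 32))  # placeholder text features
Z = multiview_kernel_pca([img_feats, txt_feats], n_components=2)
print(Z.shape)  # (100, 2)
```

In the RKM setting, latent codes of this kind play the role of the model's hidden features, and the fixed kernels above would be replaced by feature maps parametrized by neural networks and trained jointly; that extension is beyond this sketch.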

Date: 8 Nov 2018 → 5 Jun 2023
Keywords: restricted kernel machines, dynamical systems, deep learning
Disciplines: Applied mathematics in specific fields, Computer architecture and networks, Distributed computing, Information sciences, Information systems, Programming languages, Scientific computing, Theoretical computer science, Visual computing, Other information and computing sciences, Modelling, Biological system engineering, Signal processing, Control systems, robotics and automation, Design theories and methods, Mechatronics and robotics, Computer theory
Project type: PhD project