
Project

Learning to optimize nonconvex problems: provably and efficiently

Today's information society is centered on the collection of large amounts of data, from which machine learning (ML) aims to extract information. The key step in any ML technique is "training", where an optimization problem is solved to tune the parameters of the ML model. Such optimization problems are becoming extremely challenging due to their large dimension and the presence of highly nonconvex as well as nonsmooth terms. Optimization algorithms for such problems usually involve a large number of hyperparameters that must be hand-tuned through time-consuming experimentation, and they are prone to ill-conditioning and to getting stuck in local minima. As datasets keep growing rapidly and ML models become more complex, these issues become more pronounced, jeopardizing the applicability of ML techniques to an ever wider range of applications. To address this challenge, this project aims at learning-based techniques for devising ad-hoc, tuning-free optimization algorithms. A novel universal framework will be developed, serving as solid theoretical ground for new learning paradigms that train optimization methods subject to certificates of convergence (and its speed) and of the quality of the output solution.
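The abstract does not spell out the framework, but one common way to attach a convergence certificate to a learned optimizer is safeguarding: accept the step proposed by the learned update rule only when it achieves sufficient decrease, and otherwise fall back to a step with classical guarantees. The Python sketch below illustrates this idea on a toy nonconvex problem; it is a minimal illustration under these assumptions, not the project's actual method, and the names `safeguarded_step`, `learned_update`, and the fixed scaling `theta` are hypothetical stand-ins.

```python
import numpy as np

def safeguarded_step(f, grad, x, learned_update, alpha=1e-3, c=1e-8):
    """One iteration of a safeguarded learned optimizer (hypothetical sketch).

    The learned step is kept only if it passes a sufficient-decrease test;
    otherwise the iteration falls back to a plain gradient step, whose
    convergence guarantees are classical.
    """
    g = grad(x)
    x_trial = x + learned_update(x, g)       # step proposed by the learned rule
    if f(x_trial) <= f(x) - c * (g @ g):     # sufficient-decrease safeguard
        return x_trial                       # accept the learned step
    return x - alpha * g                     # reject: fall back to gradient step

# Toy nonconvex test problem: the Rosenbrock function.
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

# Stand-in "learned" rule: a fixed per-coordinate scaling of the negative
# gradient; in a real learned optimizer these scalings would be trained.
theta = np.array([2e-3, 5e-4])
learned_update = lambda x, g: -theta * g

x = np.array([-1.0, 1.0])
for _ in range(5000):
    x = safeguarded_step(f, grad, x, learned_update)
print(x, f(x))  # iterates move toward the minimizer (1, 1)
```

Because every accepted step decreases the objective by at least the amount the test demands, the scheme inherits the convergence guarantees of the fallback step no matter how the learned rule behaves, which is the flavor of certificate the abstract refers to.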

Date: 1 Jan 2020 → 31 Dec 2023
Keywords: nonconvex problems, machine learning (ML)
Disciplines: Numerical computation, Mathematical software, Automation and control systems, Systems theory, modelling and identification