Efficient first-order algorithms for large-scale distributed optimization are the main subject of this thesis. The algorithms considered cover a wide array of applications in machine learning, signal processing, and control. In recent years, many algorithms have been introduced that rely on (possibly a reformulation of) one of the classical splitting methods, namely forward-backward, Douglas-Rachford and ...
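As an illustration of the classical forward-backward splitting mentioned above, the sketch below applies it to a lasso problem. The problem instance, the helper names (`soft_threshold`, `forward_backward`), and the step-size choice are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting for the lasso problem
        minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration takes a forward (gradient) step on the smooth
    term and a backward (proximal) step on the nonsmooth term."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    gamma = 1.0 / L                 # step size in (0, 1/L]
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                            # forward step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward step
    return x
```

Douglas-Rachford splitting follows the same pattern but replaces the gradient step with a second proximal step, so it applies even when neither term is smooth.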
Multi-Pattern Recognition Through Maximization of Signal-to-Peak-Interference Ratio With Application to Neural Spike Sorting KU Leuven
On the convexity of bit depth allocation for linear MMSE estimation in wireless sensor networks KU Leuven
Nonlinear Model Predictive Control for Distributed Motion Planning in Road Intersections Using PANOC KU Leuven
Safe Learning-Based Control of Stochastic Jump Linear Systems: a Distributionally Robust Approach KU Leuven
SuperMann: A Superlinearly Convergent Algorithm for Finding Fixed Points of Nonexpansive Operators KU Leuven
We propose a Forward-Backward Truncated-Newton method (FBTN) for minimizing the sum of two convex functions, one of which is smooth. Unlike other proximal Newton methods, our approach does not employ variable metrics; instead, it is based on a reformulation of the original problem as the unconstrained minimization of a continuously differentiable function, the forward-backward envelope (FBE). We introduce a generalized Hessian for ...
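A minimal sketch of the forward-backward envelope for a problem of this form, F(x) = f(x) + g(x) with f smooth and g nonsmooth: for a step size gamma, the FBE is phi(x) = f(x) - (gamma/2)||grad f(x)||^2 + g^gamma(x - gamma*grad f(x)), where g^gamma is the Moreau envelope of g. The lasso instance and function names below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def prox_l1(z, tau):
    """Proximal operator of tau*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fbe(x, A, b, lam, gamma):
    """Forward-backward envelope of F(x) = f(x) + g(x) with
    f(x) = 0.5*||Ax - b||^2 (smooth) and g(x) = lam*||x||_1:
        phi(x) = f(x) - (gamma/2)*||grad f(x)||^2
                 + g^gamma(x - gamma*grad f(x)),
    where g^gamma is the Moreau envelope of g, evaluated via the
    forward-backward step T(x) = prox_{gamma*g}(x - gamma*grad f(x))."""
    grad = A.T @ (A @ x - b)
    f = 0.5 * np.linalg.norm(A @ x - b) ** 2
    z = x - gamma * grad
    T = prox_l1(z, gamma * lam)                 # forward-backward step
    moreau = lam * np.sum(np.abs(T)) + np.linalg.norm(T - z) ** 2 / (2 * gamma)
    return f - 0.5 * gamma * np.linalg.norm(grad) ** 2 + moreau
```

For gamma below 1/L (L the Lipschitz constant of grad f), the FBE satisfies the sandwich F(T(x)) <= phi(x) <= F(x) and is continuously differentiable, which is what makes unconstrained (truncated-)Newton-type minimization of phi possible.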