
Publication

Reweighted L2-regularized dual averaging approach for highly sparse stochastic learning

Book contribution - Book chapter, Conference contribution

© Springer International Publishing Switzerland 2014. Recent advances in dual averaging schemes for primal-dual subgradient methods and stochastic learning reveal an ongoing and growing interest in making stochastic and online approaches consistent and tailored towards sparsity-inducing norms. In this paper we focus on the reweighting scheme in the l2-Regularized Dual Averaging approach, which preserves the properties of a strongly convex optimization objective while approximating, in the limit, an l0-type penalty. Our analysis addresses the regret and convergence criteria of such an approximation. We derive our results in terms of a sequence of strongly convex optimization objectives obtained via the smoothing of a subdifferentiable, non-smooth loss function, e.g. the hinge loss. We report an empirical evaluation of the convergence in terms of the cumulative training error and the stability of the selected set of features. The experimental evaluation also shows some improvements over the l1-RDA method in generalization error.
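The abstract's idea can be illustrated with a minimal sketch of dual averaging under a per-coordinate l2 penalty whose weights are iteratively re-estimated. This is not the paper's algorithm: the reweighting rule `lam_j = lam / (w_j**2 + eps)`, the hinge-loss subgradient, and all parameter names below are assumptions chosen to show why such a scheme acts as an l0-like surrogate (since `lam_j * w_j**2 ≈ lam` whenever `w_j` is away from zero).

```python
import numpy as np

def reweighted_l2_rda(X, y, lam=0.1, gamma=1.0, eps=1e-3, epochs=5, seed=0):
    """Sketch of dual averaging with a reweighted per-coordinate l2 penalty.

    Hypothetical reweighting rule: lam_j = lam / (w_j**2 + eps), so that
    lam_j * w_j**2 ~ lam * [w_j != 0] for small eps -- an l0-like surrogate.
    The paper's exact scheme may differ; this only illustrates the idea.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    g_bar = np.zeros(d)          # running average of loss subgradients
    lam_j = np.full(d, lam)      # per-coordinate l2 penalty weights
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            margin = y[i] * (X[i] @ w)
            # subgradient of the hinge loss max(0, 1 - y * <w, x>)
            g = -y[i] * X[i] if margin < 1.0 else np.zeros(d)
            g_bar += (g - g_bar) / t
            # closed-form minimizer of
            #   <g_bar, w> + 0.5 * sum_j lam_j * w_j**2
            #             + 0.5 * (gamma / sqrt(t)) * ||w||^2
            w = -g_bar / (lam_j + gamma / np.sqrt(t))
            # reweighting: penalize near-zero coordinates more heavily
            lam_j = lam / (w ** 2 + eps)
    return w
```

Note that a reweighted l2 penalty shrinks irrelevant coordinates toward zero but, unlike an l1 penalty, does not produce exact zeros in this closed-form update; a final threshold on `|w_j|` would be needed to read off a sparse support.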
Book: Proc. of the 11th International Symposium on Neural Networks
Pages: 232 - 242
ISBN: 978-3-319-12435-3
Year of publication: 2014
BOF key label: yes
IOF key label: yes
Authors from: Higher Education