This blog post extends the convergence theory from the first part of these notes on the
Frank-Wolfe (FW) algorithm with convergence guarantees on the primal-dual gap, which generalize
and strengthen the guarantees obtained in the first part.
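For context, the primal-dual gap in question is the Frank-Wolfe (linear minimization) gap. Under the usual assumptions of a convex differentiable objective $f$ and a compact domain $\mathcal{D}$ (the notation here is a plausible reconstruction and may differ slightly from the notes), it reads

$$g(\boldsymbol{x}_t) = \max_{\boldsymbol{s} \in \mathcal{D}} \langle \nabla f(\boldsymbol{x}_t), \boldsymbol{x}_t - \boldsymbol{s} \rangle \;\geq\; f(\boldsymbol{x}_t) - f(\boldsymbol{x}^\star)\,,$$

so a bound on $g(\boldsymbol{x}_t)$ also bounds the primal suboptimality, and the gap is computable from quantities already produced at each FW iteration.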
Most proofs in optimization consist of using inequalities for a particular function class in some creative way.
This is a cheatsheet with the inequalities that I use most often. It considers classes of functions that are convex,
strongly convex and $L$-smooth.
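As a taste of what the cheatsheet contains, these are arguably the three most used inequalities, stated for a differentiable function $f$ with strong convexity parameter $\mu$ and smoothness constant $L$:

$$\begin{aligned}
&\text{convexity:} && f(y) \geq f(x) + \langle \nabla f(x),\, y - x \rangle\,,\\
&\mu\text{-strong convexity:} && f(y) \geq f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{\mu}{2}\|y - x\|^2\,,\\
&L\text{-smoothness:} && f(y) \leq f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{L}{2}\|y - x\|^2\,.
\end{aligned}$$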
The main contribution is to develop a parallel (fully asynchronous, no locks) variant of the SAGA algorithm. This is a stochastic variance-reduced method for general optimization, specially adapted for problems …
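To make the starting point concrete, here is a minimal sketch of the plain serial SAGA update (the variance reduction comes from a table of stored per-sample gradients); the asynchronous, lock-free parallel variant that is the actual contribution runs such updates concurrently from several threads and is not reproduced here. The names and the `grad_i` callback are illustrative, not the paper's.

```python
import numpy as np

def saga(grad_i, n_samples, x0, step_size, n_iter=1000, rng=np.random):
    """Serial SAGA sketch: grad_i(x, i) returns the gradient of the i-th term."""
    x = x0.copy()
    # table of stored gradients, one per sample, and their running average
    memory = np.array([grad_i(x, i) for i in range(n_samples)])
    avg = memory.mean(axis=0)
    for _ in range(n_iter):
        i = rng.randint(n_samples)
        g = grad_i(x, i)
        # variance-reduced gradient estimate: unbiased, with vanishing variance
        x -= step_size * (g - memory[i] + avg)
        # update the running average and the stored gradient for sample i
        avg += (g - memory[i]) / n_samples
        memory[i] = g
    return x
```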
Announcing the first public release of lightning, a library for large-scale linear classification, regression and ranking in Python. The library was started a couple of years ago by Mathieu Blondel, who also contributed the vast majority of the source code. I recently joined its development and decided it was about time for …
Together with other scikit-learn developers, we've created an umbrella organization for scikit-learn-related projects named scikit-learn-contrib. The idea is for this organization to host projects that are deemed too specific or too experimental to be included in the scikit-learn codebase, but that still offer an API compatible with scikit-learn and …
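To illustrate what an API compatible with scikit-learn means in practice, here is a toy estimator (not taken from any contrib project) following the scikit-learn conventions: hyper-parameters stored in `__init__`, fitted attributes ending in an underscore, and `fit` returning `self`, which is what makes the object usable inside pipelines, cross-validation and grid search.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class NearestMeanClassifier(BaseEstimator, ClassifierMixin):
    """Toy scikit-learn-compatible classifier: assigns each sample to the
    class whose (optionally shrunk) mean is closest."""

    def __init__(self, shrinkage=0.0):
        # hyper-parameters are only stored, never modified, in __init__
        self.shrinkage = shrinkage

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        class_means = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # shrink the class means towards the global mean
        self.means_ = (1 - self.shrinkage) * class_means + self.shrinkage * X.mean(axis=0)
        return self  # convention: fit returns the estimator

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        dists = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=-1)
        return self.classes_[dists.argmin(axis=1)]
```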
Recently I've implemented, together with Arnaud Rachez, the SAGA[1] algorithm in the lightning machine learning library (which, by the way, has recently been moved to the new scikit-learn-contrib project). The lightning library uses the same API as scikit-learn but is particularly adapted to online learning. As for the SAGA …
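As a minimal usage sketch of that API, assuming lightning exposes a SAGAClassifier with the scikit-learn fit/predict interface (the estimator name and its defaults are from memory, so check the lightning documentation for the exact signature):

```python
import numpy as np
from sklearn.datasets import make_classification
from lightning.classification import SAGAClassifier  # assumed import path

# synthetic binary classification problem
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

# the same fit/predict workflow as any scikit-learn estimator
clf = SAGAClassifier()
clf.fit(X, y)
accuracy = np.mean(clf.predict(X) == y)
print("training accuracy: %.3f" % accuracy)
```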