Differentiating through optimization is a fundamental problem in hyperparameter optimization, dataset distillation, meta-learning and optimization as a layer, to name a few. In this blog post we'll look into one of the main approaches to differentiate through optimization: unrolled differentiation. With the help of polynomials, we'll be able to derive …
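To make the idea concrete, here is a minimal sketch (a toy example of my own, not the post's code) of unrolled differentiation: we differentiate the loss after $k$ gradient steps on $f(x) = \frac{a}{2} x^2$ with respect to the step-size $h$, propagating the derivative $\partial x_t / \partial h$ alongside the iterates:

```python
# Forward-mode unrolled differentiation on a 1-d quadratic (toy sketch).
def unrolled_grad(h, a=2.0, x0=1.0, k=10):
    x, dx = x0, 0.0                  # iterate and its derivative w.r.t. h
    for _ in range(k):
        g = a * x                    # gradient of f at the current iterate
        dx = dx - g - h * a * dx     # chain rule through x <- x - h * g
        x = x - h * g
    # loss after k steps and its derivative w.r.t. the step-size
    return 0.5 * a * x**2, a * x * dx

loss, dloss_dh = unrolled_grad(h=0.1)
```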
This blog post discusses the convergence rate of the Stochastic Gradient Descent with Stochastic Polyak Step-size (SGD-SPS) algorithm for minimizing a finite sum objective. Building upon the proof of the previous post, we show that the convergence rate can be improved to O(1/t) under the additional assumption that …
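In the standard notation for this setting (an assumption on my part, since the excerpt is truncated), the problem is the finite sum

$$
\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)\,,
$$

and an $O(1/t)$ rate is a bound of the form $f(\bar{x}_t) - f^\star \leq C/t$, typically stated for an averaged iterate $\bar{x}_t$.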
The stochastic Polyak step-size (SPS) is a practical variant of the Polyak step-size for stochastic optimization. In this blog post, we'll discuss the algorithm and provide a simple analysis for convex objectives with bounded gradients.
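As a concrete illustration (a toy least-squares example, not the post's code), each SGD-SPS step samples a random index and sets the step-size from the sampled loss and gradient:

```python
# SGD with stochastic Polyak step-size on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
x = np.zeros(10)

for t in range(1000):
    i = rng.integers(len(b))
    residual = A[i] @ x - b[i]              # f_i(x) = 0.5 * residual**2
    grad = residual * A[i]                  # gradient of f_i at x
    fi, fi_star = 0.5 * residual**2, 0.0    # f_i^* = 0 for least squares
    step = (fi - fi_star) / (grad @ grad + 1e-12)  # the SPS step-size
    x -= step * grad
```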
Backtracking step-size strategies (also known as adaptive step-size or approximate line-search), which set the step-size based on a sufficient decrease condition, are the standard way to choose the step-size in gradient descent and quasi-Newton methods. However, these techniques are much less common for Frank-Wolfe-like algorithms. In this blog post I …
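For context, the sufficient decrease idea in its generic Armijo-type form (shown here as an illustration, not the post's exact Frank-Wolfe variant) shrinks the step until a decrease condition holds along a descent direction:

```python
# Generic backtracking with a sufficient decrease (Armijo) condition.
def backtrack(f, x, d, grad_dot_d, step=1.0, shrink=0.5, c=1e-4):
    # grad_dot_d = <grad f(x), d> must be negative (descent direction),
    # otherwise the loop below never terminates.
    while f(x + step * d) > f(x) + c * step * grad_dot_d:
        step *= shrink               # decrease condition failed: shrink
    return step
```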
Six: All of this has happened before. Baltar: But the question remains, does all of this have to happen again? Six: This time I bet no. Baltar: You know, I've never known you to play the optimist. Why the change of heart? Six: Mathematics. Law of averages. Let a complex …
This is the first of a series of blog posts on short and beautiful proofs in optimization (let me know what you think in the comments!). For this first post in the series I'll show that stochastic gradient descent (SGD) converges exponentially fast to a neighborhood of the solution.
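To see the phenomenon numerically, here's a quick simulation sketch (toy problem and constants are my own choices): constant step-size SGD on a least-squares objective drops the error fast at first, then stalls in a noise-dominated neighborhood of the solution:

```python
# Constant step-size SGD: geometric decrease, then a plateau.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x, step = np.zeros(5), 0.05

for t in range(2001):
    i = rng.integers(len(b))
    x -= step * (A[i] @ x - b[i]) * A[i]          # one SGD step
    if t % 500 == 0:
        print(t, np.linalg.norm(x - x_star))      # fast drop, then plateau
```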
While the most common accelerated methods, like Polyak and Nesterov, incorporate a momentum term, a little-known fact is that simple gradient descent (no momentum) can achieve the same rate through only a well-chosen sequence of step-sizes. In this post we'll derive this method and, through simulations, discuss its practical …
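A sketch of the construction, for a quadratic whose Hessian eigenvalues lie in $[\mu, L]$ (my own minimal rendering of the idea, not the post's code): run plain gradient descent with step-sizes equal to the inverses of the Chebyshev nodes of $[\mu, L]$, no momentum anywhere:

```python
# Gradient descent with a Chebyshev step-size schedule (no momentum).
import numpy as np

def chebyshev_steps(mu, L, n):
    k = np.arange(n)
    nodes = (mu + L) / 2 + (L - mu) / 2 * np.cos((2 * k + 1) * np.pi / (2 * n))
    return 1.0 / nodes               # one step-size per iteration

H = np.diag(np.linspace(0.1, 1.0, 5))    # eigenvalues inside [mu, L]
x = np.ones(5)
for h in chebyshev_steps(mu=0.1, L=1.0, n=20):
    x -= h * (H @ x)                 # plain gradient step on 0.5 x^T H x
```

After all $n$ steps the residual polynomial $\prod_k (1 - h_k \lambda)$ vanishes at the Chebyshev nodes, which is what yields the accelerated rate; in floating point, the ordering of the steps matters for numerical stability.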
I've seen things you people wouldn't believe.
Valleys sculpted by trigonometric functions.
Rates on fire off the shoulder of divergence.
Beams glitter in the dark near the Polyak gate.
All those landscapes will be lost in time, like tears in rain. Time to halt.
We can tighten the analysis of gradient descent with momentum through a combination of Chebyshev polynomials of the first and second kind. Following this connection, we'll derive one of the most iconic methods in optimization: Polyak momentum.
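For reference, the update we'll arrive at is the heavy-ball iteration, whose textbook parameters for an $L$-smooth, $\mu$-strongly convex quadratic are

$$
x_{t+1} = x_t - \gamma \nabla f(x_t) + m\,(x_t - x_{t-1}), \qquad \gamma = \frac{4}{(\sqrt{L} + \sqrt{\mu})^2}\,, \quad m = \left(\frac{\sqrt{L} - \sqrt{\mu}}{\sqrt{L} + \sqrt{\mu}}\right)^{2}.
$$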
There's a fascinating link between the minimization of quadratic functions and polynomials. A link that goes deep and allows us to phrase optimization problems in the language of polynomials, and vice versa. Using this connection, we can tap into centuries of research in the theory of polynomials and shed new light on …
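The identity at the heart of this link (stated here in standard notation as a sketch): for gradient-based methods whose iterates stay in the span of past gradients, applied to the quadratic $f(x) = \frac{1}{2} x^\top H x - b^\top x$, the error at iteration $t$ satisfies

$$
x_t - x^\star = P_t(H)\,(x_0 - x^\star)\,,
$$

where $P_t$ is a degree-$t$ polynomial with $P_t(0) = 1$, so bounding the convergence rate reduces to bounding $\max_{\lambda \in [\mu, L]} |P_t(\lambda)|$.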
This blog post extends the convergence theory from the first part of these notes on the Frank-Wolfe (FW) algorithm with convergence guarantees on the primal-dual gap, which generalize and strengthen those obtained in the first part.
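The quantity in question (in standard Frank-Wolfe notation, an assumption here since the excerpt doesn't define it) is the gap

$$
g(x_t) = \max_{s \in \mathcal{C}}\, \langle \nabla f(x_t),\, x_t - s \rangle \;\geq\; f(x_t) - f(x^\star)\,,
$$

which upper-bounds the primal suboptimality and is computable from the same linear minimization oracle the algorithm already uses.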
Most proofs in optimization consist in using inequalities for a particular function class in some creative way. This is a cheatsheet with the inequalities I use most often. It covers the classes of functions that are convex, strongly convex, and $L$-smooth.
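As a taste, the three basic inequalities, one per class:

$$
\begin{aligned}
f(y) &\geq f(x) + \langle \nabla f(x),\, y - x\rangle && \text{(convexity)}\\
f(y) &\geq f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{\mu}{2}\|y - x\|^2 && \text{($\mu$-strong convexity)}\\
f(y) &\leq f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{L}{2}\|y - x\|^2 && \text{($L$-smoothness)}
\end{aligned}
$$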
The main contribution is to develop a parallel (fully asynchronous, no locks) variant of the SAGA algorithm. This is a stochastic variance-reduced method for general optimization, specially adapted for problems …
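For reference, here's the sequential SAGA update the parallel variant builds on (a toy sketch of mine, not the paper's code): keep a memory of one past gradient per sample and use it to reduce the variance of each step:

```python
# Sequential SAGA on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
n, x = len(b), np.zeros(10)
step = 1.0 / (3 * (A**2).sum(axis=1).max())      # ~ 1 / (3 L)

memory = np.array([(A[i] @ x - b[i]) * A[i] for i in range(n)])
avg = memory.mean(axis=0)                        # running gradient average

for t in range(5000):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]                 # fresh gradient of f_i
    x -= step * (g - memory[i] + avg)            # variance-reduced step
    avg += (g - memory[i]) / n                   # keep the average in sync
    memory[i] = g
```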