Six: All of this has happened before.
Baltar: But the question remains, does all of this have to happen again?
Six: This time I bet no.
Baltar: You know, I've never known you to play the optimist. Why the change of heart?
Six: Mathematics. Law of averages. Let a complex …
While the most common accelerated methods, such as Polyak's and Nesterov's, incorporate a momentum term, a little-known fact is that plain gradient descent (no momentum) can achieve the same accelerated rate using only a well-chosen sequence of step-sizes. In this post we'll derive this method and, through simulations, discuss its practical …
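As a concrete illustration, here's a minimal sketch of the idea, assuming a quadratic objective whose Hessian eigenvalues lie in a known interval [mu, L]: the step-sizes are the reciprocals of the roots of a degree-T Chebyshev polynomial rescaled to that interval. The helper names `chebyshev_steps` and `gradient_descent` are mine, not from the post.

```python
import numpy as np

def chebyshev_steps(mu, L, T):
    """Reciprocals of the roots of the degree-T Chebyshev polynomial
    rescaled from [-1, 1] to the interval [mu, L]."""
    k = np.arange(T)
    roots = np.cos(np.pi * (2 * k + 1) / (2 * T))        # roots on [-1, 1]
    return 1.0 / ((L + mu) / 2 + (L - mu) / 2 * roots)   # map to [mu, L], invert

def gradient_descent(grad, x0, step_sizes):
    """Plain gradient descent: one pre-computed step-size per iteration."""
    x = x0.copy()
    for gamma in step_sizes:
        x = x - gamma * grad(x)
    return x

# Toy quadratic f(x) = 0.5 x^T H x with Hessian eigenvalues in [mu, L].
mu, L, T = 0.1, 1.0, 32
H = np.diag(np.linspace(mu, L, 20))
x_T = gradient_descent(lambda x: H @ x, np.ones(20), chebyshev_steps(mu, L, T))
print(np.linalg.norm(x_T))  # error after T steps, no momentum involved
```

One practical caveat: applying these step-sizes in naive sorted order is numerically unstable for large T, so in practice the ordering of the sequence has to be chosen with care.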
I've seen things you people wouldn't believe.
Valleys sculpted by trigonometric functions.
Rates on fire off the shoulder of divergence.
Beams glitter in the dark near the Polyak gate.
All those landscapes will be lost in time, like tears in rain. Time to halt.
We can tighten the analysis of gradient descent with momentum through a combination of Chebyshev polynomials of the first and second kind. Following this connection, we'll derive one of the most iconic methods in optimization: Polyak momentum.
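For reference, here is a minimal sketch of the method the post arrives at: the heavy-ball iteration with its classical tuning for quadratics, step-size 4/(sqrt(L) + sqrt(mu))^2 and momentum ((sqrt(L) - sqrt(mu))/(sqrt(L) + sqrt(mu)))^2. The helper name `polyak_momentum` and the toy quadratic are mine.

```python
import numpy as np

def polyak_momentum(grad, x0, mu, L, T):
    """Heavy-ball iteration with the classical tuning for quadratics:
    x_{t+1} = x_t - gamma * grad(x_t) + m * (x_t - x_{t-1})."""
    gamma = (2.0 / (np.sqrt(L) + np.sqrt(mu))) ** 2                     # step-size
    m = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2  # momentum
    x_prev = x = x0.copy()   # momentum term vanishes on the first step
    for _ in range(T):
        x, x_prev = x - gamma * grad(x) + m * (x - x_prev), x
    return x

# Toy quadratic f(x) = 0.5 x^T H x with Hessian eigenvalues in [mu, L].
mu, L = 0.1, 1.0
H = np.diag(np.linspace(mu, L, 20))
x_T = polyak_momentum(lambda x: H @ x, np.ones(20), mu, L, T=100)
print(np.linalg.norm(x_T))
```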
There's a fascinating link between the minimization of quadratic functions and polynomials, a link that goes deep and allows us to phrase optimization problems in the language of polynomials and vice versa.
Using this connection, we can tap into centuries of research in the theory of polynomials and
shed new light on …
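In a nutshell, the link is this: for a quadratic with Hessian H, the error of a first-order method after t steps is a degree-t polynomial in H applied to the initial error, x_t - x* = P_t(H)(x_0 - x*), with P_t(0) = 1. Below is a small numerical check of this identity for gradient descent with constant step-size gamma, whose residual polynomial is P_t(lambda) = (1 - gamma * lambda)^t; the toy setup (diagonal H, random x*) is my own.

```python
import numpy as np

# Numerical check of x_t - x* = P_t(H) (x_0 - x*) for gradient descent
# with constant step-size gamma, where P_t(lambda) = (1 - gamma*lambda)^t.
rng = np.random.default_rng(0)
d, t, gamma = 5, 10, 0.3
eigs = rng.uniform(0.1, 1.0, d)
H = np.diag(eigs)                      # diagonal Hessian for readability
x_star = rng.standard_normal(d)
x = x0 = rng.standard_normal(d)
for _ in range(t):
    x = x - gamma * H @ (x - x_star)   # gradient of 0.5 (x - x*)^T H (x - x*)

residual = np.diag((1 - gamma * eigs) ** t)  # P_t evaluated at H
assert np.allclose(x - x_star, residual @ (x0 - x_star))
```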