LLE comes in different flavours

I haven't worked on the manifold module since last time, yet thanks to Jake VanderPlas there are some cool features I can talk about. First off, the ARPACK backend is finally working and gives a significant speedup over the lobpcg + PyAMG approach. The key is to use ARPACK's shift-invert mode instead of the regular mode (shift-invert converges much faster on the smallest eigenvalues, which are exactly the ones LLE needs), a subtle change that drove me crazy for weeks and that Jake spotted by comparing it to his C++ LLE implementation. More importantly, some variants of Locally Linear Embedding (LLE) have been added to the module: Modified LLE, Hessian LLE and Local Tangent Space Alignment (LTSA). These seem to produce better embeddings than classical LLE, with timings that are not far apart. All the LLE variants currently implemented can be seen in this example, where they are applied to an S-shaped dataset.
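
For the curious, here is a minimal sketch of how to try the variants side by side. It assumes the current scikit-learn API (LocallyLinearEmbedding's method and eigen_solver parameters, and datasets.make_s_curve for the S-shaped dataset), which may differ from the version this post was written against:

```python
from time import time

from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

# S-shaped dataset, similar to the one in the example mentioned above
X, color = make_s_curve(n_samples=1000, random_state=0)

# 'standard' is classical LLE; the other three are the new variants
for method in ("standard", "modified", "hessian", "ltsa"):
    lle = LocallyLinearEmbedding(
        n_neighbors=12,
        n_components=2,
        method=method,
        eigen_solver="arpack",  # the shift-invert ARPACK backend
        random_state=0,
    )
    t0 = time()
    Y = lle.fit_transform(X)
    print(f"{method:8s} {time() - t0:5.2f}s  "
          f"reconstruction error: {lle.reconstruction_error_:.2e}")
```

Comparing the printed reconstruction errors and timings gives a rough feel for the trade-off between the variants on this dataset.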