Announcing the first public release of lightning, a library for large-scale linear classification, regression and ranking in Python. The library was started a couple of years ago by Mathieu Blondel, who also contributed the vast majority of the source code. I recently joined its development and decided it was about time for a v0.1!

Prebuilt conda packages are available for all operating systems (thank goodness for AppVeyor). More information on lightning's website.

Together with other scikit-learn developers we've created an umbrella organization for scikit-learn-related projects named scikit-learn-contrib. The idea is for this organization to host projects that are deemed too specific or too experimental to be included in the scikit-learn codebase but that still offer a scikit-learn-compatible API and would like to benefit from the visibility of being labeled as scikit-learn-compatible.

We've set two requirements for being under this umbrella right now (this might evolve in the future). The first requirement is to have a scikit-learn compatible API, i.e., to follow the guide in the scikit-learn documentation so that objects can be used by scikit-learn meta-estimators (such as GridSearchCV). The second is that projects should be actively maintained and have a high-quality codebase. Judging the quality of a codebase is difficult and subjective, but we agreed that at the bare minimum the source code should be tested using continuous integration tools such as Travis and reach good test coverage (above 80%). More information is available on the scikit-learn-contrib repository.

The first project to be hosted by this organization is lightning, but we hope that others will follow. If you would like to submit a new project, open an issue at the main project and we will look into it. There is also a project template for new and old projects.

Recently, together with Arnaud Rachez, I implemented the SAGA[^1] algorithm in the lightning machine learning library (which, by the way, has recently moved to the new scikit-learn-contrib organization). The lightning library uses the same API as scikit-learn but is particularly adapted to online learning. As for the SAGA algorithm, its performance is similar to that of other variance-reduced stochastic algorithms such as SAG[^3] or SVRG[^2], but it has the advantage over SAG[^3] that it allows non-smooth penalty terms (such as $\ell_1$ regularization). It is implemented in lightning as SAGAClassifier and SAGARegressor.

We have taken care to make this implementation as efficient as possible. As with most stochastic gradient algorithms, a naive implementation takes only a few lines of code. However, there are many tricks that are time-consuming and error-prone to implement yet make a huge difference in efficiency.
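
For illustration, here is what such a naive implementation can look like, using a least-squares loss for simplicity (my own sketch, not lightning's implementation): SAGA keeps a table of past per-sample gradients and corrects each stochastic gradient with the table entry and the table average.

```python
import numpy as np

def saga_least_squares(X, y, step_size, n_epochs=100, seed=0):
    # naive SAGA for the least-squares loss (illustration only)
    n_samples, n_features = X.shape
    rng = np.random.RandomState(seed)
    w = np.zeros(n_features)
    grad_table = np.zeros((n_samples, n_features))  # past per-sample gradients
    grad_avg = np.zeros(n_features)                 # running average of the table
    for _ in range(n_epochs * n_samples):
        i = rng.randint(n_samples)
        g_new = (X[i].dot(w) - y[i]) * X[i]  # fresh gradient at sample i
        # variance-reduced update: fresh gradient - old gradient + table average
        w -= step_size * (g_new - grad_table[i] + grad_avg)
        grad_avg += (g_new - grad_table[i]) / n_samples
        grad_table[i] = g_new
    return w
```

The tricks alluded to above include, e.g., lazy updates for sparse data and storing only a scalar per sample for linear models, none of which this naive version does.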

A small example, more as a sanity check than to claim anything. The following plot shows the suboptimality as a function of time for three similar methods: SAG, SAGA and SVRG. The dataset used is the RCV1 dataset (test set, obtained from the LIBSVM webpage), consisting of 677,399 samples and 47,236 features. Interestingly, all methods can solve this rather large-scale problem within a few seconds. Among them, SAG and SAGA have very similar performance and SVRG appears to be somewhat faster.

A note about the benchmarks: it is difficult to fairly compare stochastic gradient methods because in the end it usually boils down to how you choose the step size. In this plot I set the step size of all methods to 1/(3L), where L is the Lipschitz constant of the objective function, as I think this is a popular choice. I would have preferred 1/L, but SVRG was not converging for that step size. The code for the benchmarks can be found here.
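
For reference, a common way to compute such a constant for the $\ell_2$-regularized logistic loss in the SAG/SAGA setting is to bound the Lipschitz constant of the individual per-sample gradients (my sketch; lightning may compute it differently):

```python
import numpy as np

def lipschitz_logistic(X, alpha):
    # each per-sample logistic-loss gradient has Lipschitz constant
    # ||X_i||^2 / 4 + alpha; take the max over samples
    return 0.25 * np.max(np.sum(X ** 2, axis=1)) + alpha

X = np.random.RandomState(0).randn(50, 5)
step_size = 1. / (3 * lipschitz_logistic(X, alpha=1.0))
```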

A. Defazio, F. Bach and S. Lacoste-Julien. "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives" (2014).

R. Johnson and T. Zhang. "Accelerating stochastic gradient descent using predictive variance reduction." Advances in Neural Information Processing Systems, 2013.

M. Schmidt, N. Le Roux and F. Bach. "Minimizing finite sums with the stochastic average gradient." arXiv preprint arXiv:1309.2388 (2013).

Cross-validation iterators in scikit-learn are simply generator objects, that is, Python objects that implement the __iter__ method and that, for each call to this method, yield (rather than return) the indices or a boolean mask for the train and test sets. With this in mind, implementing new cross-validation iterators that behave like the ones in scikit-learn is easy. Here is a small code snippet that implements a hold-out cross-validation generator following the scikit-learn API.

```python
import numpy as np
from sklearn.utils import check_random_state


class HoldOut:
    """Hold-out cross-validator generator.

    In the hold-out, the data is split only once into a train set
    and a test set. Unlike in other cross-validation schemes, the
    hold-out consists of only one iteration.

    Parameters
    ----------
    n : int
        Total number of samples.
    test_size : float
        Fraction of samples to use as test set. Must be a number
        between 0 and 1.
    random_state : int
        Seed for the random number generator.
    """
    def __init__(self, n, test_size=0.2, random_state=0):
        self.n = n
        self.test_size = test_size
        self.random_state = random_state

    def __iter__(self):
        n_test = int(np.ceil(self.test_size * self.n))
        n_train = self.n - n_test
        rng = check_random_state(self.random_state)
        permutation = rng.permutation(self.n)
        ind_test = permutation[:n_test]
        ind_train = permutation[n_test:n_test + n_train]
        yield ind_train, ind_test
```
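
As a quick sanity check, the same split logic can be written in pure Python with the standard library alone (holdout_indices is a hypothetical helper mirroring the generator above, not part of scikit-learn):

```python
import math
import random

def holdout_indices(n, test_size=0.2, seed=0):
    # single train/test split over a random permutation of range(n)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    n_test = int(math.ceil(test_size * n))
    return perm[n_test:], perm[:n_test]

train, test = holdout_indices(10, test_size=0.3)
# train and test are disjoint and together cover all 10 indices
```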

Contrary to other cross-validation schemes, hold-out relies on a single split of the data. It is well known that in practice hold-out performs much worse than KFold or LeaveOneOut schemes. However, hold-out has the advantage that its theoretical properties are easier to derive. For examples of this, see e.g. Section 8.7 of "Theory of classification: a survey of some recent advances" and the very recent "The reusable holdout".

TL;DR I created a gallery for IPython/Jupyter notebooks. Check it out :-)

A couple of months ago I put online a website that displays a collection of IPython/Jupyter notebooks. It is a website that collects user-submitted, publicly available notebooks and displays them with a nice screenshot. The great thing about this website compared to other similar efforts is that it updates and categorizes the notebooks (by date and views) automatically. You can even search for anything in this database, which already contains more than 400 notebooks!

Vision

I would like to make a website where it is possible to

Find the notebook you are looking for. There is precious information (examples, documentation, tutorials, etc.) contained within IPython/Jupyter notebooks. It should be easy to find this information.

Discover new notebooks. I would like to see new notebooks as they are submitted in order to discover new notebooks, possibly about new technologies that I did not know about.

How it works

Under the hood there's a Django app, whose source code lives here (don't hesitate to use the issues feature on GitHub to suggest features or report bugs). To propose new notebooks, there's a tab where anyone can leave the URL of a notebook. If all goes right, a screenshot will be taken and incorporated into the database. The only non-trivial part of this process is taking the screenshot, for which I use a bit of JavaScript around PhantomJS.

Future plans

What you see is just the beginning! As time permits, I would like to implement user authentication so that registered users can bookmark their favorite notebooks, upvote and downvote notebooks, etc. Categorization of notebooks (e.g. into bins such as Math, Physics, Machine Learning, R, Python, Julia, etc.) is also high on my list. Leave me a comment if you would like to see some specific feature!

Last Friday was PyData Paris, in the words of the organizers, ''a gathering of users and developers of data analysis tools in Python''.

The organizers did a great job putting the event together, and it started with an already full room for Gael's keynote.

My take-away message from the talks is that Python has grown in 5 years from a language marginally used in some research environments into one of the main languages for data science, used both in research labs and in industry.

My personal highlights were (note that there were two parallel tracks)

Chloe-Agathe gave a short talk on DREAM challenges. In her talk she mentioned GPy. One year ago, I visited Neil Lawrence at his lab in Sheffield and at that point they were in the process of migrating their Matlab codebase into Python (the GPy project). I'm very glad to see that the project is succeeding and being used by other research institutions.

Serge Guelton and Pierrick Brunet presented “Pythran: Static Compilation of Parallel Scientific Kernels”. From their own documentation: “Pythran is a python to c++ compiler for a subset of the python language. It takes a python module annotated with a few interface description and turns it into a native python module with the same interface, but (hopefully) faster”. The project seems promising, although I do not have enough experience with it to judge the quality of their implementation.

Antoine Pitrou presented “Numba, a JIT compiler for fast numerical code”. I must say that I'm an avid user of Numba, so of course I was looking forward to this talk. One thing I didn't know is that support for CUDA is being implemented in Numba via the @cuda.jit decorator. From their website it looks like this is only available in the Numba Pro version (not free).

Kirill Smelkov presented wendelin.core, an approach to perform out-of-core computations with numpy. Slides can be found here.

Finally, Francesc Alted gave the final keynote on “New Trends In Storing And Analyzing Large Data Silos With Python”. Among the projects he mentioned, I found particularly interesting bcolz, his current main project, and DyND, a Python wrapper around a multi-dimensional array library.

My latest research paper[^1] deals with the estimation of the hemodynamic response function (HRF) from fMRI data.

This is an important topic, since knowledge of the hemodynamic response function is what makes it possible to extract the brain activation maps used in most of the impressive applications of machine learning to fMRI, such as (but not limited to) the reconstruction of visual images from brain activity[^2][^3] or the decoding of numbers[^4].

Besides the more traditional paper that describes the method, I've put online the code I used for the experiments. The code at this stage is far from perfect, but it should help in reproducing the results or improving the method. I've also put online an IPython notebook with the analysis of a small piece of data. I'm obviously glad to receive feedback, bug reports and patches for this code.

As part of the development of
memory_profiler I've tried
several ways to get the memory usage of a program from within Python. In this post
I'll describe the different alternatives I've tested.

The psutil library

psutil is a Python library that provides
an interface for retrieving information on running processes. It provides
convenient, fast and cross-platform functions to access the memory usage of a
Python process:

```python
import os
import psutil

def memory_usage_psutil():
    # return the memory usage in MB
    process = psutil.Process(os.getpid())
    mem = process.get_memory_info()[0] / float(2 ** 20)
    return mem
```

The above function returns the memory usage of the current Python process in
MiB. Depending on the platform it will choose the most accurate and fastest
way to get this information. For example, on Windows it will use the Win32
API while on Linux it will read from /proc, hiding the implementation
details and providing on each platform a fast and accurate measurement.

If you are looking for an easy way to get the memory consumption within Python,
this is, in my opinion, your best option.

The resource module

The resource module is part
of the standard Python library. It's basically a wrapper around
getrusage,
which is a POSIX standard, although some fields are still missing in
Linux. However, the ones we are
interested in seem to work fine on Ubuntu 10.04. You can get the memory usage
with this function:

```python
import sys
import resource

def memory_usage_resource():
    rusage_denom = 1024.
    if sys.platform == 'darwin':
        # ... it seems that in OSX the output is in different units ...
        rusage_denom = rusage_denom * rusage_denom
    mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / rusage_denom
    return mem
```

In my experience this approach is several times faster than the one based on
psutil, and it was the default way to get the memory usage in
memory_profiler from version 0.23 up to 0.26. I changed this behavior in
0.27 after a bug report by Philippe Gervais.
The problem with this approach is that it seems to report results that are
slightly different in some cases. Notably, it seems to differ when objects
have been recently released by the Python interpreter.

In the following example, orphaned arrays are released by the Python
interpreter, which is correctly seen by psutil but not by resource:

By the way, I would be delighted to be corrected if I'm doing something wrong,
or informed of a workaround if one exists (I've got the code to reproduce the
figures[^1]).

Querying ps directly

The method based on psutil works great but is not available by default on all
Python systems. Because of this, in memory_profiler we use as a last resort
something that's pretty ugly but works reasonably well when all else fails:
invoking the system's ps command and parsing the output. The code is
something like:
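
The snippet is missing from this post; the following is a sketch of what such a ps-based fallback can look like (my reconstruction, not necessarily the exact memory_profiler code):

```python
import os
import subprocess

def memory_usage_ps():
    # invoke ps and parse the RSS column for the current process
    out = subprocess.check_output(
        ['ps', 'v', '-p', str(os.getpid())]).decode().splitlines()
    rss_index = out[0].split().index('RSS')
    # RSS is reported in KiB; convert to MiB
    return float(out[1].split()[rss_index]) / 1024
```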

The main disadvantage of this approach is that it needs to fork a process for
each measurement. For tasks where you need to get the memory usage very fast,
such as line-by-line memory profiling, this can be a huge overhead.
For other tasks, such as getting information on long-running processes, where
the measurement happens anyway in a separate process, this is not too bad.

Benchmarks

Here is a benchmark of the different alternatives presented above. I am
plotting the time it takes each approach to make 100 measurements
of the memory usage (lower is better). As can be seen, the fastest one is
resource (although it suffers from the issues described above), followed
closely by psutil, which is in my opinion the best option if you can count on
it being installed on the host system, and trailed far behind by ps, which is
roughly a hundred times slower than psutil.

IPython notebook to reproduce the figures: html, ipynb

In this post I compare several implementations of
logistic regression. The task was to implement a logistic regression model
using standard optimization tools from scipy.optimize and compare
them against state-of-the-art implementations such as
LIBLINEAR.

In this blog post I'll write down all the implementation details of this
model, in the hope that not only the conclusions but also the process would be
useful for future comparisons and benchmarks.

Function evaluation

We consider the case in which the decision function is an affine function, i.e., $f(x) = \langle x, w \rangle + c$, where $w$ and $c$ are parameters to estimate. The loss function for $\ell_2$-regularized logistic regression, i.e. the
function to be minimized, is

$$
\mathcal{L}(w, \lambda, X, y) = - \frac{1}{n}\sum_{i=1}^n \log(\phi(y_i (\langle X_i, w \rangle + c))) + \frac{\lambda}{2} w^T w
$$

where $\phi(t) = 1 / (1 + \exp(-t))$ is the logistic
function, $\frac{\lambda}{2} w^T w$ is
the regularization term and $X, y$ is the input data, with $X \in
\mathbb{R}^{n \times p}$ and $y \in \{-1, 1\}^n$. However, this formulation is
not great from a practical standpoint. Even for perfectly plausible values of $t$
such as $t = -100$, $\exp(100)$ will overflow, assigning the loss an
(erroneous) value of $+\infty$. For this reason[^1], we evaluate
$\log(\phi(t))$ as

$$
\log(\phi(t)) =
\begin{cases}
- \log(1 + \exp(-t)) \text{ if } t > 0 \\
t - \log(1 + \exp(t)) \text{ if } t \leq 0\\
\end{cases}
$$

Similarly, the logistic function $\phi$ used here can be computed in a more
stable way using the formula

$$
\phi(t) = \begin{cases}
1 / (1 + \exp(-t)) \text{ if } t > 0 \\
\exp(t) / (1 + \exp(t)) \text{ if } t \leq 0\\
\end{cases}
$$

Finally, we will also need the Hessian for some second-order methods, which is given by

$$
\nabla_w ^2 \mathcal{L} = X^T D X + \lambda I
$$

where $I$ is the identity matrix and $D$ is a diagonal matrix given by $D_{ii} = \phi(y_i w^T X_i)(1 - \phi(y_i w^T X_i))$.

In Python, these functions can be written as

```python
import numpy as np


def phi(t):
    # logistic function, returns 1 / (1 + exp(-t))
    idx = t > 0
    out = np.empty(t.size, dtype=np.float64)
    out[idx] = 1. / (1 + np.exp(-t[idx]))
    exp_t = np.exp(t[~idx])
    out[~idx] = exp_t / (1. + exp_t)
    return out


def loss(x0, X, y, alpha):
    # logistic loss function, returns Sum{-log(phi(t))}
    w, c = x0[:X.shape[1]], x0[-1]
    z = X.dot(w) + c
    yz = y * z
    idx = yz > 0
    out = np.zeros_like(yz)
    out[idx] = np.log(1 + np.exp(-yz[idx]))
    out[~idx] = (-yz[~idx] + np.log(1 + np.exp(yz[~idx])))
    out = out.sum() / X.shape[0] + .5 * alpha * w.dot(w)
    return out


def gradient(x0, X, y, alpha):
    # gradient of the logistic loss
    w, c = x0[:X.shape[1]], x0[-1]
    z = X.dot(w) + c
    z = phi(y * z)
    z0 = (z - 1) * y
    grad_w = X.T.dot(z0) / X.shape[0] + alpha * w
    grad_c = z0.sum() / X.shape[0]
    return np.concatenate((grad_w, [grad_c]))
```
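
The Hessian formula given earlier is not part of that snippet; a sketch that mirrors the formula $X^T D X + \lambda I$ with $D_{ii} = \phi(y_i w^T X_i)(1 - \phi(y_i w^T X_i))$ could be:

```python
import numpy as np

def phi(t):
    # numerically stable logistic function (same as in the post)
    idx = t > 0
    out = np.empty(t.size, dtype=np.float64)
    out[idx] = 1. / (1 + np.exp(-t[idx]))
    exp_t = np.exp(t[~idx])
    out[~idx] = exp_t / (1. + exp_t)
    return out

def hessian(x0, X, y, alpha):
    # Hessian with respect to w: X^T D X + alpha * I, with
    # D_ii = phi(y_i (<X_i, w> + c)) * (1 - phi(y_i (<X_i, w> + c)))
    w, c = x0[:X.shape[1]], x0[-1]
    z = phi(y * (X.dot(w) + c))
    d = z * (1 - z)
    return X.T.dot(d[:, np.newaxis] * X) + alpha * np.eye(X.shape[1])
```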

Benchmarks

I tried several methods to estimate this $\ell_2$-regularized logistic regression model. There is
one first-order method (that is, one that only makes use of the gradient and not of
the Hessian), Conjugate
Gradient,
whereas all the others exploit some form of second-order information (quasi-Newton or Newton-type methods). The methods I tested are:

CG = Conjugate Gradient as implemented in scipy.optimize.fmin_cg

TNC = Truncated Newton as implemented in scipy.optimize.fmin_tnc

BFGS = Broyden–Fletcher–Goldfarb–Shanno method, as implemented in scipy.optimize.fmin_bfgs.

L-BFGS = Limited-memory BFGS as implemented in scipy.optimize.fmin_l_bfgs_b. Contrary to the BFGS algorithm, which is written in Python, this one wraps a C implementation.

Trust Region = Trust Region Newton method[^3]. This is the solver used by LIBLINEAR, which I've wrapped to accept any Python function in the package pytron.

To ensure the most accurate results across implementations, all timings were
collected by callback functions called by the algorithm on each
iteration. Finally, I plot the maximum absolute value of the gradient (i.e., the
infinity norm of the gradient) as a function of time.

The synthetic data used in the benchmarks was generated as described in [^2]:
the design matrix $X$ is Gaussian noise, the vector of
coefficients is also drawn from a Gaussian distribution and the target
variable $y$ is generated as $y = \text{sign}(X w)$. We then perturb the matrix
$X$ by adding Gaussian noise with covariance 0.8. The number of samples and features
was fixed to $10^4$ and $10^3$, respectively. The penalization parameter $\lambda$ was
fixed to 1.
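
The recipe above can be sketched in a few lines of NumPy, here scaled down for illustration (the original used $10^4$ samples and $10^3$ features, and the exact parameters of the benchmark code may differ):

```python
import numpy as np

rng = np.random.RandomState(0)
n_samples, n_features = 1000, 100
X = rng.randn(n_samples, n_features)         # Gaussian design matrix
w_true = rng.randn(n_features)               # Gaussian coefficients
y = np.sign(X.dot(w_true))                   # y = sign(X w)
X += 0.8 * rng.randn(n_samples, n_features)  # perturb the design
```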

In this setting variables are typically uncorrelated and most solvers perform
decently:

Here, the Trust Region and L-BFGS solvers perform almost equally well, with
Conjugate Gradient and Truncated Newton falling shortly behind. I was surprised
by the difference between BFGS and L-BFGS; I would have thought that when memory is not an issue both algorithms should perform similarly.

To make things more interesting, we now make the design slightly more
correlated. We do so by adding a constant term of 1 to the matrix $X$ and
also appending a column vector of ones to this matrix to account for the intercept. These are the results:

Here, we already see that second-order methods dominate over first-order
methods (well, except for BFGS), with Trust Region clearly dominating the
picture but with TNC not far behind.

Finally, if we force the matrix to be even more correlated (we add 10. to the
design matrix $X$), then we have:

Here, the Trust Region method has the same timing as before, but all other
methods have become substantially worse. The Trust Region
method, unlike the others, is surprisingly robust to correlated designs.

To sum up, the Trust Region method performs extremely well when optimizing the
logistic regression model under different conditionings of the design matrix.
The LIBLINEAR software uses
this solver and thus has similar performance, with the sole difference that the
evaluation of the logistic function and its derivatives is done in C++ instead
of Python. In practice, however, due to the small number of iterations of this
solver, I haven't seen any significant difference.

A similar development can be found in the source code of LIBLINEAR, and is probably also used elsewhere.

"A comparison of numerical optimizers for logistic regression", T. Minka.

"Newton's Method for Large Bound-Constrained Optimization Problems", Chih-Jen Lin, Jorge J. Moré.

TL;DR: I've implemented a logistic ordinal regression, or
proportional odds, model. Here is the Python code.

The logistic ordinal regression model, also known as the
proportional odds model, was introduced in the early 80s by McCullagh [^1][^2]
and is a generalized linear model specially tailored to the case of
predicting ordinal variables, that is, variables that are discrete (as
in classification) but which can be ordered (as in regression). It can
be seen as an extension of the logistic regression model to the
ordinal setting.

Let $X \in \mathbb{R}^{n \times p}$ be the input data and $y \in
\mathbb{N}^n$ the target values. For simplicity we assume $y$ is a
non-decreasing vector, that is, $y_1 \leq y_2 \leq ...$. Just as
logistic regression models the posterior probability $P(y=j|X_i)$ with the
logistic function, in logistic ordinal regression we model the
cumulative probability with the logistic function. That is,

$$
P(y \leq j | X_i) = \phi(\theta_j - w^T X_i) = \frac{1}{1 + \exp(w^T X_i - \theta_j)}
$$

where $w, \theta$ are vectors to be estimated from the data and $\phi$
is the logistic function defined as $\phi(t) = 1 / (1 + \exp(-t))$.

Compared to multiclass logistic regression, we have added the
constraint that the hyperplanes separating the different classes are
parallel for all classes, that is, the vector $w$ is common across
classes. To decide which class $X_i$ will be assigned to, we make use
of the vector of thresholds $\theta$. If there are $K$ different
classes, $\theta$ is a non-decreasing vector (that is, $\theta_1 \leq
\theta_2 \leq ... \leq \theta_{K-1}$) of size $K-1$. We then
assign the class $j$ if the prediction $w^T X$ (recall that it's a
linear model) lies in the interval $[\theta_{j-1}, \theta_{j}[$. In
order to keep the same definition for the extremal classes, we define
$\theta_{0} = - \infty$ and $\theta_K = + \infty$.
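
This threshold rule is easy to express in code; a stdlib-only sketch with a hypothetical helper (predict_class, returning 0-indexed classes):

```python
import bisect

def predict_class(score, theta):
    # theta: non-decreasing list of the K-1 thresholds; the number of
    # thresholds <= score is exactly the (0-indexed) class of the
    # interval [theta_{j-1}, theta_j[ that contains score
    return bisect.bisect_right(theta, score)

# with thresholds [0.0, 1.0] (K = 3 classes):
predict_class(-0.5, [0.0, 1.0])  # class 0
predict_class(0.5, [0.0, 1.0])   # class 1
predict_class(2.0, [0.0, 1.0])   # class 2
```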

The intuition is that we are seeking a vector $w$ such that $X w$
produces a set of values that are well separated into the different
classes by the different thresholds $\theta$. We choose a logistic
function to model the probability $P(y \leq j|X_i)$ but other choices
are possible. In the proportional hazards model ^{1} the probability
is modeled as $-\log(1 - P(y \leq j | X_i)) = \exp(\theta_j - w^T
X_i)$. Other link functions are possible, where the link function
satisfies $\text{link}(P(y \leq j | X_i)) = \theta_j - w^T X_i$. Under
this framework, the logistic ordinal regression model has a logistic
link function and the proportional hazards model has a log-log link
function.

The logistic ordinal regression model is also known as the
proportional odds model, because the
ratio of corresponding odds
for two different samples $X_1$ and $X_2$ is $\exp(w^T(X_1 - X_2))$ and
so does not depend on the class $j$ but only on the difference between
the samples $X_1$ and $X_2$.
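
The computation behind this claim is short. Writing the odds of exceeding level $j$ under the cumulative model $P(y \leq j | X) = \phi(\theta_j - w^T X)$, we get

$$
\frac{P(y > j | X_1) / P(y \leq j | X_1)}{P(y > j | X_2) / P(y \leq j | X_2)}
= \frac{\exp(w^T X_1 - \theta_j)}{\exp(w^T X_2 - \theta_j)}
= \exp(w^T (X_1 - X_2)),
$$

which is indeed independent of $j$.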

Optimization

Model estimation can be posed as an optimization problem. Here, we
minimize the loss function for the model, defined as minus the
log-likelihood:

$$
\mathcal{L}(w, \theta) = - \sum_{i=1}^n \log(\phi(\theta_{y_i} - w^T X_i) - \phi(\theta_{y_i - 1} - w^T X_i))
$$

In this sum all terms are convex in $w$, thus the loss function is
convex in $w$. It might also be jointly convex in $w$ and
$\theta$, although I haven't checked. I use the function
fmin_slsqp in scipy.optimize to optimize
$\mathcal{L}$ under the constraint that $\theta$ is a non-decreasing
vector. There might be better options, I don't know. If you do know,
please leave a comment!

Using the formula $\log(\phi(t))^\prime = (1 - \phi(t))$, the gradient of the loss function can be computed in closed form.

I've implemented a Python version of this algorithm using SciPy's
optimize.fmin_slsqp function. This takes as arguments the
loss function, the gradient described above, and a function that is
> 0 when the inequalities on $\theta$ are satisfied.

The code
can be found here as part of the minirank package, which
is my sandbox for code related to ranking and ordinal regression. At
some point I would like to submit it to scikit-learn, but right now
I don't know how the code will scale to medium-sized problems, and I
suspect not great. On top of that, I'm not sure there is real demand
for these models in scikit-learn, and I don't want to bloat the package
with unused features.

Performance

I compared the prediction accuracy of this model, in the sense of mean absolute
error (IPython
notebook), on the Boston
house-prices dataset. To obtain an ordinal variable, I
rounded the values to the closest integer, which gave me a problem of
size 506 $\times$ 13 with 46 different target values. Although not a
huge increase in accuracy, this model did give me better results on
this particular dataset:

Here, ordinal logistic regression is the best-performing model,
followed by a Linear Regression model and a One-versus-All Logistic
regression model as implemented in scikit-learn.

"Regression models for ordinal data", P. McCullagh, Journal of
the royal statistical society. Series B (Methodological), 1980 ↩↩

"Generalized Linear Models", P. McCullagh and J. A. Nelder (Book) ↩

"Loss Functions for Preference Levels : Regression with Discrete
Ordered Labels", Jason D. M. Rennie, Nathan Srebro ↩