<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel><title>Keep the gradient flowing</title><link>http://fa.bianp.net/</link><description></description><lastBuildDate>Sat, 04 May 2024 00:00:00 +0200</lastBuildDate><item><title>On the Link Between Optimization and Polynomials, Part 6.</title><link>http://fa.bianp.net/blog/2024/unrolling/</link><description>
    &lt;p&gt;
      Differentiating through optimization is a fundamental problem in hyperparameter optimization, dataset distillation, meta-learning and optimization as a layer, to name a few. In this blog post we'll look into one of the main approaches to differentiate through optimization: unrolled differentiation. With the help of polynomials, we'll be able to derive …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 04 May 2024 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2024-05-04:/blog/2024/unrolling/</guid><category>optimization</category><category>polynomials</category><category>bi-level</category></item><item><title>Optimization Nuggets: Stochastic Polyak Step-size, Part 2</title><link>http://fa.bianp.net/blog/2023/sps2/</link><description>

  &lt;p&gt;
    This blog post discusses the convergence rate of the Stochastic Gradient Descent with Stochastic Polyak Step-size (SGD-SPS) algorithm for minimizing a finite sum objective. Building upon the proof of the previous post, we show that the convergence rate can be improved to O(1/t) under the additional assumption that …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa and &lt;a href='https://fabian-sp.github.io/'&gt;Fabian Schaipp&lt;/a&gt;</dc:creator><pubDate>Sun, 19 Nov 2023 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2023-11-19:/blog/2023/sps2/</guid><category>optimization</category><category>SGD</category><category>proofs</category></item><item><title>Optimization Nuggets: Stochastic Polyak Step-size</title><link>http://fa.bianp.net/blog/2023/sps/</link><description>

  &lt;p&gt;
    The stochastic Polyak step-size (SPS) is a practical variant of the Polyak step-size for stochastic optimization. In this blog post, we'll discuss the algorithm and provide a simple analysis for convex objectives with bounded gradients.
  &lt;/p&gt;
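The update analyzed in the post can be sketched in a few lines of NumPy. This is an illustrative sketch, not the post's exact code: the helper names `loss_fn`/`grad_fn` are mine, and it assumes the interpolation setting where each $f_i$ has minimum value zero, with a cap on the step-size as in common presentations of SPS.

```python
import numpy as np

def sps_step(x, loss_fn, grad_fn, i, gamma_max=1.0):
    """One SGD step with the stochastic Polyak step-size (SPS).

    Assumes the interpolation setting, min over x of f_i(x) = 0, so the
    step-size reads gamma = f_i(x) / ||grad f_i(x)||^2, capped at
    gamma_max for robustness when the gradient is tiny.
    """
    g = grad_fn(x, i)
    gamma = min(gamma_max, loss_fn(x, i) / (np.dot(g, g) + 1e-12))
    return x - gamma * g
```

On an interpolating least-squares problem, $f_i(x) = \tfrac{1}{2}(a_i^T x - b_i)^2$ with consistent data, this step reduces to a relaxed Kaczmarz update.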


  &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
  &lt;/script&gt;

&lt;script&gt;
  MathJax = {
    tex: {
      inlineMath: [['$', '$'], ['\\(', '\\)']],
      tags: 'ams'
    },
    svg: {
      fontCache: 'global'
    }
  };
  &lt;/script&gt; 
  &lt;script type="text/javascript" id="MathJax-script" async src="/node_modules/mathjax3/es5/tex-svg.js"&gt;
&lt;/script&gt;

  &lt;script type="text/javascript" src="/theme/js/refs_v1.js"&gt;&lt;/script&gt;
  &lt;script type="text/javascript"&gt;
    const bibtex = `
@inproceedings{gunasekar2018characterizing,
  title={Characterizing implicit …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 29 Sep 2023 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2023-09-29:/blog/2023/sps/</guid><category>optimization</category><category>SGD</category><category>proofs</category></item><item><title>On the Convergence of the Unadjusted Langevin Algorithm</title><link>http://fa.bianp.net/blog/2023/ulaq/</link><description>

  &lt;p&gt;
    The Langevin algorithm is a simple and powerful method to sample from a probability distribution. It's a key ingredient
    of some machine learning methods such as diffusion models and differentially private
    learning.
    In this post, I'll derive a simple convergence analysis of this method in the special case when the …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">&lt;a href='http://fa.bianp.net/pages/about.html'&gt;Fabian Pedregosa&lt;/a&gt;</dc:creator><pubDate>Wed, 14 Jun 2023 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2023-06-14:/blog/2023/ulaq/</guid><category>sampling</category><category>Langevin</category><category>diffusion</category></item><item><title>The Russian Roulette: An Unbiased Estimator of the Limit</title><link>http://fa.bianp.net/blog/2022/russian-roulette/</link><description>
  &lt;blockquote class="pullquote" style="margin-left: 20px"&gt;
    &lt;p&gt;
      &lt;q&gt;The idea for what was later called Monte Carlo method occurred to me when I was playing solitaire during my illness.&lt;/q&gt; 
  &lt;/p&gt;
  &lt;p style="text-align: right;"&gt;
    Stanislaw Ulam, &lt;i&gt;&lt;a href="https://www.goodreads.com/book/show/423246.Adventures_of_a_Mathematician"&gt;Adventures of a Mathematician&lt;/a&gt;&lt;/i&gt;
  &lt;/p&gt;
  &lt;/blockquote&gt;

  &lt;p&gt;
    The Russian Roulette offers a simple way to construct an unbiased estimator for the limit of a sequence. It allows for example to …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 15 Oct 2022 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2022-10-15:/blog/2022/russian-roulette/</guid><category>statistics</category><category>nuggets</category><category>unbiased</category></item><item><title>Notes on the Frank-Wolfe Algorithm, Part III: backtracking line-search</title><link>http://fa.bianp.net/blog/2022/adaptive_fw/</link><description>

&lt;p&gt;
    Backtracking step-size strategies (also known as adaptive step-size or approximate line-search) that set the step-size based on a sufficient decrease condition are the standard way to set the step-size on gradient descent and quasi-Newton methods. However, these techniques are much less common for &lt;a href="/blog/2018/notes-on-the-frank-wolfe-algorithm-part-i/"&gt;Frank-Wolfe-like&lt;/a&gt; algorithms. In this blog post I …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 26 Aug 2022 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2022-08-26:/blog/2022/adaptive_fw/</guid><category>optimization</category><category>Frank-Wolfe</category><category>backtracking</category><category>line-search</category><category>step-size</category><category>adaptive</category></item><item><title>On the Link Between Optimization and Polynomials, Part 5</title><link>http://fa.bianp.net/blog/2022/cyclical/</link><description>
  &lt;blockquote class="pullquote" style="margin-left: 20px"&gt;
    &lt;p&gt;
      &lt;br&gt;&lt;i&gt;
        &lt;b style="font-style: normal;"&gt;Six&lt;/b&gt;: All of this has happened before. &lt;br&gt;
        &lt;b style="font-style: normal;"&gt;Baltar&lt;/b&gt;: But the question remains, does all of this have to happen again?&lt;br&gt;
        &lt;b style="font-style: normal;"&gt;Six&lt;/b&gt;: This time I bet no.&lt;br&gt;
        &lt;b style="font-style: normal;"&gt;Baltar&lt;/b&gt;: You know, I've never known you to play the optimist. Why the change of heart?&lt;br&gt;
        &lt;b style="font-style: normal;"&gt;Six&lt;/b&gt;: Mathematics. Law of averages. Let a complex …&lt;/i&gt;&lt;/p&gt;&lt;/blockquote&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">&lt;a href='https://scholar.google.com/citations?user=93PAG2AAAAAJ&amp;hl=en'&gt;Baptiste Goujaud&lt;/a&gt; and &lt;a href='http://fa.bianp.net/pages/about.html'&gt;Fabian Pedregosa&lt;/a&gt;</dc:creator><pubDate>Fri, 27 May 2022 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2022-05-27:/blog/2022/cyclical/</guid><category>optimization</category><category>polynomials</category><category>acceleration</category></item><item><title>Optimization Nuggets: Implicit Bias of Gradient-based Methods</title><link>http://fa.bianp.net/blog/2022/implicit-bias-regression/</link><description>

    &lt;p&gt;
      When an optimization problem has multiple global minima, different algorithms can find different solutions, a phenomenon often referred to as the &lt;i&gt;implicit bias&lt;/i&gt; of optimization algorithms. In this post we'll characterize the implicit bias of gradient-based methods on a class of regression problems that includes linear least squares and Huber …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 10 Jan 2022 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2022-01-10:/blog/2022/implicit-bias-regression/</guid><category>learning theory</category><category>implicit bias</category><category>proofs</category><category>nuggets</category></item><item><title>Optimization Nuggets: Exponential Convergence of SGD</title><link>http://fa.bianp.net/blog/2021/exponential-sgd/</link><description>

    &lt;p&gt;
        This is the first of a series of blog posts on short and beautiful proofs in optimization (let me know what you think in the comments!). To kick off the series, I'll show that stochastic gradient descent (SGD) converges exponentially fast to a neighborhood of the solution.
    &lt;/p&gt;
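The phenomenon is easy to reproduce numerically. Below is a small illustrative simulation on a toy one-dimensional finite sum (my own example, not the post's exact setting): constant step-size SGD shrinks the error exponentially at first, then plateaus at a noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=50)              # f_i(x) = 0.5 * (x - b_i)^2
x_star = b.mean()                    # minimizer of the average loss

def run_sgd(x0, step, n_iters, seed=1):
    """Constant step-size SGD; returns squared distance to the optimum."""
    rng = np.random.default_rng(seed)
    x, dist = x0, []
    for _ in range(n_iters):
        i = rng.integers(len(b))
        x = x - step * (x - b[i])    # stochastic gradient of f_i at x
        dist.append((x - x_star) ** 2)
    return np.array(dist)

dist = run_sgd(x0=10.0, step=0.1, n_iters=2000)
# fast initial decrease, then a plateau at a noise floor that scales
# with the step-size times the gradient variance at the optimum
```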

  
    &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
    &lt;/script&gt;
  
    &lt;script type="text/x-mathjax-config"&gt;
        MathJax …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 15 Dec 2021 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2021-12-15:/blog/2021/exponential-sgd/</guid><category>optimization</category><category>SGD</category><category>proofs</category><category>nuggets</category></item><item><title>On the Link Between Optimization and Polynomials, Part 4</title><link>http://fa.bianp.net/blog/2021/no-momentum/</link><description>
  &lt;p&gt;
    While the most common accelerated methods like Polyak and Nesterov incorporate a momentum term, a little-known fact is that simple gradient descent &amp;ndash;no momentum&amp;ndash; can achieve the same rate
    through only a well-chosen sequence of step-sizes. In this post we'll derive this method and through simulations discuss its practical …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 13 Apr 2021 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2021-04-13:/blog/2021/no-momentum/</guid><category>optimization</category><category>polynomials</category><category>acceleration</category></item><item><title>On the Link Between Optimization and Polynomials, Part 3</title><link>http://fa.bianp.net/blog/2021/hitchhiker/</link><description>
  &lt;blockquote class="pullquote"&gt;
    &lt;p&gt;
      &lt;q&gt;&lt;i&gt;I've seen things you people wouldn't believe. &lt;br&gt;
      Valleys sculpted by trigonometric functions. &lt;br&gt;
      Rates on fire off the shoulder of divergence. &lt;br&gt;
      Beams glitter in the dark near the Polyak gate. &lt;br&gt;
      All those landscapes will be lost in time, like tears in rain. &lt;br&gt;Time to halt.&lt;/i&gt;&lt;/q&gt; &lt;br&gt;
  &lt;/p&gt;
  &lt;p style="text-align: right;"&gt;
    A momentum optimizer &lt;a href="https://en.wikipedia.org/wiki/Tears_in_rain_monologue"&gt;*&lt;/a&gt;
  &lt;/p&gt;
  &lt;/blockquote&gt;
  &lt;figure class="fullwidth"&gt;
    &lt;img style="max-width: 1000px;" src="/images/2021/rate_convergence_momentum.png" alt=""&gt;
  &lt;/figure&gt;

  &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
  &lt;/script&gt;

  &lt;script type="text/x-mathjax-config"&gt;
  MathJax.Hub.Config …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 02 Mar 2021 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2021-03-02:/blog/2021/hitchhiker/</guid><category>optimization</category><category>polynomials</category><category>acceleration</category><category>momentum</category><category>Chebyshev</category></item><item><title>On the Link Between Optimization and Polynomials, Part 2</title><link>http://fa.bianp.net/blog/2020/momentum/</link><description>
  &lt;p&gt;
    We can tighten the analysis of gradient descent with momentum through a combination of Chebyshev polynomials of the first and second kind. Following this connection, we'll derive one of the most iconic methods in optimization: Polyak momentum.
  &lt;/p&gt;
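For reference, the heavy-ball update this refers to, written in standard notation with step-size $\gamma$ and momentum parameter $m$ (textbook form, not necessarily the post's exact notation):

```latex
% Polyak momentum (heavy ball): a gradient step plus an inertia term
x_{t+1} = x_t - \gamma \nabla f(x_t) + m \, (x_t - x_{t-1})
```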


  &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
  &lt;/script&gt;

  &lt;script type="text/x-mathjax-config"&gt;
    MathJax.Hub.Config({
      extensions: ["tex2jax.js"],
      jax: ["input/TeX", "output/HTML-CSS"],
      tex2jax: {
        inlineMath …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 21 Dec 2020 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2020-12-21:/blog/2020/momentum/</guid><category>optimization</category><category>polynomials</category><category>acceleration</category><category>momentum</category><category>Chebyshev</category></item><item><title>On the Link Between Polynomials and Optimization, Part 1</title><link>http://fa.bianp.net/blog/2020/polyopt/</link><description>

  &lt;p&gt;
    There's a fascinating link between minimization of quadratic functions and polynomials. A link
    that goes
    deep and allows us to phrase optimization problems in the language of polynomials and vice versa.
    Using this connection, we can tap into centuries of research in the theory of polynomials and
    shed new light on …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 07 Apr 2020 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2020-04-07:/blog/2020/polyopt/</guid><category>optimization</category><category>polynomials</category><category>acceleration</category><category>Chebyshev</category></item><item><title>How to Evaluate the Logistic Loss and not NaN trying</title><link>http://fa.bianp.net/blog/2019/evaluate_logistic/</link><description>

&lt;p&gt;A naive implementation of the logistic regression loss can result in numerical indeterminacy even for moderate values. This post takes a closer look into the source of these instabilities and discusses more robust Python implementations.
&lt;/p&gt;
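A minimal sketch of the kind of fix discussed (the function name is mine, and NumPy is assumed): evaluate $\log(1 + e^{-z})$ through np.logaddexp instead of composing exp and log directly.

```python
import numpy as np

def logistic_loss(z):
    """Stable evaluation of log(1 + exp(-z)).

    The naive np.log(1 + np.exp(-z)) overflows for large negative z,
    while np.logaddexp(0, -z) computes log(exp(0) + exp(-z)) by
    factoring out the largest exponent, avoiding the overflow.
    """
    return np.logaddexp(0.0, -z)
```

For instance, logistic_loss(-1000.0) returns 1000.0, whereas the naive expression overflows to inf.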


&lt;!-- for highlighting --&gt;
&lt;link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.5/styles/default.min.css"&gt;
&lt;script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.15.5/highlight.min.js"&gt;&lt;/script&gt;
&lt;script&gt;hljs.initHighlightingOnLoad();&lt;/script&gt;

&lt;!-- Mathjax--&gt;
&lt;script type="text/x-mathjax-config"&gt;
  MathJax.Hub.Config({
    extensions: ["tex2jax.js"],
    jax: ["input/TeX", "output/HTML-CSS"],
    tex2jax: {
      inlineMath …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">&lt;a href='http://fa.bianp.net/pages/about.html'&gt;Fabian Pedregosa&lt;/a&gt;</dc:creator><pubDate>Fri, 27 Sep 2019 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2019-09-27:/blog/2019/evaluate_logistic/</guid><category>coding</category><category>logistic regression</category><category>numerical stability</category></item><item><title>Notes on the Frank-Wolfe Algorithm, Part II: A Primal-dual Analysis</title><link>http://fa.bianp.net/blog/2018/fw2/</link><description>
  &lt;p&gt;This blog post extends the convergence theory from the &lt;a href="/blog/2018/notes-on-the-frank-wolfe-algorithm-part-i/"&gt;first part of these notes&lt;/a&gt; on the
    Frank-Wolfe (FW) algorithm with guarantees on the primal-dual gap that generalize
    and strengthen those obtained in the first part. &lt;/p&gt;
  &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
  &lt;/script&gt;

  &lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 17 Nov 2018 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2018-11-17:/blog/2018/fw2/</guid><category>optimization</category><category>Frank-Wolfe</category><category>conditional gradient</category><category>convergence analysis</category></item><item><title>Three Operator Splitting</title><link>http://fa.bianp.net/blog/2018/tos/</link><description>I discuss a recently proposed optimization algorithm: the Davis-Yin three operator splitting.</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 06 Sep 2018 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2018-09-06:/blog/2018/tos/</guid><category>optimization</category><category>proximal splitting</category><category>three operator splitting</category><category>convergence analysis</category></item><item><title>Notes on the Frank-Wolfe Algorithm, Part I</title><link>http://fa.bianp.net/blog/2018/notes-on-the-frank-wolfe-algorithm-part-i/</link><description>
&lt;p&gt;This blog post is the first in a series discussing different theoretical and practical aspects of the Frank-Wolfe algorithm.&lt;/p&gt;


&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
    equationNumbers: { autoNumber: "AMS" },
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;
&lt;script type="text/javascript" async src="/node_modules/mathjax2/MathJax.js?config=TeX-AMS-MML_HTMLorMML"&gt;
&lt;/script&gt;


&lt;!-- for highlighting --&gt;
&lt;link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/default.min.css"&gt;
&lt;script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/highlight.min.js"&gt;&lt;/script&gt;
&lt;script&gt;hljs.initHighlightingOnLoad();&lt;/script&gt;


&lt;div style="display: none"&gt;
  $$
   \def\xx{\boldsymbol x}
   \def\yy{\boldsymbol y}
   \def\ss{\boldsymbol s}
   \def\dd …&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 21 Mar 2018 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2018-03-21:/blog/2018/notes-on-the-frank-wolfe-algorithm-part-i/</guid><category>optimization</category><category>Frank-Wolfe</category><category>conditional gradient</category><category>convergence analysis</category></item><item><title>Optimization inequalities cheatsheet</title><link>http://fa.bianp.net/blog/2017/optimization-inequalities-cheatsheet/</link><description>
  &lt;p&gt;Most proofs in optimization consist of applying inequalities for a particular function class in some creative way.
    This is a cheatsheet with the inequalities that I use most often. It considers the classes of functions that are convex,
    strongly convex and $L$-smooth.
  &lt;/p&gt;
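Three of the most commonly used ones, in standard notation for differentiable $f$ and all $x, y$ (these are the textbook statements, not necessarily the cheatsheet's exact formulation):

```latex
% convexity: f lies above its tangent planes
f(y) \geq f(x) + \langle \nabla f(x),\, y - x \rangle
% \mu-strong convexity: a quadratic lower bound
f(y) \geq f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{\mu}{2}\|y - x\|^2
% L-smoothness: the matching quadratic upper bound
f(y) \leq f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{L}{2}\|y - x\|^2
```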

  &lt;script type="text/javascript" src="/theme/js/bibtexParse.js"&gt;
  &lt;/script&gt;

  &lt;script type="text/x-mathjax-config"&gt;
      MathJax.Hub.Config({
        extensions: ["tex2jax.js"],
        jax: ["input/TeX …&lt;/script&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 11 Jan 2017 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2017-01-11:/blog/2017/optimization-inequalities-cheatsheet/</guid><category>optimization</category><category>optimization</category><category>cheatsheet</category></item><item><title>A fully asynchronous variant of the SAGA algorithm</title><link>http://fa.bianp.net/blog/2016/a-fully-asynchronous-variant-of-the-saga-algorithm/</link><description>&lt;p&gt;My friend &lt;a href="http://www.di.ens.fr/~rleblond/"&gt;Rémi Leblond&lt;/a&gt; has recently uploaded to ArXiv &lt;a href="https://arxiv.org/abs/1606.04809"&gt;our preprint on an asynchronous version of the SAGA optimization algorithm&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main contribution is to develop a parallel (fully asynchronous, no locks) variant of the &lt;a href="http://papers.nips.cc/paper/5258-saga-a-fast-incremental-gradient-method-with-support-for-non-strongly-convex-composite-objectives.pdf"&gt;SAGA algorithm&lt;/a&gt;. This is a stochastic variance-reduced method for general optimization, specially adapted for problems …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 12 Oct 2016 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2016-10-12:/blog/2016/a-fully-asynchronous-variant-of-the-saga-algorithm/</guid><category>optimization</category><category>optimization</category><category>asynchronous</category><category>SAGA</category></item><item><title>Hyperparameter optimization with approximate gradient</title><link>http://fa.bianp.net/blog/2016/hyperparameter-optimization-with-approximate-gradient/</link><description>

      &lt;script type="text/x-mathjax-config"&gt;
        MathJax.Hub.Config({
          extensions: ["tex2jax.js"],
          jax: ["input/TeX", "output/HTML-CSS"],
          tex2jax: {
            inlineMath: [ ['$','$'], ["\\(","\\)"] ],
            displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
            processEscapes: true
          },
            TeX: {
              equationNumbers: { autoNumber: "AMS" },
              Macros: {
                RR: "{\\mathbb{R}}",
                argmin: "{\\mathop{\\mathrm{arg\\,min}}}",
                bold: ["{\\bf #1}",1]
              }
            },
          "HTML-CSS": { availableFonts: ["TeX"] }
        });
      &lt;/script&gt;

      &lt;script type="text/javascript" async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_CHTML"&gt;
      &lt;/script&gt;      

    &lt;p&gt;&lt;b&gt;TL;DR:&lt;/b&gt; I describe a &lt;a href="http://arxiv.org/abs/1602.02355"&gt;method for hyperparameter optimization&lt;/a&gt; by gradient descent.&lt;/p&gt;
    &lt;p&gt;Most machine …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 25 May 2016 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2016-05-25:/blog/2016/hyperparameter-optimization-with-approximate-gradient/</guid><category>optimization</category><category>machine learning</category><category>hyperparameters</category><category>HOAG</category></item><item><title>Lightning v0.1</title><link>http://fa.bianp.net/blog/2016/lightning-v01/</link><description>&lt;p&gt;Announce: first public release of &lt;a href="http://contrib.scikit-learn.org/lightning/"&gt;lightning&lt;/a&gt;!, a library for large-scale linear classification, regression and ranking in Python. The library was started a couple of years ago by &lt;a href="http://mblondel.org"&gt;Mathieu Blondel&lt;/a&gt; who also contributed the vast majority of source code. I joined recently its development and decided it was about time for …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 25 Mar 2016 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2016-03-25:/blog/2016/lightning-v01/</guid><category>software</category><category>Python</category><category>scikit-learn</category><category>machine learning</category><category>lightning</category></item><item><title>scikit-learn-contrib, an umbrella for scikit-learn related projects.</title><link>http://fa.bianp.net/blog/2016/scikit-learn-contrib-an-umbrella-for-scikit-learn-related-projects/</link><description>&lt;p&gt;Together with other scikit-learn developers we've created an umbrella organization for scikit-learn-related projects named &lt;a href="https://github.com/scikit-learn-contrib"&gt;scikit-learn-contrib&lt;/a&gt;. 
The idea is for this organization to host projects that are deemed too specific or too experimental to be included in the scikit-learn codebase but still offer an API which is compatible with scikit-learn and …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 06 Mar 2016 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2016-03-06:/blog/2016/scikit-learn-contrib-an-umbrella-for-scikit-learn-related-projects/</guid><category>software</category><category>Python</category><category>scikit-learn</category><category>machine learning</category><category>lightning</category></item><item><title>SAGA algorithm in the lightning library</title><link>http://fa.bianp.net/blog/2016/saga-algorithm-in-the-lightning-library/</link><description>&lt;p&gt;Recently I've implemented, together with &lt;a href="http://arachez.com/"&gt;Arnaud Rachez&lt;/a&gt;, the SAGA[&lt;sup id="fnref:1"&gt;&lt;a class="footnote-ref" href="#fn:1" rel="footnote"&gt;1&lt;/a&gt;&lt;/sup&gt;] algorithm in the &lt;a href="http://contrib.scikit-learn.org/lightning/"&gt;lightning&lt;/a&gt; machine learning library (which by the way, has been recently moved to the new &lt;a href="https://github.com/scikit-learn-contrib"&gt;scikit-learn-contrib&lt;/a&gt; project). The lightning library uses the same API as scikit-learn but is particularly adapted to online learning. 
As for the SAGA …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 22 Feb 2016 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2016-02-22:/blog/2016/saga-algorithm-in-the-lightning-library/</guid><category>misc</category><category>Python</category><category>scikit-learn</category><category>machine learning</category><category>lightning</category></item><item><title>On the consistency of ordinal regression methods</title><link>http://fa.bianp.net/blog/2015/on-the-consistency-of-ordinal-regression-methods/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;My latest work (with &lt;a href="http://www.di.ens.fr/~fbach/"&gt;Francis Bach&lt;/a&gt; and &lt;a href="http://alexandre.gramfort.net/"&gt;Alexandre Gramfort&lt;/a&gt;) is on the consistency of ordinal regression methods. It has the wildly imaginative …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 09 Oct 2015 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2015-10-09:/blog/2015/on-the-consistency-of-ordinal-regression-methods/</guid><category>learning theory</category><category>consistency</category><category>machine learning</category></item><item><title>Holdout cross-validation generator</title><link>http://fa.bianp.net/blog/2015/holdout-cross-validation-generator/</link><description>&lt;p&gt;&lt;a href="http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators"&gt;Cross-validation iterators&lt;/a&gt; in scikit-learn are simply generator objects, that is, Python objects that implement the &lt;code&gt;__iter__&lt;/code&gt; method and that for each call to this method return (or more precisely, &lt;code&gt;yield&lt;/code&gt;) the indices or a boolean mask for the train and test set. Hence, implementing new cross-validation iterators that behave as …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 20 Aug 2015 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2015-08-20:/blog/2015/holdout-cross-validation-generator/</guid><category>misc</category><category>Python</category><category>scikit-learn</category><category>machine learning</category><category>model selection</category></item><item><title>IPython/Jupyter notebook gallery</title><link>http://fa.bianp.net/blog/2015/ipythonjupyter-notebook-gallery/</link><description>&lt;p style="font-weight: bold; color:red"&gt;Due to lack of time and interest, I'm no longer maintaining this project.
Feel free to grab the sources from &lt;a href="https://github.com/fabianp/nbgallery"&gt;https://github.com/fabianp/nbgallery&lt;/a&gt; and fork the project. &lt;/p&gt;

&lt;p&gt;TL;DR I created a gallery for IPython/Jupyter notebooks. &lt;a href="http://nb.bianp.net"&gt;Check it out :-)&lt;/a&gt;&lt;/p&gt;
&lt;div style="text-align: center"&gt;
&lt;img alt="Notebook gallery" width="600px" src="http://fa.bianp.net/uploads/2015/screenshot_nbgallery.png" /&gt;
&lt;/div&gt;

&lt;p&gt;A couple of months ago I put online …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 21 Apr 2015 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2015-04-21:/blog/2015/ipythonjupyter-notebook-gallery/</guid><category>misc</category><category>Python</category><category>Jupyter</category></item><item><title>PyData Paris - April 2015</title><link>http://fa.bianp.net/blog/2015/pydata-paris-april-2015/</link><description>&lt;p&gt;Last Friday was &lt;a href="http://pydataparis.joinux.org/schedule.html"&gt;PyData Paris&lt;/a&gt;, in words of the organizers, ''a gathering of users and developers of data analysis tools in Python''. &lt;/p&gt;
&lt;p&gt;&lt;img width="600px" src="http://pydataparis.joinux.org/static/images/PyDataLogoBig-Paris2015.png" /&gt;&lt;/p&gt;
&lt;p&gt;The organizers did a great job putting it together, and the event started with a full room already for &lt;a href="http://gael-varoquaux.info/"&gt;Gael's&lt;/a&gt; keynote.&lt;/p&gt;
&lt;div style="text-align: center"&gt;
&lt;img width="600px" alt="Gael's keynote" src="https://pbs.twimg.com/media/CBplCb_WIAEzytd.jpg" /&gt;&lt;/div&gt;

&lt;p&gt;My take-away message from the talks is …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 07 Apr 2015 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2015-04-07:/blog/2015/pydata-paris-april-2015/</guid><category>misc</category><category>Python</category><category>Paris</category><category>NumPy</category><category>Numba</category></item><item><title>Data-driven hemodynamic response function estimation</title><link>http://fa.bianp.net/blog/2014/data-driven-hemodynamic-response-function-estimation/</link><description>&lt;p&gt;My &lt;a href="http://www.sciencedirect.com/science/article/pii/S1053811914008027"&gt;latest research paper&lt;/a&gt;[&lt;sup id="fnref:1"&gt;&lt;a class="footnote-ref" href="#fn:1" rel="footnote"&gt;1&lt;/a&gt;&lt;/sup&gt;]  deals with the estimation of the hemodynamic response function (HRF) from fMRI data. &lt;/p&gt;
&lt;div style="text-align: center"&gt;
&lt;img width="600px" src="/images/2014/graphical_abstract.jpg" /&gt;
&lt;/div&gt;

&lt;p&gt;This is an important topic since the knowledge of a hemodynamic response function is what makes it possible to extract the brain activation maps that are used in most of the impressive …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 05 Dec 2014 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2014-12-05:/blog/2014/data-driven-hemodynamic-response-function-estimation/</guid><category>misc</category><category>fMRI</category><category>GLM</category><category>python</category></item><item><title>Plot memory usage as a function of time</title><link>http://fa.bianp.net/blog/2014/plot-memory-usage-as-a-function-of-time/</link><description>
&lt;p&gt;One of the lesser-known features of the &lt;a href="https://pypi.python.org/pypi/memory_profiler"&gt;memory_profiler package&lt;/a&gt; is its ability to plot memory consumption as a function of time. This was implemented by my friend Philippe Gervais, previously a colleague at INRIA and now at Google.&lt;/p&gt;
&lt;p&gt;With …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 07 Nov 2014 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2014-11-07:/blog/2014/plot-memory-usage-as-a-function-of-time/</guid><category>misc</category><category>memory_profiler</category><category>mprof</category><category>profile</category></item><item><title>Surrogate Loss Functions in Machine Learning</title><link>http://fa.bianp.net/blog/2014/surrogate-loss-functions-in-machine-learning/</link><description>&lt;!-- &lt;div style="float: left; margin: 20px; width; 200px" &gt;
&lt;img src="http://upload.wikimedia.org/wikipedia/commons/4/46/R._A._Fischer.jpg" /&gt;
&lt;p&gt;Sir R. A. Fisher. Source: Wikipedia &lt;/p&gt;
&lt;/div&gt;
 --&gt;

&lt;p&gt;&lt;script type="text/x-mathjax-config"&gt;
 MathJax.Hub.Config({
   extensions: ["tex2jax.js"],
   jax: ["input/TeX", "output/HTML-CSS"],
   tex2jax: {
     inlineMath: [ ['$','$'], ["\\(","\\)"] ],
     displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
     processEscapes: true
   },
   TeX: {
   equationNumbers: { autoNumber: "AMS" },
   extensions: ["AMSmath.js", "AMSsymbols.js"]
   },
   "HTML-CSS": { fonts: ["TeX"] }
 });
 &lt;/script&gt;
 &lt;script type="text/javascript" async
   src="/node_modules/mathjax2/MathJax.js"&gt;
 &lt;/script&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="bold"&gt;TL; DR&lt;/span&gt; These are some notes on calibration of surrogate loss functions in the context of machine learning. But mostly it is …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 20 Jun 2014 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2014-06-20:/blog/2014/surrogate-loss-functions-in-machine-learning/</guid><category>misc</category><category>machine learning</category><category>consistency</category><category>calibration</category></item><item><title>Different ways to get memory consumption or lessons learned from ``memory_profiler``</title><link>http://fa.bianp.net/blog/2013/different-ways-to-get-memory-consumption-or-lessons-learned-from-memory_profiler/</link><description>&lt;p&gt;As part of the development of
&lt;a href="https://pypi.python.org/pypi/memory_profiler"&gt;memory_profiler&lt;/a&gt; I've tried
several ways to get the memory usage of a program from within Python. In this post
I'll describe the different alternatives I've tested.&lt;/p&gt;
&lt;h3&gt;The psutil library&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://code.google.com/p/psutil/"&gt;psutil&lt;/a&gt; is a python library that provides
an interface for retrieving information on running processes. It …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 25 Jul 2013 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-07-25:/blog/2013/different-ways-to-get-memory-consumption-or-lessons-learned-from-memory_profiler/</guid><category>misc</category><category>Python</category><category>memory</category><category>memory_profiler</category></item><item><title>Numerical optimizers for Logistic Regression</title><link>http://fa.bianp.net/blog/2013/numerical-optimizers-for-logistic-regression/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;In this post I compare several implementations of
Logistic Regression. The task was to implement a Logistic Regression model
using standard optimization …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 20 May 2013 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-05-20:/blog/2013/numerical-optimizers-for-logistic-regression/</guid><category>misc</category><category>machine learning</category><category>logistic regression</category><category>Python</category><category>SciPy</category></item><item><title>Logistic Ordinal Regression</title><link>http://fa.bianp.net/blog/2013/logistic-ordinal-regression/</link><description>&lt;p&gt;&lt;strong&gt;TL;DR: I've implemented a logistic ordinal regression or
  proportional odds model. &lt;a href="http://github.com/fabianp/minirank/blob/master/minirank/logistic.py"&gt;Here is the Python code&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;script type="text/x-mathjax-config"&gt;
  MathJax.Hub.Config({
    extensions: ["tex2jax.js"],
    jax: ["input/TeX", "output/HTML-CSS"],
    tex2jax: {
      inlineMath: [ ['$','$'] ],
      displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
      processEscapes: true
    },
    TeX: {
    equationNumbers: { autoNumber: "AMS" },
    extensions: ["AMSmath.js", "AMSsymbols.js"]
    },
    "HTML-CSS": { fonts: ["TeX"] }
  });
  &lt;/script&gt;
  &lt;script type="text/javascript" async
    src="/node_modules/mathjax2/MathJax.js"&gt;
  &lt;/script&gt;&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;logistic ordinal regression&lt;/em&gt; model …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 02 May 2013 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-05-02:/blog/2013/logistic-ordinal-regression/</guid><category>misc</category><category>machine learning</category><category>ordinal regression</category><category>Python</category><category>ranking</category></item><item><title>Isotonic Regression</title><link>http://fa.bianp.net/blog/2013/isotonic-regression/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;My latest contribution for &lt;a href="http://scikit-learn.org"&gt;scikit-learn&lt;/a&gt; is
 an implementation of the isotonic regression model that I coded with
 &lt;a href="https://twitter.com/nvaroqua"&gt;Nelle Varoquaux&lt;/a&gt; and
 &lt;a href="http://alexandre.gramfort.net/"&gt;Alexandre Gramfort …&lt;/a&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 16 Apr 2013 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-04-16:/blog/2013/isotonic-regression/</guid><category>misc</category><category>isotonic regression</category><category>machine learning</category><category>Python</category><category>scikit-learn</category></item><item><title>Householder matrices</title><link>http://fa.bianp.net/blog/2013/householder-matrices/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;Householder matrices are square matrices of the form&lt;/p&gt;
&lt;p&gt;$$ P = I - \beta v v^T$$&lt;/p&gt;
&lt;p&gt;where $\beta$ is a scalar and $v$ is …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 30 Mar 2013 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-03-30:/blog/2013/householder-matrices/</guid><category>misc</category><category>linear algebra</category><category>householder</category><category>QR</category></item><item><title>Loss Functions for Ordinal regression</title><link>http://fa.bianp.net/blog/2013/loss-functions-for-ordinal-regression/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;&lt;strong&gt;Note: this post contains a fair amount of LaTeX; if the math doesn't
render correctly, visit its &lt;a href="http://fa.bianp.net/blog/2013/loss-functions-for-ordinal-regression/"&gt;original location&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 27 Feb 2013 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-02-27:/blog/2013/loss-functions-for-ordinal-regression/</guid><category>misc</category><category>machine learning</category><category>ordinal regression</category><category>loss function</category></item><item><title>Memory plots with memory_profiler</title><link>http://fa.bianp.net/blog/2013/memory-plots-with-memory_profiler/</link><description>&lt;p&gt;Besides performing a line-by-line analysis of memory consumption,
&lt;a href="http://pypi.python.org/pypi/memory_profiler"&gt;&lt;code&gt;memory_profiler&lt;/code&gt;&lt;/a&gt;
exposes functions to retrieve the memory consumption of a process in
real time, making it possible, for example, to visualize the memory
consumption of a given function over time.&lt;/p&gt;
&lt;p&gt;The function to be used is &lt;code&gt;memory_usage&lt;/code&gt;. The first argument
specifies what …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 04 Jan 2013 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2013-01-04:/blog/2013/memory-plots-with-memory_profiler/</guid><category>misc</category><category>Python</category><category>memory</category><category>memory_profiler</category></item><item><title>Singular Value Decomposition in SciPy</title><link>http://fa.bianp.net/blog/2012/singular-value-decomposition-in-scipy/</link><description>&lt;p&gt;SciPy contains two methods to compute the singular value decomposition (SVD) of a matrix: &lt;code&gt;scipy.linalg.svd&lt;/code&gt; and &lt;code&gt;scipy.sparse.linalg.svds&lt;/code&gt;. In this post I'll compare both methods for the task of computing the full SVD of a large dense matrix.&lt;/p&gt;
&lt;p&gt;The first method, &lt;code&gt;scipy.linalg.svd&lt;/code&gt;, is perhaps …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 08 Dec 2012 00:00:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2012-12-08:/blog/2012/singular-value-decomposition-in-scipy/</guid><category>misc</category><category>python</category><category>scipy</category><category>svd</category></item><item><title>Learning to rank with scikit-learn: the pairwise transform</title><link>http://fa.bianp.net/blog/2012/learning-to-rank-with-scikit-learn-the-pairwise-transform/</link><description>&lt;script type="text/x-mathjax-config"&gt;
MathJax.Hub.Config({
  extensions: ["tex2jax.js"],
  jax: ["input/TeX", "output/HTML-CSS"],
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
    processEscapes: true
  },
  TeX: {
  equationNumbers: { autoNumber: "AMS" },
  extensions: ["AMSmath.js", "AMSsymbols.js"]
  },
  "HTML-CSS": { fonts: ["TeX"] }
});
&lt;/script&gt;

&lt;script type="text/javascript" async
  src="/node_modules/mathjax2/MathJax.js"&gt;
&lt;/script&gt;

&lt;p&gt;This tutorial introduces the concept of pairwise preference used in most &lt;a href="http://en.wikipedia.org/wiki/Learning_to_rank"&gt;ranking problems&lt;/a&gt;. I'll use scikit-learn for learning and matplotlib for …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 23 Oct 2012 00:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2012-10-23:/blog/2012/learning-to-rank-with-scikit-learn-the-pairwise-transform/</guid><category>misc</category><category>python</category><category>scikit-learn</category><category>ranking</category></item><item><title>line-by-line memory usage of a Python program</title><link>http://fa.bianp.net/blog/2012/line-by-line-report-of-memory-usage/</link><description>&lt;p&gt;My newest project is a Python library for monitoring memory consumption
of an arbitrary process, and one of its most useful features is the
line-by-line analysis of memory usage for Python code. I wrote a basic
prototype six months ago after being surprised by the lack of related
tools. I wanted …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 24 Apr 2012 07:04:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2012-04-24:/blog/2012/line-by-line-report-of-memory-usage/</guid><category>misc</category><category>python</category><category>memory_profiler</category></item><item><title>Low rank approximation</title><link>http://fa.bianp.net/blog/2011/low-rank-approximation/</link><description>&lt;p&gt;A little experiment to see what low rank approximation looks like. These
are the best rank-k approximations (in the Frobenius norm) to a
natural image for increasing values of k, starting from an original image of rank
512.&lt;/p&gt;
&lt;img alt="" src="/blog/static/uploads/2011/11/animation1.gif" /&gt;
&lt;p&gt;Python code can be found &lt;a class="reference external" href="https://gist.github.com/1342033"&gt;here&lt;/a&gt;. GIF animation made
using ImageMagick's convert …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 06 Nov 2011 12:05:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-11-06:/blog/2011/low-rank-approximation/</guid><category>misc</category><category>machine learning</category><category>python</category></item><item><title>qr_multiply function in scipy.linalg</title><link>http://fa.bianp.net/blog/2011/qr_multiply-function-in-scipylinalg/</link><description>&lt;p&gt;In scipy's development version there's a new function closely related to
the &lt;a class="reference external" href="http://en.wikipedia.org/wiki/QR_decomposition"&gt;QR-decomposition&lt;/a&gt; of a matrix and to the least-squares solution of
a linear system. This function computes the
QR decomposition of a matrix and then multiplies the resulting orthogonal
factor by another arbitrary matrix. In pseudocode …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 14 Oct 2011 16:44:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-10-14:/blog/2011/qr_multiply-function-in-scipylinalg/</guid><category>misc</category><category>python</category><category>scipy</category></item><item><title>scikit-learn 0.9</title><link>http://fa.bianp.net/blog/2011/scikit-learn-09/</link><description>&lt;p&gt;Last week we released a new version of scikit-learn. The &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/stable/whats_new.html"&gt;Changelog is
particularly impressive&lt;/a&gt;, yet personally this release is important for
other reasons. This will probably be my last release as a paid engineer.
I'm starting a PhD next month, and although I plan to continue
contributing to the project …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 02 Oct 2011 11:19:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-10-02:/blog/2011/scikit-learn-09/</guid><category>General, scikit-learn</category></item><item><title>Reworked example gallery for scikit-learn</title><link>http://fa.bianp.net/blog/2011/reworked-example-gallery-for-scikit-learn/</link><description>&lt;p&gt;I've been working lately in improving the scikit-learn example gallery
to also show a small thumbnail of the plotted result. Here is what the
gallery looks like now:&lt;/p&gt;
&lt;img alt="" src="http://fa.bianp.net/blog/static/uploads/2011/09/screenshot.png" /&gt;
&lt;p&gt;And the real thing should be already displayed in the &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/dev/auto_examples/index.html"&gt;development-documentation&lt;/a&gt;. The next thing is to add a static image to those …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 04 Sep 2011 20:09:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-09-04:/blog/2011/reworked-example-gallery-for-scikit-learn/</guid><category>scikit-learn</category></item><item><title>scikit-learn’s EuroScipy 2011 coding sprint -- day two</title><link>http://fa.bianp.net/blog/2011/scikit-learns-euroscipy-2011-coding-sprint-day-two/</link><description>&lt;p&gt;&lt;img alt="image0" src="http://fseoane.net/blog/static/uploads/2011/08/all-300x225.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Today's coding sprint was a bit more crowded, with some
notable scipy hackers such as Ralf Gommers, &lt;a class="reference external" href="http://mentat.za.net/"&gt;Stefan van der Walt&lt;/a&gt;,
&lt;a class="reference external" href="http://cournape.wordpress.com/"&gt;David Cournapeau&lt;/a&gt; and &lt;a class="reference external" href="http://blog.fperez.org/"&gt;Fernando Perez&lt;/a&gt; from IPython joining in. Among
the things that got done: we merged &lt;a class="reference external" href="http://www.astro.washington.edu/users/vanderplas/"&gt;Jake&lt;/a&gt;'s new BallTree code. This is a pure
Cython implementation of a nearest-neighbor …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 25 Aug 2011 00:33:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-08-25:/blog/2011/scikit-learns-euroscipy-2011-coding-sprint-day-two/</guid><category>General, Python, scikit-learn</category></item><item><title>scikit-learn EuroScipy 2011 coding sprint -- day one</title><link>http://fa.bianp.net/blog/2011/scikit-learn-euroscipy-2011-coding-sprint-day-one/</link><description>&lt;p&gt;As a warm-up for the upcoming &lt;a class="reference external" href="http://www.euroscipy.org/conference/euroscipy2011"&gt;EuroScipy-conference&lt;/a&gt;, some of the
&lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikit-learn&lt;/a&gt; developers decided to gather and work together for a
couple of days. Today was the first day and there was only a handful of
us, as the real kickoff is expected tomorrow. Some interesting coding
happened, although most of …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 23 Aug 2011 21:38:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-08-23:/blog/2011/scikit-learn-euroscipy-2011-coding-sprint-day-one/</guid><category>misc</category><category>scikit-learn</category><category>python</category></item><item><title>Ridge regression path</title><link>http://fa.bianp.net/blog/2011/ridge-regression-path/</link><description>&lt;p&gt;Ridge coefficients for multiple values of the regularization parameter
can be elegantly computed by updating the &lt;em&gt;thin&lt;/em&gt; SVD of
the design matrix:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;scipy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;linalg&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;ridge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alphas&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="sd"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="sd"&gt;    Return coefficients for regularized least squares&lt;/span&gt;

&lt;span class="sd"&gt;         min ||A x - b||^2 + alpha ||x||^2 …&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 12 Jul 2011 09:21:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-07-12:/blog/2011/ridge-regression-path/</guid><category>misc</category><category>scikit-learn</category><category>scipy</category><category>linear algebra</category></item><item><title>LLE comes in different flavours</title><link>http://fa.bianp.net/blog/2011/lle-comes-in-different-flavours/</link><description>&lt;p&gt;I haven't worked in the manifold module since &lt;a class="reference external" href="http://fa.bianp.net/blog/2011/manifold-learning-in-scikit-learn/"&gt;last time&lt;/a&gt;, yet thanks
to &lt;a class="reference external" href="http://www.astro.washington.edu/users/vanderplas/"&gt;Jake VanderPlas&lt;/a&gt; there are some cool features I can talk about.
First off, the ARPACK backend is finally working and gives a
speedup over the &lt;a class="reference external" href="http://fa.bianp.net/blog/2011/locally-linear-embedding-and-sparse-eigensolvers/"&gt;lobpcg + PyAMG approach&lt;/a&gt;. The key is to use ARPACK's
shift-invert mode …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 30 Jun 2011 16:22:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-06-30:/blog/2011/lle-comes-in-different-flavours/</guid><category>General, manifold learning, scikit-learn</category></item><item><title>Manifold learning in scikit-learn</title><link>http://fa.bianp.net/blog/2011/manifold-learning-in-scikit-learn/</link><description>&lt;p&gt;The manifold module in &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikit-learn&lt;/a&gt; is slowly progressing: the
&lt;a class="reference external" href="http://fa.bianp.net/blog/2011/locally-linear-embedding-and-sparse-eigensolvers/"&gt;locally linear embedding&lt;/a&gt; implementation was finally merged along with
&lt;a class="reference external" href="http://scikit-learn.sourceforge.net/dev/modules/manifold.html"&gt;some documentation&lt;/a&gt;. At about the same time but in a different
timezone, &lt;a class="reference external" href="http://www.astro.washington.edu/users/vanderplas/"&gt;Jake VanderPlas&lt;/a&gt; began coding &lt;a class="reference external" href="https://github.com/jakevdp/scikit-learn/compare/master...manifold"&gt;other manifold learning
methods&lt;/a&gt; and back in Paris &lt;a class="reference external" href="http://twitter.com/ogrisel"&gt;Olivier Grisel&lt;/a&gt; made &lt;a class="reference external" href="http://fa.bianp.net/blog/2011/handwritten-digits-and-locally-linear-embedding/"&gt;my digits example&lt;/a&gt;
a &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/dev/auto_examples/manifold/plot_lle_digits.html"&gt;lot …&lt;/a&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 07 Jun 2011 09:19:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-06-07:/blog/2011/manifold-learning-in-scikit-learn/</guid><category>scikit-learn</category></item><item><title>Handwritten digits and Locally Linear Embedding</title><link>http://fa.bianp.net/blog/2011/handwritten-digits-and-locally-linear-embedding/</link><description>&lt;p&gt;I decided to test my &lt;a class="reference external" href="http://fa.bianp.net/blog/2011/locally-linear-embedding-and-sparse-eigensolvers/"&gt;new Locally Linear Embedding (LLE)&lt;/a&gt;
implementation against a real dataset. At first I didn't think this
would turn out very well, since LLE seems to be somewhat fragile,
yielding largely different results for small differences in parameters
such as number of neighbors or tolerance, but …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 04 May 2011 10:46:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-05-04:/blog/2011/handwritten-digits-and-locally-linear-embedding/</guid><category>General, Python, scikit-learn</category></item><item><title>Low-level routines for Support Vector Machines</title><link>http://fa.bianp.net/blog/2011/low-level-routines-for-support-vector-machines/</link><description>&lt;p&gt;I've been working lately in improving the low-level API of the libsvm
bindings in scikit-learn. The goal is to provide an API that encourages
efficient use of these libraries by expert users. These are methods
that have lower overhead than the &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/modules/svm.html"&gt;object-oriented interface&lt;/a&gt; as they
are closer to the …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 27 Apr 2011 15:27:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-27:/blog/2011/low-level-routines-for-support-vector-machines/</guid><category>General, Python, scikit-learn</category></item><item><title>new get_blas_funcs in scipy.linalg</title><link>http://fa.bianp.net/blog/2011/new-get_blas_funcs-in-scipylinalg/</link><description>&lt;p&gt;Today got merged some changes I made to function
scipy.linalg.get_blas_funcs(). The main enhancement is that
get_blas_funcs() now also accepts a single string as input parameter
and a dtype, so that fetching the BLAS function for a specific type
becomes more natural. For example, fetching the gemm routine for …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 23 Apr 2011 18:24:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-23:/blog/2011/new-get_blas_funcs-in-scipylinalg/</guid><category>General, Python, scipy</category></item><item><title>Locally linear embedding and sparse eigensolvers</title><link>http://fa.bianp.net/blog/2011/locally-linear-embedding-and-sparse-eigensolvers/</link><description>&lt;p&gt;I've been working for some time on implementing a &lt;a class="reference external" href="http://www.cs.nyu.edu/~roweis/lle/algorithm.html"&gt;locally linear
embedding&lt;/a&gt; algorithm for the upcoming manifold module in scikit-learn.
While several implementations of this algorithm exist in Python, as far
as I know none of them is able to use a sparse eigensolver in the last
step of the …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 21 Apr 2011 14:28:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-21:/blog/2011/locally-linear-embedding-and-sparse-eigensolvers/</guid><category>General, Python, scikit-learn</category></item><item><title>scikits.learn is now part of pythonxy</title><link>http://fa.bianp.net/blog/2011/scikitslearn-is-now-part-of-pythonxy/</link><description>&lt;p&gt;The guys behind &lt;a class="reference external" href="http://www.pythonxy.com/"&gt;pythonxy&lt;/a&gt; have been kind enough to add the latest
scikit-learn as an &lt;a class="reference external" href="http://code.google.com/p/pythonxy/wiki/AdditionalPlugins"&gt;additional plugin&lt;/a&gt; for their distribution. Having
scikit-learn in both &lt;a class="reference external" href="http://www.pythonxy.com/"&gt;pythonxy&lt;/a&gt; and &lt;a class="reference external" href="http://www.enthought.com/products/epd.php"&gt;EPD&lt;/a&gt; will hopefully make it
easier to use for Windows users. &lt;img alt="pythonxy-logo" src="http://fa.bianp.net/blog/static/uploads/2011/04/pythonxy-logo.png" /&gt; For now I will continue
to make Windows precompiled binaries, but pythonxy …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 20 Apr 2011 13:48:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-20:/blog/2011/scikitslearn-is-now-part-of-pythonxy/</guid><category>General, Python, scikit-learn</category></item><item><title>Least squares with equality constraint</title><link>http://fa.bianp.net/blog/2011/least-squares-with-equality-constrain/</link><description>&lt;p&gt;The following algorithm computes the least squares solution || Ax -
b|| subject to the equality constraint Bx = d. It's a classic algorithm
that can be implemented using only a QR decomposition and a least
squares solver. This implementation uses numpy and scipy. It makes use
of the new linalg.solve_triangular function …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 14 Apr 2011 10:02:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-14:/blog/2011/least-squares-with-equality-constrain/</guid><category>Python, Tecnologí­a</category></item><item><title>A profiler for Python extensions</title><link>http://fa.bianp.net/blog/2011/a-profiler-for-python-extensions/</link><description>&lt;p&gt;Profiling Python extensions has not been a pleasant experience for me,
so I made my own package to do the job. Existing alternatives were
either hard to use, forcing you to recompile with custom flags (like
gprofile), or desperately slow (like valgrind/callgrind). The package I'll
talk about is called …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 06 Apr 2011 14:02:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-06:/blog/2011/a-profiler-for-python-extensions/</guid><category>General, Python</category></item><item><title>scikit-learn coding sprint in Paris</title><link>http://fa.bianp.net/blog/2011/scikit-learn-coding-sprint-in-paris/</link><description>&lt;p&gt;Yesterday was the scikit-learn coding sprint in Paris. It was great to
meet old developers (Vincent Michel) and new ones: some of whom I
was already familiar with from the mailing list, while others came just
to say hi and get familiar with the code. It was really great …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 02 Apr 2011 12:07:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-04-02:/blog/2011/scikit-learn-coding-sprint-in-paris/</guid><category>General, scikit-learn</category></item><item><title>py3k in scikit-learn</title><link>http://fa.bianp.net/blog/2011/py3k-in-scikit-learn/</link><description>&lt;p&gt;One thing I'd really like to see done in &lt;a class="reference external" href="http://gael-varoquaux.info/blog/?p=149"&gt;this Friday's scikit-learn
sprint&lt;/a&gt; is to have full support for Python 3. There's &lt;a class="reference external" href="http://github.com/fabianp/scikit-learn/compare/master...py3k"&gt;a branch where
the hard work has been done&lt;/a&gt; (porting C extensions, automatic 2to3
conversion, etc.), although joblib still has some bugs and no one has
attempted to …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 28 Mar 2011 15:23:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-03-28:/blog/2011/py3k-in-scikit-learn/</guid><category>General</category></item><item><title>Computing the vector norm</title><link>http://fa.bianp.net/blog/2011/computing-the-vector-norm/</link><description>&lt;p&gt;&lt;strong&gt;Update: a fast and stable norm was added to scipy.linalg in August
2011 and will be available in scipy 0.10&lt;/strong&gt; Last week I discussed with
&lt;a class="reference external" href="http://gael-varoquaux.info/blog/"&gt;Gael&lt;/a&gt; how we should compute the euclidean norm of a vector a using
SciPy. Two approaches suggest themselves: either calling
scipy.linalg.norm …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 15 Feb 2011 10:31:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-02-15:/blog/2011/computing-the-vector-norm/</guid><category>misc</category><category>linear algebra</category><category>norm</category><category>scipy</category></item><item><title>Smells like hacker spirit</title><link>http://fa.bianp.net/blog/2011/smells-like-hacker-spirit/</link><description>&lt;p&gt;I was last weekend in &lt;a class="reference external" href="http://fosdem.org/2011/"&gt;FOSDEM&lt;/a&gt; presenting &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt; (&lt;a class="reference external" href="http://fa.bianp.net/talks/fosdem-skl/"&gt;here are
the slides&lt;/a&gt; I used at the Data Analytics Devroom). Kudos to &lt;a class="reference external" href="http://twitter.com/#!/ogrisel"&gt;Olivier
Grisel&lt;/a&gt; and all the people who organized such a fun and authentic
meeting!&lt;/p&gt;
&lt;p&gt;&lt;img alt="image0" src="http://farm6.static.flickr.com/5136/5417861859_8480c65eed_m.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="image1" src="http://farm6.static.flickr.com/5294/5425114531_6eec316967_m.jpg" /&gt;&lt;/p&gt;
</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 11 Feb 2011 09:50:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2011-02-11:/blog/2011/smells-like-hacker-spirit/</guid><category>misc</category><category>python</category><category>sklearn</category></item><item><title>New examples in scikits.learn 0.6</title><link>http://fa.bianp.net/blog/2010/new-examples-in-scikitslearn-06/</link><description>&lt;p&gt;Latest release of &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt; comes with an &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/0.6/auto_examples/index.html"&gt;awesome collection of
examples&lt;/a&gt;. These are some of my favorites:&lt;/p&gt;
&lt;div class="section" id="faces-recognition"&gt;
&lt;h2&gt;Faces recognition&lt;/h2&gt;
&lt;p&gt;&lt;a class="reference external" href="http://scikit-learn.sourceforge.net/0.6/auto_examples/applications/plot_face_recognition.html"&gt;This example&lt;/a&gt; by &lt;a class="reference external" href="http://twitter.com/ogrisel/"&gt;Olivier Grisel&lt;/a&gt;, downloads a 58MB faces dataset
from &lt;a class="reference external" href="http://vis-www.cs.umass.edu/lfw/"&gt;Labeled Faces in the Wild&lt;/a&gt;, and is able to perform PCA for
feature extraction and SVC for classification, yielding …&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 31 Dec 2010 13:55:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-12-31:/blog/2010/new-examples-in-scikitslearn-06/</guid><category>General, scikit-learn, Tecnologí­a</category></item><item><title>Weighted samples for SVMs</title><link>http://fa.bianp.net/blog/2010/weighted-samples-for-svms/</link><description>&lt;p&gt;Based on the work of &lt;a class="reference external" href="http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/#weights_for_data_instances"&gt;libsvm-dense&lt;/a&gt; by Ming-Wei Chang, Hsuan-Tien Lin,
Ming-Hen Tsai, Chia-Hua Ho and Hsiang-Fu Yu, I patched the libsvm
distribution shipped with scikits.learn to allow setting weights for
individual instances. The motivation behind this is to be able to force a
classifier to focus its attention on …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 29 Nov 2010 13:20:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-11-29:/blog/2010/weighted-samples-for-svms/</guid><category>sklearn, python</category></item><item><title>Coming soon ...</title><link>http://fa.bianp.net/blog/2010/coming-soon/</link><description>&lt;img alt="" src="http://farm5.static.flickr.com/4107/5203822436_41b9c350c2.jpg" /&gt;
&lt;p&gt;Highlights for this release: * New &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/modules/sgd.html"&gt;stochastic
gradient descent module&lt;/a&gt; by &lt;a class="reference external" href="http://sites.google.com/site/peterprettenhofer/"&gt;Peter Prettenhofer&lt;/a&gt; * Improved svm
module: memory efficiency, automatic class weights. * Wrap for
liblinear's Multi-class SVC (option multi_class in &lt;a class="reference external" href="http://scikit-learn.sourceforge.net/modules/generated/scikits.learn.svm.LinearSVC.html"&gt;LinearSVC&lt;/a&gt;) * New
features and performance improvements of text feature extraction. *
Improved sparse matrix support, both in main classes (GridSearch) and in
sparse …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 24 Nov 2010 10:39:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-11-24:/blog/2010/coming-soon/</guid><category>scikit-learn, Tecnologí­a</category></item><item><title>memory efficient bindigs for libsvm</title><link>http://fa.bianp.net/blog/2010/memory-efficient-bindigs-for-libsvm/</link><description>&lt;p&gt;&lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn.svm&lt;/a&gt; now uses &lt;a class="reference external" href="http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/#libsvm_for_dense_data"&gt;LibSVM-dense&lt;/a&gt; instead of &lt;a class="reference external" href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/"&gt;LibSVM&lt;/a&gt; for
some support vector machine related algorithms when input is a dense
matrix. As a result most of the copies associated with argument passing
are avoided, giving a 50% smaller memory footprint and several times less
than the python bindings that ship …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 19 Nov 2010 15:08:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-11-19:/blog/2010/memory-efficient-bindigs-for-libsvm/</guid><category>General, scikit-learn</category></item><item><title>solve triangular matrices using scipy.linalg</title><link>http://fa.bianp.net/blog/2010/solve-triangular-matrices-using-scipylinalg/</link><description>&lt;p&gt;For some time now I've been missing a function in scipy that exploits
the triangular structure of a matrix to efficiently solve the associated
system, so I decided to &lt;a class="reference external" href="http://projects.scipy.org/scipy/changeset/6844"&gt;implement it&lt;/a&gt; by binding the LAPACK method
&amp;quot;trtrs&amp;quot;, which also checks for singularities and is capable of handling
several right-hand sides. Contrary …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 30 Oct 2010 01:13:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-10-30:/blog/2010/solve-triangular-matrices-using-scipylinalg/</guid><category>scipy, Tecnologí­a</category></item><item><title>LARS algorithm</title><link>http://fa.bianp.net/blog/2010/lars-algorithm/</link><description>&lt;p&gt;I've been working lately with &lt;a class="reference external" href="http://www-sop.inria.fr/members/Alexandre.Gramfort/"&gt;Alexandre Gramfort&lt;/a&gt; coding the &lt;a class="reference external" href="http://scikit-learn.sf.net/modules/glm.html#lars-algorithm-and-its-variants"&gt;LARS
algorithm&lt;/a&gt; in &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt;. This algorithm computes the solution to
several general linear models used in machine learning: LAR, Lasso,
Elasticnet and Forward Stagewise. Unlike the implementation by
coordinate descent, the LARS algorithm gives the full coefficient path
along the …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 30 Sep 2010 16:01:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-09-30:/blog/2010/lars-algorithm/</guid><category>misc</category><category>scikit-learn</category><category>sparse</category></item><item><title>Second scikits.learn coding sprint</title><link>http://fa.bianp.net/blog/2010/second-scikitslearn-coding-sprint/</link><description>&lt;p&gt;Last week the second &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt; sprint took place in Paris. It was
two days of insane activity (115 commits, 6 branches, 33 coffees) in
which we did a lot of work, both implementing new algorithms and fixing
or improving old ones. This includes: * sparse version of Lasso by
coordinate …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 12 Sep 2010 22:31:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-09-12:/blog/2010/second-scikitslearn-coding-sprint/</guid><category>scikit-learn</category></item><item><title>Support for sparse matrices in scikits.learn</title><link>http://fa.bianp.net/blog/2010/support-for-sparse-matrices-in-scikitslearn/</link><description>&lt;p&gt;I recently added support for sparse matrices (as defined in
scipy.sparse) in some classifiers of &lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt;. In those classes,
the fit method will perform the algorithm without converting to a dense
representation and will also store parameters in an efficient format.
Right now, the only classes that implement …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 23 Aug 2010 17:47:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-08-23:/blog/2010/support-for-sparse-matrices-in-scikitslearn/</guid><category>General</category></item><item><title>Flags to debug python C extensions.</title><link>http://fa.bianp.net/blog/2010/flags-to-debug-python-c-extensions/</link><description>&lt;p&gt;I often find myself debugging python C extensions from gdb, but usually
some variables are hidden because of aggressive optimizations that
distutils sets by default. What I did not know is that you can prevent
those optimizations by passing the flags -O0 -fno-inline to gcc in the keyword
extra_compile_args (note: this will only …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 18 Aug 2010 13:40:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-08-18:/blog/2010/flags-to-debug-python-c-extensions/</guid><category>General</category></item><item><title>July in Paris</title><link>http://fa.bianp.net/blog/2010/july-in-paris/</link><description>&lt;p&gt;One of the best things of spending summer in Paris: its parcs (here,
with friends &amp;#64; Parc Montsouris).&lt;/p&gt;
&lt;p&gt;&lt;img alt="image0" src="http://farm5.static.flickr.com/4103/4842146900_953f961d64.jpg" /&gt;&lt;/p&gt;
</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 30 Jul 2010 00:11:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-07-30:/blog/2010/july-in-paris/</guid><category>General</category></item><item><title>Support Vector machines with custom kernels using scikits.learn</title><link>http://fa.bianp.net/blog/2010/support-vector-machines-with-custom-kernels-using-scikitslearn/</link><description>&lt;p&gt;It is now possible (using the development version as of may 2010) to use
Support Vector Machines with custom kernels in scikits.learn. How to use
it couldn't be simpler: you just pass a callable (the kernel) to the
class constructor. For example, a linear kernel would be implemented …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 27 May 2010 10:42:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-05-27:/blog/2010/support-vector-machines-with-custom-kernels-using-scikitslearn/</guid><category>General, scikit-learn, Tecnologí­a</category></item><item><title>Howto link against system-wide BLAS library using numpy.distutils</title><link>http://fa.bianp.net/blog/2010/howto-link-against-system-wide-blas-library-using-numpydistutils/</link><description>&lt;p&gt;If your numpy installation uses system-wide BLAS libraries (this will
most likely be the case unless you installed it through prebuilt windows
binaries), you can retrieve this information at compile time to link
python modules to BLAS. The function get_info in
numpy.distutils.system_info will return a dictionary that contains …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 22 Apr 2010 14:28:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-04-22:/blog/2010/howto-link-against-system-wide-blas-library-using-numpydistutils/</guid><category>General</category></item><item><title>scikits.learn 0.2 release</title><link>http://fa.bianp.net/blog/2010/scikitslearn-02-release/</link><description>&lt;p&gt;Today I released a new version of the &lt;a class="reference external" href="http://scikit-learn.sourceforge.net"&gt;scikits.learn&lt;/a&gt; library for
machine learning. This new release includes the new libsvm bindings,
Jake VanderPlas' BallTree algorithm for *fast* nearest neighbor
queries in high dimensions, etc. &lt;a class="reference external" href="http://sourceforge.net/mailarchive/message.php?msg_name=4BA72BE3.1010208%40inria.fr"&gt;Here&lt;/a&gt; is the official announcement. As
usual, it can be downloaded from &lt;a class="reference external" href="http://sourceforge.net/projects/scikit-learn/files"&gt;sourceforge&lt;/a&gt; or from …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 22 Mar 2010 11:37:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-03-22:/blog/2010/scikitslearn-02-release/</guid><category>General</category></item><item><title>Plot the maximum margin hyperplane with scikits.learn</title><link>http://fa.bianp.net/blog/2010/plot-the-maximum-margin-hyperplane-with-scikitslearn/</link><description>&lt;p&gt;Suppose some given data points each belong to one of two classes, and
the goal is to decide which class a new data point will be in. In the
case of support vector machines, a data point is viewed as a
p-dimensional vector (2-dimensional in this example), and we want …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 17 Mar 2010 12:24:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-03-17:/blog/2010/plot-the-maximum-margin-hyperplane-with-scikitslearn/</guid><category>General, scikit-learn, Tecnologí­a</category></item><item><title>Fast bindings for LibSVM in scikits.learn</title><link>http://fa.bianp.net/blog/2010/fast-bindings-for-libsvm-in-scikitslearn/</link><description>&lt;p&gt;&lt;a class="reference external" href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/"&gt;LibSVM&lt;/a&gt; is a C++ library that implements several Support Vector
Machine algorithms that are commonly used in machine learning. It is a
fast library that has no dependencies, and most machine learning
frameworks bind it in some way or another. LibSVM comes with a Python
interface written in swig, but …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 09 Mar 2010 15:49:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-03-09:/blog/2010/fast-bindings-for-libsvm-in-scikitslearn/</guid><category>General, scikit-learn, Tecnologí­a</category></item><item><title>scikits.learn coding sprint in Paris</title><link>http://fa.bianp.net/blog/2010/scikitslearn-coding-sprint-in-paris/</link><description>&lt;p&gt;Yesterday we had an extremely productive coding sprint for the
&lt;a class="reference external" href="http://scikit-learn.sf.net"&gt;scikits.learn&lt;/a&gt;. The idea was to put people with common interests in a
room and make them work in a single codebase. Alexandre Gramfort and
Olivier Grisel worked on &lt;a class="reference external" href="http://scikit-learn.svn.sourceforge.net/viewvc/scikit-learn/trunk/scikits/learn/glm/"&gt;GLMNet&lt;/a&gt;, Bertrand Thirion and Gaël Varoquaux
worked on &lt;a class="reference external" href="http://scikit-learn.svn.sourceforge.net/viewvc/scikit-learn/trunk/scikits/learn/feature_selection/"&gt;univariate feature selection …&lt;/a&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 04 Mar 2010 11:25:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-03-04:/blog/2010/scikitslearn-coding-sprint-in-paris/</guid><category>scikit-learn, Tecnologí­a</category></item><item><title>Scikit-learn 0.1</title><link>http://fa.bianp.net/blog/2010/scikit-learn-01/</link><description>&lt;p&gt;Today I released the first public version of &lt;a class="reference external" href="http://scikit-learn.sourceforge.net"&gt;Scikit-Learn&lt;/a&gt; (&lt;a class="reference external" href="https://sourceforge.net/mailarchive/message.php?msg_name=4B66D190.5090100%40inria.fr"&gt;release
notes&lt;/a&gt;). It's a python module implementing some machine learning
algorithms, and it's shaping up quite well.&lt;/p&gt;
&lt;p&gt;For this release I did not want to make any incompatible changes, so most of them are just bug fixes and
updates. For the next …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 01 Feb 2010 15:32:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-02-01:/blog/2010/scikit-learn-01/</guid><category>Software, scikit-learn</category></item><item><title>scikit-learn project on sourceforge</title><link>http://fa.bianp.net/blog/2010/scikit-learn-project-on-sourceforge/</link><description>&lt;p&gt;This week we created a &lt;a class="reference external" href="https://sourceforge.net/projects/scikit-learn/"&gt;sourceforge project&lt;/a&gt; to host our development of
scikit-learn. Although the project already had a directory in scipy's
repo, we needed more flexibility in user management and in
mailing list creation, so we opted for SourceForge. To be honest, after
using git and Google …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 07 Jan 2010 15:17:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-01-07:/blog/2010/scikit-learn-project-on-sourceforge/</guid><category>General, scikit-learn, Tecnologí­a</category></item><item><title>After holidays</title><link>http://fa.bianp.net/blog/2010/after-holidays/</link><description>&lt;p&gt;New job, new code, new city, new colleagues. Feels something like this:&lt;/p&gt;
&lt;img alt="" src="http://farm5.static.flickr.com/4027/4240407852_6f461f3776_m.jpg" /&gt;
</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 05 Jan 2010 10:56:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2010-01-05:/blog/2010/after-holidays/</guid><category>General</category></item><item><title>Winter in Paris is not funny</title><link>http://fa.bianp.net/blog/2009/winter-in-paris-is-not-funny/</link><description>&lt;p&gt;This week I arrived at the place where I will be working for the next
two years: Neurospin.&lt;/p&gt;
&lt;img alt="" src="http://farm5.static.flickr.com/4042/4206517312_e35b7fa55d_m.jpg" /&gt;
&lt;p&gt;It's a research center located 20
km from Paris, and so far things are going smoothly: the place is
beautiful, work is great and food is excellent. Well OK, I do miss some …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 22 Dec 2009 19:36:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-12-22:/blog/2009/winter-in-paris-is-not-funny/</guid><category>General, Tecnologí­a</category></item><item><title>Last days in Granada</title><link>http://fa.bianp.net/blog/2009/last-days-in-granada/</link><description>&lt;p&gt;The nice thing about winter in Granada is that even on the coldest days,
the sky is always blue.&lt;/p&gt;
&lt;img alt="" src="http://farm3.static.flickr.com/2539/4180997669_aa5a45a949_m.jpg" /&gt;
</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 15 Dec 2009 23:42:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-12-15:/blog/2009/last-days-in-granada/</guid><category>Fotos, General</category></item><item><title>Learning, Machine Learning</title><link>http://fa.bianp.net/blog/2009/learning-machine-learning/</link><description>&lt;p&gt;My new job is about managing an open source package for machine learning
in Python. I've had some experience with Python now, but I am a total
newbie in the field of machine learning, so my first task will be to
find a good reference book on the subject and …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 15 Dec 2009 23:34:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-12-15:/blog/2009/learning-machine-learning/</guid><category>General, Tecnologí­a</category></item><item><title>Moving to Paris!</title><link>http://fa.bianp.net/blog/2009/moving-to-paris/</link><description>&lt;p&gt;I'm extremely glad that I am finally moving to Paris to work as part of
the INRIA crew. I'll be working with &lt;a class="reference external" href="http://gael-varoquaux.info/"&gt;Gael Varoquaux&lt;/a&gt; and his team in
an extremely cool Python related project (more to come on this in the
following weeks). Granada has been a great place for …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 12 Dec 2009 02:07:00 +0100</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-12-12:/blog/2009/moving-to-paris/</guid><category>General</category></item><item><title>Summer of Code is over</title><link>http://fa.bianp.net/blog/2009/summer-of-code-is-over/</link><description>&lt;p&gt;Google Summer of Code program is officially over. It has been four
months of intense work, exciting benchmarks and patch reviewing. It was
a huge pleasure working with you guys! As for the project, I implemented
a complete logic module and then an assumption system for sympy
(sympy.logic, sympy …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 05 Sep 2009 12:21:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-09-05:/blog/2009/summer-of-code-is-over/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Speed improvements for ask() (sympy.queries.ask)</title><link>http://fa.bianp.net/blog/2009/speed-improvements-for-ask-sympyqueriesask/</link><description>&lt;p&gt;I managed to overcome the overhead in ask() that arises when converting
between the symbol and integer representations of sentences in conjunctive
normal form. The result went beyond what I expected. The test suite for the
query module got 10x faster on my laptop. From 26 seconds, it
descended to an …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 20 Aug 2009 00:36:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-08-20:/blog/2009/speed-improvements-for-ask-sympyqueriesask/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Logic module (sympy.logic): improving speed</title><link>http://fa.bianp.net/blog/2009/logic-module-sympylogic-improving-speed/</link><description>&lt;p&gt;Today I've been doing some speed improvements for the logic module. More
precisely, I implemented an efficient internal representation for
clauses in conjunctive normal form. In practice this means a huge
performance boost for all problems that make use of the function
satisfiable() or dpll_satisfiable(). For example, test_dimacs.py has
moved …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 18 Aug 2009 23:35:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-08-18:/blog/2009/logic-module-sympylogic-improving-speed/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Refine module</title><link>http://fa.bianp.net/blog/2009/refine-module/</link><description>&lt;p&gt;&lt;a class="reference external" href="http://git.sympy.org/?p=sympy.git;a=commit;h=dd679c2751ac0900c47302fd6187ae9eea60918f"&gt;This&lt;/a&gt; commit introduced a new module in sympy: the refine module. The
purpose of this module is to simplify expressions when they are bound to
assumptions. For example, if you know that x&amp;gt;0, then you can simplify
abs(x) to x. This code was traditionally embedded into the core …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 17 Aug 2009 19:20:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-08-17:/blog/2009/refine-module/</guid><category>sympy, Tecnologí­a</category></item><item><title>Query module - finally in trunk</title><link>http://fa.bianp.net/blog/2009/query-module-finally-in-trunk/</link><description>&lt;p&gt;The query module is finally in the main SymPy repository. I made
substantial changes since the last post, most of them at the user interface
level (thanks to Vinzent and Mateusz for many insightful comments). The main
function is ask(), which replaces the old expression.is_* syntax. You
can ask many things …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 10 Aug 2009 21:51:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-08-10:/blog/2009/query-module-finally-in-trunk/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>django, change language settings dynamically</title><link>http://fa.bianp.net/blog/2009/django-change-language-settings-dynamically/</link><description>&lt;p&gt;After some failed attempts, I just found how to change the language
settings dynamically in django, and I thought it could be useful to
someone. Just use the function activate() from django.utils.translation. For
example, &lt;code&gt;from django.utils.translation import activate;
activate('es-ES')&lt;/code&gt; will change global …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 07 Aug 2009 16:12:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-08-07:/blog/2009/django-change-language-settings-dynamically/</guid><category>General, Tecnologí­a</category></item><item><title>can we merge now, pleeease ?</title><link>http://fa.bianp.net/blog/2009/can-we-merge-now-pleeease/</link><description>&lt;p&gt;Three months after I began to write sympy.queries, I feel it's about
time to include it in sympy's trunk, so today I sent for review &lt;a class="reference external" href="http://groups.google.com/group/sympy-patches/browse_thread/thread/76dcdfd0994a1c81"&gt;4
patches that implement the complete query module&lt;/a&gt;. It's been a lot of
fun, but it has also caused me some headaches ... especially last …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 21 Jul 2009 22:36:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-07-21:/blog/2009/can-we-merge-now-pleeease/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Refine module, proof of concept</title><link>http://fa.bianp.net/blog/2009/refine-module-proof-of-concept/</link><description>&lt;p&gt;The 0.6.5 release of SymPy is taking longer than expected because of &lt;a class="reference external" href="http://code.google.com/p/sympy/issues/detail?id=1521"&gt;some
bugs in the testing framework&lt;/a&gt;, so my query module is not merged into
trunk (yet). In the meantime, I am implementing a refine module (very
little code is available yet). The refine module implements a refine …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Thu, 09 Jul 2009 02:49:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-07-09:/blog/2009/refine-module-proof-of-concept/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Preparing a new release</title><link>http://fa.bianp.net/blog/2009/preparing-a-new-release/</link><description>&lt;p&gt;Last days I've been busy preparing &lt;a class="reference external" href="http://groups.google.com/group/sympy/browse_thread/thread/88474cde3bc6e350#"&gt;the first public beta of SymPy
0.6.5&lt;/a&gt;. Most of the time was spent solving a bug that made
documentation tests fail under python2.4, but now that this is solved, I
hope that by the end of the week we could have …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 30 Jun 2009 08:57:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-30:/blog/2009/preparing-a-new-release/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Efficient DPLL algorithm</title><link>http://fa.bianp.net/blog/2009/efficient-dpll-algorithm/</link><description>&lt;p&gt;Background: DPLL is the algorithm behind SymPy's implementation of
logic.inference.satisfiable. After reading the original papers by Davis &amp;amp;
Putnam [1], I managed to implement a more efficient version of the DPLL
algorithm. It is 10x faster on medium-sized problems (40
variables), and solves some wrong result bugs [2 …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sun, 28 Jun 2009 18:16:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-28:/blog/2009/efficient-dpll-algorithm/</guid><category>General</category></item><item><title>Queries and performance</title><link>http://fa.bianp.net/blog/2009/queries-and-performance/</link><description>&lt;p&gt;After some hacking on the queries module, I finally got it right without
the &lt;a class="reference external" href="http://fa.bianp.net/blog/?p=149"&gt;limitations of past versions&lt;/a&gt;. You can check it out from my repo
&lt;a class="reference external" href="http://fa.bianp.net/git/sympy.git"&gt;http://fa.bianp.net/git/sympy.git&lt;/a&gt;, branch master. It now relies even more
on logic.inference.satisfiable(), which is just an implementation of …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Tue, 23 Jun 2009 00:02:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-23:/blog/2009/queries-and-performance/</guid><category>General, sympy, Tecnología</category></item><item><title>Reading CNF files</title><link>http://fa.bianp.net/blog/2009/reading-cnf-files/</link><description>&lt;p&gt;The DIMACS CNF file format is used to define a Boolean expression,
written in conjunctive normal form, that may be used as an example of
the satisfiability problem. The new logic module (sympy.logic) can read
the contents of a CNF file and transform it into a Boolean expression
suitable …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Sat, 20 Jun 2009 16:27:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-20:/blog/2009/reading-cnf-files/</guid><category>General, sympy, Tecnología</category></item><item><title>Logic module merged</title><link>http://fa.bianp.net/blog/2009/logic-module-merged/</link><description>&lt;p&gt;Yesterday I finally merged the logic module into sympy's official master
branch; it should be released together with SymPy 0.6.5. Next thing to
do: profile the code and write some docs before the release.&lt;/p&gt;
</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 19 Jun 2009 12:23:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-19:/blog/2009/logic-module-merged/</guid><category>General, sympy, Tecnología</category></item><item><title>The boolean satisfiability problem</title><link>http://fa.bianp.net/blog/2009/the-boolean-satisfiability-problem/</link><description>&lt;p&gt;The most annoying problem in my implementation of the query system is that
it will not resolve implications when the facts involved are far apart in
the graph. For instance, if the graph of known facts is something like this:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
Integer ----&gt; Rational --&gt; Real --&gt; Complex
  ^  ^
  |  |
  |   -------
  |         |
Prime      Even
  ^
  |
  |
MersennePrime
&lt;/pre&gt;
&lt;p&gt;Then it will not know …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 15 Jun 2009 06:00:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-15:/blog/2009/the-boolean-satisfiability-problem/</guid><category>General, sympy, Tecnología</category></item><item><title>Initial implementation of the query system</title><link>http://fa.bianp.net/blog/2009/initial-implementation-of-the-query-system/</link><description>&lt;p&gt;I sent &lt;a class="reference external" href="http://groups.google.com/group/sympy-patches/browse_thread/thread/e56ceda0038b7c23"&gt;some patches&lt;/a&gt; to sympy-patches with an initial implementation
of the query system. You can check it out by pulling from my branch:
&lt;tt class="docutils literal"&gt;git pull &lt;span class="pre"&gt;http://fa.bianp.net/git/sympy.git&lt;/span&gt; master&lt;/tt&gt; into your sympy
repo. Some examples of what you can do (sample isympy session):
&lt;tt class="docutils literal"&gt;In [1 …&lt;/tt&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Fri, 12 Jun 2009 06:36:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-12:/blog/2009/initial-implementation-of-the-query-system/</guid><category>General, sympy, Tecnologí­a</category></item><item><title>Assumption system and automatic theorem proving. Should I be learning LISP ?</title><link>http://fa.bianp.net/blog/2009/assumption-system-and-automatic-theorem-proving-should-i-be-learning-lisp/</link><description>&lt;p&gt;This is the third time I attempt to write the assumption system. Other
attempts could be described as me following the rule: “For any complex
problem, there is always a solution that is simple, clear, and wrong.”
My &lt;a class="reference external" href="http://groups.google.com/group/sympy-patches/browse_thread/thread/b6fd5402e729f58/8006779044c41a17?lnk=gst&amp;amp;q=fabian+assumptions#8006779044c41a17"&gt;first attempt&lt;/a&gt; (although better than the current assumption system)
did use very …&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Wed, 03 Jun 2009 13:50:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-03:/blog/2009/assumption-system-and-automatic-theorem-proving-should-i-be-learning-lisp/</guid><category>General, sympy, Tecnología</category></item><item><title>Homenaje a Antonio Vega en La Percha</title><link>http://fa.bianp.net/blog/2009/homenaje-a-antonio-vega-en-la-percha/</link><description>&lt;p&gt;Last Thursday we were at La Percha playing some songs by
Antonio Vega. The video was put together by &lt;a class="reference external" href="http://retrovisor.net"&gt;my father&lt;/a&gt;, who mixed the live
sound with a recording we made at Migue's place.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://vimeo.com/4926476"&gt;LOS ESCLAVOS: homenaje a Antonio Vega&lt;/a&gt; from &lt;a class="reference external" href="http://vimeo.com/user938253"&gt;Felipe Pedregosa&lt;/a&gt; on
&lt;a class="reference external" href="http://vimeo.com"&gt;Vimeo&lt;/a&gt;.&lt;/p&gt;
&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Pedregosa</dc:creator><pubDate>Mon, 01 Jun 2009 15:34:00 +0200</pubDate><guid isPermaLink="false">tag:fa.bianp.net,2009-06-01:/blog/2009/homenaje-a-antonio-vega-en-la-percha/</guid><category>Canciones, General, Grupo</category></item></channel></rss>