• This notebook attempts to reproduce the learning strategy of Bruno Olshausen's SparseNet algorithm, which efficiently codes natural image patches by constraining the code to be sparse.

  • The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn; our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.

  • All these code bits are regrouped in the SHL scripts repository (where you will also find some older matlab code). You may install it using

    pip install git+https://github.com/bicv/SHL_scripts
In [1]:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
np.set_printoptions(precision=2, suppress=True)
import pandas as pd
import seaborn as sns
%load_ext autoreload
%autoreload 2

Let's start with a simple dictionary learning, as implemented in the image denoising example from sklearn, and then show the resulting dictionary:

In [2]:
from shl_scripts import SHL
database = 'database/'
DEBUG_DOWNSCALE, verbose = 1, 0
matname = 'vanilla'
shl = SHL(database=database, DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, verbose=verbose, eta_homeo=0.)
fig, ax = shl.learn_dico(matname=matname).show_dico(title=matname)
fig.show()
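To make explicit what the SHL class wraps, here is a minimal sketch of the vanilla sklearn dictionary learning machinery it builds on. This is an assumption-laden illustration, not the SHL code itself: random patches stand in for whitened natural image patches, and the parameter values are arbitrary.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in data: random "patches" instead of whitened natural image patches
rng = np.random.RandomState(0)
patches = rng.randn(500, 64)  # 500 patches of 8x8 pixels, flattened
patches -= patches.mean(axis=1, keepdims=True)  # center each patch

# Vanilla dictionary learning, as in sklearn's image denoising example
# (n_components and alpha are arbitrary here, not the SHL defaults)
dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0,
                                   batch_size=64, random_state=0)
V = dico.fit(patches).components_  # dictionary atoms, shape (16, 64)

# sklearn constrains each atom V_k to (at most) unit L2 norm
print(np.linalg.norm(V, axis=1))
```

Each row of `V` is one filter; reshaping it to 8x8 and plotting all rows on a grid gives the kind of figure shown by `show_dico` above.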

In summary

In this notebook, we have replicated the classical SparseNet algorithm of Olshausen on a set of natural images. However, the learned dictionaries are qualitatively not the same as those from the original paper, and this is certainly due to the lack of control over the competition between filters during the learning phase.

What differs in this implementation from the original algorithm is mainly the way the norm of the filters is controlled. Here, sklearn simply assumes that $\| V_k \|_2 = 1$, $\forall k$ (with $0 \leq k < n_{\text{components}}$). We will see that this may be a problem.

Let's now try to do that in a new notebook.

As an extension, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension was published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl).