Posts about open-science

extending datasets in PyTorch

PyTorch is a great library for machine learning. In a few lines of code, you can retrieve a dataset, define your model, add a cost function and then train it. It feels quite magical to copy and paste code from the internet and get the LeNet network working in a few seconds, reaching more than 98% accuracy.

However, it can sometimes be tedious to extend existing objects, so here I will explore some ways to define the right dataset for your application. In particular, I will modify the call to a standard dataset (MNIST) to place the digits at random positions in a larger image.
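
As a teaser, here is a minimal sketch of the idea: wrapping the torchvision MNIST dataset in a custom Dataset. The class name, the 128-pixel canvas and the uniform placement are my own illustration, not the code from the post:

    import torch
    from torchvision import datasets, transforms

    class RandomlyPlacedMNIST(torch.utils.data.Dataset):
        """Paste each 28x28 MNIST digit at a random position in a larger canvas."""
        def __init__(self, root='data', size=128, train=True):
            self.size = size
            self.mnist = datasets.MNIST(root, train=train, download=True,
                                        transform=transforms.ToTensor())

        def __len__(self):
            return len(self.mnist)

        def __getitem__(self, idx):
            digit, label = self.mnist[idx]          # digit: (1, 28, 28) tensor
            canvas = torch.zeros(1, self.size, self.size)
            # draw a random top-left corner such that the digit fits entirely
            i = torch.randint(0, self.size - 28 + 1, (1,)).item()
            j = torch.randint(0, self.size - 28 + 1, (1,)).item()
            canvas[:, i:i + 28, j:j + 28] = digit
            return canvas, label

A DataLoader can then consume this dataset exactly like the original MNIST one.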

Read more…

predictive coding of variable motion

In some recent modeling work:

Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726–2750, 2012. http://invibe.net/LaurentPerrinet/Publications/Perrinet12pred

we study the role of transport in modifying our perception of motion. Here, we test what happens when we change the amount of noise in the stimulus.

In this script, the predictive coding is done using the MotionParticles package, applied to a MotionClouds stimulus (http://motionclouds.invibe.net/) within a disk aperture.

Read more…

accessing the data from a pupil recording

I am experimenting with the Pupil eye tracker and could set it up (almost) smoothly on macOS. There is excellent documentation, and my first goal was simply to record raw data and extract eye positions.

In [1]:
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="http://blog.invibe.net/files/2017-12-13_pupil%20test_480.mp4" width=61.8%/></center>')
Out[1]:

This video shows the world view (cranio-centric, from a head-mounted camera fixed on the frame) with the position of the (right) eye overlaid, while I am configuring a text box. You see the eye fixating on the screen, then jumping elsewhere on the screen (saccades) or to the keyboard / hands. Note that the screen itself shows the world view, so that this generates a self-recurrent pattern.

For this, I could use the capture script, and I will demonstrate here how to extract the raw data in a few lines of Python code.
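
For a taste, here is a minimal sketch, assuming the recording was exported with Pupil Player; the export path and the column names are from my own setup and may differ across Pupil versions:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Pupil Player writes, among others, pupil_positions.csv into the export folder
    df = pd.read_csv('recordings/2017_12_13/exports/000/pupil_positions.csv')

    # normalized (0..1) eye position over time; in recent Pupil versions the
    # time column is named 'pupil_timestamp' instead of 'timestamp'
    plt.plot(df['timestamp'], df['norm_pos_x'], label='x')
    plt.plot(df['timestamp'], df['norm_pos_y'], label='y')
    plt.xlabel('time (s)')
    plt.ylabel('normalized eye position')
    plt.legend()
    plt.show()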

Read more…

MEUL with a non-parametric homeostasis

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ). Compared to other posts, such as this previous post, we improve the code so that it does not depend on any parameter (namely the C parameter of the rescaling function). For that, we will use a non-parametric approach based on cumulative histograms.
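
To give the flavour of this non-parametric approach, here is a small numpy sketch of how cumulative histograms can equalize the output of filters with different gains; this is an illustration under my own toy assumptions, not the actual SHL_scripts code:

    import numpy as np

    def empirical_cdfs(activities, n_bins=100):
        """One cumulative histogram (empirical CDF) per filter (rows of activities)."""
        edges = np.linspace(activities.min(), activities.max(), n_bins + 1)
        cdfs = np.array([np.cumsum(np.histogram(act, bins=edges)[0])
                         for act in activities], dtype=float)
        return cdfs / cdfs[:, -1:], edges  # normalize so each CDF ends at 1

    def rescale(value, cdf, edges):
        """Map a raw coefficient to its quantile under the filter's own statistics,
        so that filters with different gains compete on the same scale."""
        return np.interp(value, edges[1:], cdf)

    # toy example: 5 filters with very different gains
    rng = np.random.default_rng(42)
    activities = rng.exponential(scale=np.arange(1, 6)[:, None], size=(5, 1000))
    cdfs, edges = empirical_cdfs(activities)
    # the same raw value maps to a high quantile for a weak filter...
    print(rescale(2., cdfs[0], edges))
    # ... and to a lower quantile for a strong one
    print(rescale(2., cdfs[4], edges))

A filter that fires strongly everywhere is mapped onto the same quantile scale as a weak one, which is what keeps the winner-take-all competition fair.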

This is joint work with Victor Boutin and Angelo Francisioni. See also the other posts on unsupervised learning.

Read more…

Designing an A0 poster using matplotlib

Poster GDR Vision

This poster was presented at a vision workshop in Lille; check out http://invibe.net/LaurentPerrinet/Publications/Perrinet17gdr

Apart from the content (which is in French), which recaps some previous work between art and science, this post demonstrates how to generate an A0 poster programmatically. In particular, we will use matplotlib and some quickly forged functions to ease the formatting.
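
The basic trick is simply to create a figure at the physical size of an A0 sheet and to place every element in figure-relative coordinates; a minimal sketch (the title, panel and file name are placeholders, not the helper functions of the post):

    import matplotlib.pyplot as plt

    # A0 paper is 841 x 1189 mm, i.e. about 33.1 x 46.8 inches
    inches_per_mm = 1 / 25.4
    fig = plt.figure(figsize=(841 * inches_per_mm, 1189 * inches_per_mm))

    # place a title and a panel using figure-relative coordinates
    fig.text(0.5, 0.97, 'My poster title', ha='center', va='top', fontsize=72)
    ax = fig.add_axes([0.1, 0.55, 0.35, 0.3])   # [left, bottom, width, height]
    ax.plot([0, 1], [0, 1])
    ax.set_title('Panel A')

    # vector output scales losslessly to the print size
    fig.savefig('poster.pdf')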

Read more…

The urn game

During the lab visit of a brilliant 10th-grade student (hi Lena!), we invented a game together: the urn game. The principle is simple: you must guess the colour of the ball drawn from an urn containing as many red balls as black ones, and do so as early as possible. More precisely, the rules are:

  • We have a set of balls, half of them red, the other half black (so an even number of balls, which we will call $N$, say $N=8$).
  • They are in an opaque urn, so they cannot be seen unless drawn one by one (without replacement). You may draw as many balls as you like to observe them.
  • The goal is to guess the colour of the ball about to be drawn. If you win (the colour was predicted correctly), you gain as many points as the number of balls that were in the urn at the moment of the decision. Otherwise, you lose as many points as you would have gained!
  • In the long run, the strategy of the game is to decide the best moment at which you are ready to guess the colour of the next ball, so as to win as many points as possible.

We first created this game with the Scratch programming language at https://scratch.mit.edu/projects/165806365/:

Here, we will try to analyse it in more detail.
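
As a quick check of the rules, here is a Monte-Carlo sketch in Python; the threshold strategy (wait until the colours remaining in the urn are sufficiently unbalanced) is my own illustration, not the analysis of the post:

    import random

    def play(N=8, threshold=2):
        """One game: observe balls until the colours remaining in the urn are
        unbalanced by at least `threshold`, then guess the majority colour."""
        urn = ['red'] * (N // 2) + ['black'] * (N // 2)
        random.shuffle(urn)
        while len(urn) > 1:
            n_red = urn.count('red')
            if abs(2 * n_red - len(urn)) >= threshold:   # |#red - #black| remaining
                break
            urn.pop()                      # draw one ball, without replacement
        n_red = urn.count('red')
        guess = 'red' if n_red >= len(urn) - n_red else 'black'
        stake = len(urn)                   # points at stake at decision time
        return stake if urn[-1] == guess else -stake

    # average score of each threshold strategy over many games
    for threshold in range(5):
        scores = [play(threshold=threshold) for _ in range(10000)]
        print(threshold, sum(scores) / len(scores))

With $N=8$, guessing immediately (threshold 0) averages zero points; waiting for an imbalance trades a smaller stake for a better chance of being right.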

Read more…

testing COMPs-fastPcum_scripted

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ). Compared to the previous post, we integrated the faster code into https://github.com/bicv/SHL_scripts.

See also the other posts on unsupervised learning.

This is joint work with Victor Boutin.

Read more…

testing COMPs-fastPcum

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ). Compared to the previous post, we optimize the code to make it faster.

See also the other posts on unsupervised learning.

This is joint work with Victor Boutin.

Read more…

testing COMPs-Pcum

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ). In particular, we will show how one can build, based on the activity of each filter, the non-linear functions which implement homeostasis.

See also the other posts on unsupervised learning.

This is joint work with Victor Boutin.

Read more…

Extending Olshausen's classical SparseNet

  • In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse. In particular, we saw that in order to optimize competition, it is important to control cooperation, and we implemented a heuristic to do just this.

  • In this notebook, we provide an extension to the SparseNet algorithm. We will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ):

@article{Perrinet10shl,
    Title = {Role of homeostasis in learning sparse representations},
    Author = {Perrinet, Laurent U.},
    Journal = {Neural Computation},
    Year = {2010},
    Doi = {10.1162/neco.2010.05-08-795},
    Keywords = {Neural population coding, Unsupervised learning, Statistics of natural images, Simple cell receptive fields, Sparse Hebbian Learning, Adaptive Matching Pursuit, Cooperative Homeostasis, Competition-Optimized Matching Pursuit},
    Month = {July},
    Number = {7},
    Url = {http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl},
    Volume = {22},
}

This is joint work with Victor Boutin.

Read more…

Reproducing Olshausen's classical SparseNet (part 3)

In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method with respect to the coding parameters, but also the importance of homeostasis to obtain an efficient set of filters:

  • first, whatever the learning rate, the convergence is not complete without homeostasis;
  • second, we achieve better convergence for similar learning rates, and over a certain range of learning rates for the homeostasis;
  • third, the smoothing parameter alpha_homeo has to be properly set to achieve a good convergence;
  • last, this homeostatic rule works with the different variants of sparse coding.

See also the other posts on unsupervised learning.

This is joint work with Victor Boutin.

Read more…

Reproducing Olshausen's classical SparseNet (part 2)

  • In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse.

  • However, the dictionaries are qualitatively not the same as those from the original paper, and this is certainly due to the lack of control of the competition during the learning phase.

  • Herein, we re-implement the cooperation mechanism in the dictionary learning routine - this will then be proposed for inclusion in the main code.

This is joint work with Victor Boutin.

Read more…

Reproducing Olshausen's classical SparseNet (part 1)

  • This notebook tries to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse.

  • The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn, and our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.

  • All these code bits are gathered in the SHL scripts repository (where you will also find some older Matlab code). You may install it using

    pip install git+https://github.com/bicv/SHL_scripts
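
For comparison, here is what the baseline sklearn machinery looks like: a minimal sketch of sparse dictionary learning on image patches, in the spirit of the sklearn denoising example (the sample image, patch size and number of atoms are arbitrary choices). The homeostatic ingredient discussed in this post is precisely what is missing here:

    import numpy as np
    from sklearn.datasets import load_sample_image
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # minimal preprocessing: extract and center 12x12 grayscale patches
    img = load_sample_image('china.jpg').mean(axis=-1) / 255.
    patches = extract_patches_2d(img, (12, 12), max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)

    # sparse coding with an l1 penalty, as in the sklearn denoising example
    dico = MiniBatchDictionaryLearning(n_components=100, alpha=1., random_state=0)
    dico.fit(X)
    print(dico.components_.shape)   # (100, 144): one row per learned filter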

Read more…

Bogacz (2017) A tutorial on free-energy

I enjoyed reading "A tutorial on the free-energy framework for modelling perception and learning" by Rafal Bogacz, which is freely available here. In particular, the author encourages readers to replicate the results in the paper. He gives solutions himself in Matlab, so I had to do the same in Python, all within a notebook...
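
To give an idea of the flavour of these exercises, here is a minimal Python sketch of the first simulation as I remember it from the tutorial: a single neuron inferring a cause phi from one observation by gradient ascent on the negative free energy. The exact parameter values should be checked against the paper:

    # prior on the cause v (food size): Gaussian with mean v_p and variance Sigma_p
    v_p, Sigma_p = 3., 1.
    # observation u (light intensity) is g(v) + noise, with g(v) = v**2
    u, Sigma_u = 2., 1.

    phi, dt = v_p, 0.01        # start the estimate at the prior mean
    for _ in range(500):
        eps_p = (phi - v_p) / Sigma_p      # prediction error on the prior
        eps_u = (u - phi ** 2) / Sigma_u   # prediction error on the observation
        phi += dt * (-eps_p + eps_u * 2 * phi)   # since g'(phi) = 2 * phi
    print(phi)   # settles near the posterior mode (about 1.6 with these values)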

Read more…

Resizing a bunch of files using the command-line interface

generating databases

A set of bash commands to resize images to a fixed size.

Problem statement: we have a set of images with heterogeneous sizes and we want to homogenize the database to avoid problems when classifying them. Solution: ImageMagick.

We first identify the size and type of the images in the database. The database is a collection of folders, each containing a collection of files. We thus do a nested recursive loop:
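
The post does this in plain bash; as a sketch of the same nested loop, here is a Python equivalent using subprocess, assuming ImageMagick's identify and mogrify are on the PATH (the folder name and the 256x256 target size are placeholders):

    import subprocess
    from pathlib import Path

    root = Path('database')   # hypothetical folder of class subfolders

    # nested loop over folders and files, as in the bash version from the post
    for image in sorted(p for p in root.rglob('*') if p.is_file()):
        # `identify` reports width, height and format of each image
        info = subprocess.run(['identify', '-format', '%w %h %m', str(image)],
                              capture_output=True, text=True).stdout
        print(image, info)
        # `mogrify -resize 256x256!` rewrites the file in place at a forced
        # 256x256 geometry (the `!` ignores the aspect ratio)
        subprocess.run(['mogrify', '-resize', '256x256!', str(image)])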

Read more…

Finding extremal values in an nd-array

Sometimes, you need to pick the $N$ extremal values in a multi-dimensional matrix.

Let's suppose it is represented as an nd-array (here, I further suppose you are using the numpy library from the Python language). Finding extremal values is easy with argsort, but this function operates on 1D vectors... Juggling with indices is sometimes not such an easy task, but luckily, we have the unravel_index function.

Let's unwrap an easy solution combining these functions:
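
For instance, something along these lines, picking the $N$ largest entries of a random array:

    import numpy as np

    a = np.random.rand(4, 5, 6)
    N = 3

    # argsort on the flattened array (axis=None), keep the N largest entries
    flat_indices = np.argsort(a, axis=None)[-N:]
    # map the flat indices back to nd coordinates
    coords = np.unravel_index(flat_indices, a.shape)
    for idx in zip(*coords):
        print(idx, a[idx])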

Read more…

Saving and displaying movies and dynamic figures

It is insanely useful to create movies to illustrate a talk, a blog post, or just to include in a notebook:

In [1]:
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/2016-11-15_noise.mp4" width=61.8%/></center>')
Out[1]:

For years, I have used a custom-made solution built around saving single frames and then calling ffmpeg to assemble these files into a movie. That function (called anim_save) had to be maintained across different libraries to reflect new needs (moving to the WEBM and MP4 formats, for instance). That made the code longer than necessary, and it had no place in a scientific library.

Here, I show how to use the animation library from matplotlib to replace it.
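
The pattern boils down to a few lines; a minimal sketch, where a sliding sine wave stands in for the actual stimulus, and saving to MP4 assumes ffmpeg is installed:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import animation

    x = np.linspace(0, 2 * np.pi, 100)
    fig, ax = plt.subplots()
    line, = ax.plot(x, np.sin(x))

    def update(frame):
        # shift the phase of the wave at each of the 50 frames
        line.set_ydata(np.sin(x + 2 * np.pi * frame / 50))
        return line,

    anim = animation.FuncAnimation(fig, update, frames=50, interval=40, blit=True)
    anim.save('wave.mp4', writer='ffmpeg')  # requires ffmpeg on the PATH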

Read more…

Predictive coding of motion in an aperture

After reading the paper http://www.jneurosci.org/content/34/37/12601.full by Helena X. Wang, Elisha P. Merriam, Jeremy Freeman, and David J. Heeger (The Journal of Neuroscience, 10 September 2014, 34(37):12601-12615; doi: 10.1523/JNEUROSCI.1034-14.2014), I was interested in testing the hypothesis they raise in the discussion section:

The aperture-inward bias in V1–V3 may reflect spatial interactions between visual motion signals along the path of motion (Raemaekers et al., 2009; Schellekens et al., 2013). Neural responses might have been suppressed when the stimulus could be predicted from the responses of neighboring neurons nearer the location of motion origin, a form of predictive coding (Rao and Ballard, 1999; Lee and Mumford, 2003). Under this hypothesis, spatial interactions between neurons depend on both stimulus motion direction and the neuron's relative RF locations, but the neurons themselves need not be direction selective. Perhaps consistent with this hypothesis, psychophysical sensitivity is enhanced at locations further along the path of motion than at motion origin (van Doorn and Koenderink, 1984; Verghese et al., 1999).

Concerning the origins of aperture-inward bias, I want to test an alternative possibility. In some recent modeling work:

Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726–2750, 2012. http://invibe.net/LaurentPerrinet/Publications/Perrinet12pred

I was surprised to observe a similar behavior: the trailing edge exhibited a stronger activation (i.e. a higher precision, revealed by a lower variance in this probabilistic model), while I would have intuitively thought that the leading edge would be more informative. In retrospect, it makes sense in a motion-based prediction algorithm, as information from the leading edge may propagate in more directions (135° for a 45° bar) than from the trailing edge (45°, that is, a factor of 3 here). While we made this prediction, we did not have any evidence for it.

In this script, the predictive coding is done using the MotionParticles package, applied to a MotionClouds stimulus (http://motionclouds.invibe.net/) within a disk aperture.

Read more…

compiling notebooks into a report

For a master's project in computational neuroscience, we adopted a quite novel workflow covering all the steps, from learning the basics to the writing of the final thesis. Though we were flexible in our method during the 6 months of this work, a simple workflow emerged, which I describe here.

Compiling a set of notebooks into a LaTeX document.
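
The core of the pipeline can be sketched in a few lines with nbconvert's Python API; the notebook file names below are hypothetical:

    import nbformat
    from nbconvert import LatexExporter

    exporter = LatexExporter()
    # hypothetical chapter notebooks; each is converted to a standalone .tex file
    for name in ['introduction', 'methods', 'results']:
        nb = nbformat.read(name + '.ipynb', as_version=4)
        body, resources = exporter.from_notebook_node(nb)
        with open(name + '.tex', 'w') as f:
            f.write(body)

The generated .tex files (and the figures collected in resources) can then be stitched into a master document, for instance by a makefile.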

Read more…