2016-07-16 Predictive coding of motion in an aperture

After reading the paper http://www.jneurosci.org/content/34/37/12601.full by Helena X. Wang, Elisha P. Merriam, Jeremy Freeman, and David J. Heeger (The Journal of Neuroscience, 10 September 2014, 34(37): 12601-12615; doi: 10.1523/JNEUROSCI.1034-14.2014), I was interested in testing the hypothesis they raise in the discussion section:

The aperture-inward bias in V1–V3 may reflect spatial interactions between visual motion signals along the path of motion (Raemaekers et al., 2009; Schellekens et al., 2013). Neural responses might have been suppressed when the stimulus could be predicted from the responses of neighboring neurons nearer the location of motion origin, a form of predictive coding (Rao and Ballard, 1999; Lee and Mumford, 2003). Under this hypothesis, spatial interactions between neurons depend on both stimulus motion direction and the neuron's relative RF locations, but the neurons themselves need not be direction selective. Perhaps consistent with this hypothesis, psychophysical sensitivity is enhanced at locations further along the path of motion than at motion origin (van Doorn and Koenderink, 1984; Verghese et al., 1999).

Concerning the origin of the aperture-inward bias, I want to test an alternative possibility. In some recent modeling work:

Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726-2750, 2012. http://invibe.net/LaurentPerrinet/Publications/Perrinet12pred

I was surprised to observe a similar behavior: the trailing edge exhibited a stronger activation (i.e. a higher precision, revealed by a lower variance in this probabilistic model), while I would have intuitively thought that the leading edge would be more informative. In retrospect, it makes sense in a motion-based prediction algorithm, as information from the leading edge may propagate in more directions (135° for a 45° bar) than from the trailing edge (45°, that is, a factor of 3 here). While we made this prediction, we had no evidence for it at the time.

In this script, the predictive coding is done using the MotionParticles package on a MotionCloud stimulus (http://motionclouds.invibe.net/) presented within a disk aperture.
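As a minimal illustration of the stimulus, here is how such a cloud can be generated and masked by a disk, assuming the standard MotionClouds API; the drift parameters and aperture radius are arbitrary choices for this sketch, and the actual script applies its own settings through MotionParticles:

.. code:: python

    import numpy as np
    import MotionClouds as mc  # http://motionclouds.invibe.net/

    # a rightward-drifting cloud on the package's default grid
    fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
    envelope = mc.envelope_gabor(fx, fy, ft, V_X=1., V_Y=0.)
    movie = mc.rectif(mc.random_cloud(envelope))  # values in [0, 1]

    # disk aperture as a plain numpy mask: outside the disk, mid-gray
    x, y = np.meshgrid(np.linspace(-1, 1, movie.shape[0]),
                       np.linspace(-1, 1, movie.shape[1]), indexing='ij')
    disk = x**2 + y**2 < .9**2  # arbitrary radius
    movie = .5 + (movie - .5) * disk[:, :, np.newaxis]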

Read more…

2016-06-25 compiling notebooks into a report

For a master's project in computational neuroscience, we adopted a quite novel workflow covering all the steps, from learning the basics to the writing of the final thesis. Though we stayed flexible in our method during the 6 months of this work, a simple workflow emerged, which I describe here.

Compiling a set of notebooks into a LaTeX document.
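A sketch of what this compilation step can look like with nbconvert's Python API; the notebook filenames are placeholders:

.. code:: python

    from nbconvert import LatexExporter

    # hypothetical list of chapter notebooks, in reading order
    notebooks = ['01-introduction.ipynb', '02-methods.ipynb', '03-results.ipynb']

    exporter = LatexExporter()
    for nb in notebooks:
        # each notebook becomes a standalone .tex file next to it
        latex, resources = exporter.from_filename(nb)
        with open(nb.replace('.ipynb', '.tex'), 'w') as f:
            f.write(latex)

Each file produced this way is a standalone LaTeX document; to stitch chapters into a single thesis, one would rather use a custom nbconvert template that emits body-only fragments and \input them from a master file.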

Read more…

2016-01-20 Using Scratch to illustrate the Flash-Lag Effect

Scratch (see https://scratch.mit.edu/) is a programming language aimed at introducing coding literacy in schools and education. Yet you can implement even complex algorithms and games with it. It is visual, multi-platform and, critically, open-source. Also, the website encourages sharing code, and it is very easy to "fork" an existing project to change details or improve it. Openness at its best!

During the visit of a 14-year-old schoolboy to the lab, we used it to build a simple psychophysics experiment, available at https://scratch.mit.edu/projects/92044597/ :
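The project itself is best explored on the Scratch website, but the underlying stimulus logic is tiny; here is a rough Python transcription of it, with all timings and positions made up for illustration:

.. code:: python

    # rough transcription of the flash-lag stimulus logic; numbers are made up
    n_frames, speed, flash_frame = 100, 4, 50

    for t in range(n_frames):
        x_moving = -200 + speed * t  # bar sweeping from left to right
        if t == flash_frame:
            # a second bar is flashed, physically aligned with the moving one
            print(f'frame {t}: flash at x={x_moving}, moving bar at x={x_moving}')
    # although the two bars are aligned on the flash frame, observers typically
    # perceive the flashed bar as lagging behind the moving one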

Read more…

2016-01-19 élasticité, lattices V1

The dynamic installation Elasticité acts as a filter and generates new, multiplied spaces, like a quasi-infinite stacking of horizons. By a principle of reflection, the piece absorbs the image of its environment and accumulates points of view; its permanent motion continually requalifies what is seen and heard.

We will now use elastic forces to coordinate the dynamics of the blades within the lattice.
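As a minimal sketch of what such elastic forces can look like, assuming each blade is reduced to a single angle coupled to its two lattice neighbours by a damped spring (all constants are arbitrary):

.. code:: python

    import numpy as np

    # hypothetical ring of blade angles, coupled to their neighbours by springs
    N, k, damping, dt = 25, .5, .05, .1
    theta = np.random.uniform(-np.pi, np.pi, N)  # one angle per blade
    omega = np.zeros(N)                          # angular velocities

    for step in range(1000):
        # elastic force: a discrete Laplacian pulling each blade toward
        # the average angle of its two neighbours (periodic boundary)
        force = k * (np.roll(theta, 1) - 2 * theta + np.roll(theta, -1))
        omega += dt * (force - damping * omega)
        theta += dt * omega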

Read more…

2016-01-18 bootstrapping posts for élasticité

The dynamic installation Elasticité acts as a filter and generates new, multiplied spaces, like a quasi-infinite stacking of horizons. By a principle of reflection, the piece absorbs the image of its environment and accumulates points of view; its permanent motion continually requalifies what is seen and heard.

.. media:: http://vimeo.com/150813922

This meta-post manages publication on http://blog.invibe.net.
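The `.. media::` directive above suggests the blog is built with Nikola, whose posts are reStructuredText files with a metadata header; here is a minimal sketch of how such a post file could be generated programmatically (metadata values are placeholders, and the actual meta-post may proceed differently):

.. code:: python

    import datetime

    # placeholder metadata for one generated post
    title, slug = 'elasticite trames V1', 'elasticite-trames-V1'
    date = datetime.date(2016, 1, 19).isoformat()

    header = '\n'.join([
        f'.. title: {title}',
        f'.. slug: {slug}',
        f'.. date: {date}',
        '.. tags: elasticite',
        '',
    ])
    with open(f'posts/{slug}.rst', 'w') as f:
        f.write(header + '\nPost body goes here.\n')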

Read more…

2015-12-11 Reproducing Olshausen's classical SparseNet (part 4)

In this notebook, we write an example script for the sklearn library showing the improvement in the convergence of dictionary learning induced by the introduction of Olshausen's homeostasis.
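For context, here is the stock sklearn baseline such a script builds on, dictionary learning on image patches; the homeostasis mechanism itself is the subject of the notebook and is not part of sklearn's estimator. Image, patch size and hyperparameters are arbitrary choices for this sketch:

.. code:: python

    from sklearn.datasets import load_sample_image
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # 12x12 grayscale patches from a stock sklearn image, one patch per row
    img = load_sample_image('china.jpg').mean(axis=2) / 255.
    patches = extract_patches_2d(img, (12, 12), max_patches=10000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)  # center each patch

    # stock estimator, without homeostasis; the notebook's point is that adding
    # Olshausen's homeostatic rule on top improves the convergence of this step
    # (n_iter is named max_iter in recent sklearn versions)
    dico = MiniBatchDictionaryLearning(n_components=196, alpha=1., n_iter=500)
    D = dico.fit(X).components_  # learned dictionary, one atom per row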

See also:

Read more…

2015-12-11 Reproducing Olshausen's classical SparseNet (part 3)

In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method with respect to the coding parameters, but also the importance of homeostasis for obtaining an efficient set of filters:

  • first, whatever the learning rate, the convergence is not complete without homeostasis;
  • second, we achieve better convergence for similar learning rates, and over a certain range of learning rates for the homeostasis;
  • third, the smoothing parameter alpha_homeo has to be set properly to achieve a good convergence (a schematic version of this update is sketched after this list);
  • last, this homeostatic rule works with the different variants of sparse coding.
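To make the role of alpha_homeo concrete, here is a schematic version of such a homeostatic update, written from the description above rather than copied from the notebook: the use of each atom is tracked by an exponential moving average whose time scale is set by alpha_homeo, and gains are nudged toward uniform use.

.. code:: python

    import numpy as np

    def homeo_update(gain, mean_use, coeffs, alpha_homeo=.02, eta=.01):
        """Schematic homeostatic gain update (not the notebook's exact rule)."""
        # alpha_homeo sets the time scale of the exponential moving average
        # tracking how often each dictionary atom is selected
        mean_use = (1 - alpha_homeo) * mean_use \
                 + alpha_homeo * (coeffs != 0).mean(axis=0)
        # nudge gains so that over-used atoms are penalized in the next
        # sparse coding step and under-used atoms get a chance to learn
        gain = gain * np.exp(eta * (mean_use.mean() - mean_use))
        return gain, mean_use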

See also:

Read more…