## 2017-06-15 The urn game

During the visit to our lab of a brilliant high-school student (hi Lena!), we invented a game: the urn game. The principle is simple: we have an even number $N$ of balls (say $8$); half of them are red, the other half black. They sit in an opaque urn, so we cannot see them unless we draw them one by one (without putting them back into the urn). We may draw as many balls as we want to observe them, and the goal is to decide the moment at which we are ready to guess the colour of the next ball we will draw. If we win (we predicted the colour correctly), we score as many points as the number of balls that were in the urn at the moment of the decision. Otherwise, we lose as many points as we would have won!

For example, I choose to look at one ball (red), then a second one (red again), and finally a third one (black). I decide to bet on a black one, but a red one comes out: I have lost $8-3=5$ points.

This game looks simple (I am curious to know whether it already exists; I would be surprised if it did not), but its mathematical analysis involves combinations whose number grows quickly: what is the optimal strategy? That is what we try to work out here.
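
As a first handle on the problem, here is a minimal dynamic-programming sketch in Python (the function name `value` and the encoding of a state as the counts of remaining red and black balls are our own choices): at every state, the game is worth the better of betting now or looking at one more ball.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def value(r, b):
        """Expected score under optimal play, with r red and b black balls left."""
        if r + b == 0:
            return 0.0
        # betting now on the majority colour wins r + b points with probability
        # max(r, b) / (r + b) and loses them otherwise: expected gain |r - b|
        bet = abs(r - b)
        # looking at one more ball first (which costs nothing) leads to one of
        # two smaller states, weighted by the probability of each colour
        look = (r * value(r - 1, b) + b * value(r, b - 1)) / (r + b)
        return max(bet, look)

    print(value(4, 4))  # expected score of the optimal strategy for N = 8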

Note: this game may look trivial, but it is revealing of our ability to make decisions given what we know and what could happen: this is the basis of decision theory. This theory is notably essential for understanding normal or pathological functioning... and for diagnosing it.

## 2017-03-29 testing COMPs-fastPcum_scripted

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl). Compared to the previous post, we integrated the faster code into https://github.com/bicv/SHL_scripts.

This is joint work with Victor Boutin.

## 2017-03-29 testing COMPs-fastPcum

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl). Compared to the previous post, we optimized the code to make it faster.

This is joint work with Victor Boutin.

## 2017-03-29 testing COMPs-Pcum

In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl). In particular, we will show how one can build, from the activity of each filter, the non-linear functions which implement homeostasis.
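
Here is a minimal numpy sketch of this idea (our own simplification, not the actual code from SHL_scripts): each filter's raw coefficient is passed through that filter's empirical cumulative distribution (the Pcum), so that every filter becomes equally likely to win the winner-take-all competition.

    import numpy as np

    rng = np.random.default_rng(42)
    n_filters, n_samples, n_quant = 4, 10000, 1000

    # fake sparse coding coefficients: each filter has a different activity
    # scale, so a raw winner-take-all favours the most active filter
    scales = np.array([0.5, 1.0, 2.0, 4.0])
    coeffs = rng.exponential(scales, size=(n_samples, n_filters))

    # empirical cumulative distribution (Pcum) of each filter's coefficients
    edges = np.linspace(0, coeffs.max(), n_quant + 1)
    Pcum = np.stack([np.cumsum(np.histogram(coeffs[:, i], bins=edges)[0])
                     / n_samples for i in range(n_filters)])

    def z_score(a):
        # map raw coefficients through each filter's own Pcum: the resulting
        # scores are (approximately) uniform, whatever the filter's activity
        idx = np.clip(np.searchsorted(edges, a) - 1, 0, n_quant - 1)
        return Pcum[np.arange(n_filters), idx]

    raw_wins = np.bincount(coeffs.argmax(axis=1), minlength=n_filters)
    hom_wins = np.bincount(np.array([z_score(c).argmax() for c in coeffs]),
                           minlength=n_filters)
    print(raw_wins / n_samples)  # the most active filter wins most of the time
    print(hom_wins / n_samples)  # wins are spread roughly evenly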

This is joint work with Victor Boutin.

## 2017-03-27 Extending Olshausen's classical SparseNet

• In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It makes it possible to efficiently code natural image patches by constraining the code to be sparse. In particular, we saw that in order to optimize competition, it is important to control cooperation, and we implemented a heuristic to do just this.

• In this notebook, we provide an extension to the SparseNet algorithm. We will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl):

    @article{Perrinet10shl,
        Title = {Role of homeostasis in learning sparse representations},
        Author = {Perrinet, Laurent U.},
        Journal = {Neural Computation},
        Year = {2010},
        Doi = {10.1162/neco.2010.05-08-795},
        Keywords = {Neural population coding, Unsupervised learning, Statistics of natural images, Simple cell receptive fields, Sparse Hebbian Learning, Adaptive Matching Pursuit, Cooperative Homeostasis, Competition-Optimized Matching Pursuit},
        Month = {July},
        Number = {7},
        Url = {http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl},
        Volume = {22},
    }


This is joint work with Victor Boutin.

## 2017-03-16 Reproducing Olshausen's classical SparseNet (part 3)

In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method with respect to the coding parameters, but also the importance of homeostasis for obtaining an efficient set of filters:

• first, whatever the learning rate, convergence is not complete without homeostasis;
• second, with homeostasis we achieve a better convergence for similar learning rates, and this holds over a certain range of homeostasis learning rates;
• third, the smoothing parameter alpha_homeo has to be set properly to achieve a good convergence;
• last, this homeostatic rule works with the different variants of sparse coding.

This is joint work with Victor Boutin.

## 2017-03-15 Reproducing Olshausen's classical SparseNet (part 2)

• In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It makes it possible to efficiently code natural image patches by constraining the code to be sparse.

• However, the dictionaries are qualitatively not the same as those from the original paper, and this is certainly due to the lack of control over the competition during the learning phase.

• Herein, we re-implement the cooperation mechanism in the dictionary learning routine; this will then be proposed for inclusion in the main code. A sketch of this mechanism is given below.
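
For reference, here is a minimal numpy sketch of such a cooperation heuristic, in the spirit of the gain control found in Olshausen's original sparsenet code (the function name, its signature and the parameter values are our own, not those of SHL_scripts):

    import numpy as np

    def homeo_gain_update(dico, coeffs, var_avg,
                          var_goal=0.1, var_eta=0.01, alpha=0.02):
        """One homeostatic update of the dictionary (n_atoms, n_pixels), given
        the sparse coefficients (n_samples, n_atoms) of the last batch."""
        # smoothed estimate of each atom's coefficient variance
        # (coefficients are assumed to be zero-mean)
        var_avg = (1 - var_eta) * var_avg + var_eta * (coeffs ** 2).mean(axis=0)
        # over-active atoms get their norm boosted, so that smaller
        # coefficients suffice to use them at the next coding step;
        # this levels out the competition between atoms
        gain = (var_avg / var_goal) ** alpha
        dico *= gain[:, np.newaxis]
        return dico, var_avg

Such an update would be called right after each sparse coding step, before the Hebbian update of the dictionary.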

This is joint work with Victor Boutin.

## 2017-03-14 Reproducing Olshausen's classical SparseNet (part 1)

• This notebook tries to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It makes it possible to efficiently code natural image patches by constraining the code to be sparse.

• The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn, and our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results (see the sketch at the end of this entry).

• All these code bits are gathered in the SHL scripts repository (where you will also find some older Matlab code). You may install it using

    pip install git+https://github.com/bicv/SHL_scripts
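
For a rough idea of what this sklearn baseline looks like, here is a minimal sketch (the image, patch size and parameter values are arbitrary choices of ours, not the exact settings of the notebook):

    import numpy as np
    from sklearn.datasets import load_sample_image
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # cut a grayscale image into 12x12 patches and remove each patch's mean
    img = load_sample_image('china.jpg').mean(axis=2) / 255.
    patches = extract_patches_2d(img, (12, 12), max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)

    # learn an overcomplete dictionary of 196 atoms with a sparsity penalty
    dico = MiniBatchDictionaryLearning(n_components=196, alpha=1., random_state=0)
    filters = dico.fit(X).components_  # one row per learned filter
    print(filters.shape)               # (196, 144)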

## Basics of probability theory

In the context of a course in Computational Neuroscience, I am teaching a basic introduction to probabilities, Bayes and the free-energy principle.

Let's learn to use probabilities in practice by generating some "synthetic data", that is, by using the computer's (pseudo-)random number generator.
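
As a starter, here is what a first step could look like with numpy (the bias of the coin and the number of tosses are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2017)

    # "synthetic data": N tosses of a biased coin with P(heads) = 0.7
    p_true, N = 0.7, 1000
    tosses = rng.random(N) < p_true

    # the empirical frequency converges to the true probability
    # (law of large numbers)
    running = np.cumsum(tosses) / np.arange(1, N + 1)
    print(running[[9, 99, 999]])  # estimates after 10, 100 and 1000 tosses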