# Posts about learning (old posts, page 1)

## 2017-03-14 Reproducing Olshausen's classical SparseNet (part 1)

• This notebook tries to reproduce the learning strategy of Bruno Olshausen's SparseNet algorithm, which efficiently codes natural image patches by constraining the code to be sparse.

• The underlying machinery uses dictionary learning similar to that of the image denoising example from sklearn; our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.

• All these code bits are regrouped in the SHL scripts repository (where you will also find some older MATLAB code). You may install it using

    pip install git+https://github.com/bicv/SHL_scripts
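As a minimal sketch of the kind of machinery involved, the snippet below runs sklearn's `MiniBatchDictionaryLearning` on synthetic data (random sparse mixtures standing in for whitened image patches). All parameter values here are illustrative assumptions, and this plain sklearn learner deliberately lacks the extra ingredient the post argues for:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(42)

# Synthetic stand-in for whitened 8x8 image patches, flattened to 64-vectors:
# each patch is a sparse combination of atoms from a random ground-truth dictionary.
n_atoms, patch_size, n_patches = 32, 64, 1000
D_true = rng.standard_normal((n_atoms, patch_size))
codes = rng.standard_normal((n_patches, n_atoms)) * (rng.random((n_patches, n_atoms)) < 0.1)
X = codes @ D_true

# Learn a dictionary by alternating sparse coding and dictionary updates.
dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                   batch_size=50, random_state=0)
dico.fit(X)
print(dico.components_.shape)  # one learned atom per row: (32, 64)
```

With real image patches, the rows of `components_` would be reshaped back to 8x8 filters for display.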

## 2018-03-26 Basics of probability theory

In the context of a course in Computational Neuroscience, I am teaching a basic introduction to probabilities, Bayes' rule and the free-energy principle.

Let's learn to use probabilities in practice by generating some "synthetic data", that is, by using the computer's random number generator (see the notebook 2018-03-26_cours-NeuroComp_FEP).
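A minimal example of that idea: draw synthetic data with a known probability and check that the empirical frequency recovers it (the bias value and sample size below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2018)

# Simulate 100,000 biased "coin flips" with a known probability of heads.
p_true = 0.3
samples = rng.random(100_000) < p_true

# The empirical frequency estimates the underlying probability.
p_hat = samples.mean()
print(p_hat)  # close to 0.3
```

By the law of large numbers, `p_hat` concentrates around `p_true` as the sample count grows; this is the basic trick used throughout the course's notebooks.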

## 2017-01-15 Bogacz (2017) A tutorial on free-energy

I enjoyed reading "A tutorial on the free-energy framework for modelling perception and learning" by Rafal Bogacz, which is freely available here. In particular, the author encourages readers to replicate the results in the paper. He gives solutions in MATLAB, so I had to do the same in Python, all within a notebook...
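As a flavour of what such a replication looks like, here is a small sketch in the spirit of the tutorial's prediction-error scheme: gradient ascent on the negative free energy to find the most likely hidden cause under a Gaussian prior and a nonlinear generative model g(phi) = phi². The numerical values are illustrative assumptions, not taken verbatim from the paper:

```python
# Estimate the hidden cause phi of a sensory observation u by descending
# prediction errors, following a free-energy-style gradient scheme.
v_p, sigma_p = 3.0, 1.0   # prior mean and variance of the cause (assumed values)
u, sigma_u = 2.0, 1.0     # observed input and its noise variance (assumed values)

phi = v_p                 # start the estimate at the prior mean
dt, n_steps = 0.01, 5000  # Euler integration step and duration
for _ in range(n_steps):
    eps_p = (phi - v_p) / sigma_p       # prior prediction error
    eps_u = (u - phi**2) / sigma_u      # sensory prediction error
    phi += dt * (eps_u * 2 * phi - eps_p)  # gradient of the negative free energy
print(round(phi, 2))  # settles near the MAP estimate
```

At convergence the two weighted prediction errors balance, which is exactly the fixed point of the dynamics described in the tutorial.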

## 2015-12-11 Reproducing Olshausen's classical SparseNet (part 3)

This is an old blog post; see the newer version in this post.

## 2015-12-11 Reproducing Olshausen's classical SparseNet (part 4)

This is an old blog post; see the newer version in this post and the following ones.

## 2015-05-12 Extending Olshausen's classical SparseNet

This is an old blog post; see the newer version in this post.