In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of the method with respect to the coding parameters, but also the importance of homeostasis for obtaining an efficient set of filters (a sketch of such a parameter scan is given after this list):
- first, whatever the learning rate, convergence is not complete without homeostasis;
- second, we achieve better convergence for similar learning rates and over a certain range of homeostasis learning rates;
- third, the smoothing parameter `alpha_homeo` has to be set properly to achieve good convergence;
- last, this homeostatic rule works with the different variants of sparse coding.
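
To make the protocol concrete, here is a minimal sketch of such a parameter scan. It uses scikit-learn's `MiniBatchDictionaryLearning` as a stand-in for the full implementation (which adds the homeostasis discussed below); the patch matrix `X`, the parameter grid, and the error measure are purely illustrative assumptions, not the notebook's actual code:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in for the whitened image patches prepared in the notebook:
# one 16x16 patch per row (random data, purely illustrative).
rng = np.random.RandomState(42)
X = rng.randn(1000, 16 * 16)

# Scan one coding parameter and record a crude convergence measure:
# the final reconstruction error of the learned dictionary.
for alpha in [0.5, 1.0, 2.0]:  # illustrative grid of sparsity penalties
    dico = MiniBatchDictionaryLearning(
        n_components=196,            # number of dictionary atoms (filters)
        alpha=alpha,                 # sparsity penalty of the coder
        batch_size=100,
        transform_algorithm='omp',   # one variant of sparse coding
        random_state=42,
    )
    code = dico.fit(X).transform(X)
    mse = np.mean((X - code @ dico.components_) ** 2)
    print(f'alpha={alpha:.1f} -> final MSE={mse:.4f}')
```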
See also:
- http://blog.invibe.net/posts/2017-03-14-reproducing-olshausens-classical-sparsenet.html for a description of how we implemented SparseNet using the scikit-learn package,
- http://blog.invibe.net/posts/2017-03-15-reproducing-olshausens-classical-sparsenet-part-2.html for a description of how we implemented the homeostasis rule (a toy version of such a gain-based rule is sketched below),
- In an extension, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which operates on a winner-take-all basis (competition). This extension was published as Perrinet, Neural Computation (2010) (see http://invibe.net/LaurentPerrinet/Publications/Perrinet10shl ).
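
For intuition, here is a toy sketch of a gain-based homeostasis step of the kind discussed in the post above; it is an assumption about the general scheme, not the exact rule of the implementation. Each atom's coefficient variance is tracked with an exponential moving average smoothed by `alpha_homeo`, and a multiplicative gain then rescales over-active atoms so that all atoms tend to be selected with similar probability:

```python
import numpy as np

def homeostasis_step(code, smoothed_var, gain,
                     alpha_homeo=0.02, var_goal=0.1, gain_rate=0.2):
    """One toy homeostatic update from a batch of sparse codes.

    code         : (n_samples, n_atoms) sparse coefficients for the batch
    smoothed_var : (n_atoms,) running estimate of each atom's variance
    gain         : (n_atoms,) multiplicative gain applied during coding
    All parameter values are illustrative.
    """
    batch_var = np.mean(code ** 2, axis=0)
    # Exponential smoothing of the activity estimate; alpha_homeo sets
    # the trade-off between reactivity and stability of the estimate.
    smoothed_var = (1 - alpha_homeo) * smoothed_var + alpha_homeo * batch_var
    # Atoms above the target variance get their gain reduced, atoms below
    # get boosted, equalizing how often each atom is selected by the coder.
    gain = gain * (var_goal / smoothed_var) ** gain_rate
    return smoothed_var, gain

# Usage: after each coding step, rescale the atoms' correlations (or norms)
# by `gain` before selection, e.g. inside a matching-pursuit loop.
n_atoms = 196
smoothed_var = 0.1 * np.ones(n_atoms)        # initialized at the variance goal
gain = np.ones(n_atoms)
code = 0.3 * np.random.randn(100, n_atoms)   # fake batch of sparse codes
smoothed_var, gain = homeostasis_step(code, smoothed_var, gain)
```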
This is joint work with Victor Boutin.