## 2013-03-25 using tabs in vim

• call vim to open more files

vim -p file1 file2 file3

• or open one in your editing session

:tabf <pattern>


(finds the file matching <pattern> in 'path' and opens it in a new tab)

• to switch tabs, use gt (next tab) or gT (previous tab)

## 2013-03-23 upgrading owncloud to 5.0.0

• backup

rsync -a owncloud/ owncloud_bkp$(date +"%Y%m%d")/


mkdir tmp; cd tmp

• remove old (except data and config)

rm -r COPYING-* AUTHORS README *php db_structure.xml themes search lib l10n ocs core settings files apps 3rdparty backup
tar -xjf owncloud-5.0.0.tar.bz2
rsync --inplace -rtv tmp/owncloud/ owncloud/

• clean-up

rm -fr tmp


Starting in Leopard (I believe), when you open a file downloaded from the web, OS X asks if you really mean it. While it is intended to stop malicious software, it is mostly a source of aggravation for me. There are some hints on working around it, but it turns out that you can disable it completely using a Terminal command:

defaults write com.apple.LaunchServices LSQuarantine -bool NO

After that, reboot and you should be set (inspired by http://hints.macworld.com/article.php?story=20091208050655947 ).

## 2013-03-13 shortcut to put the display to sleep

• ⇧⌃⏏ (shift+control+eject)

## 2013-03-06 setting some common conventions in BibDesk (to work with BibTeX, CiteULike)

• I found this set of conventions useful for collaborating:

• citekey: %a1%y%u0
• semi-automatic filing of papers: %f{Cite Key}%n0%e
• Topic = use the cite key of related papers
• Comment (instead of Annote) to put... comments (as Annote gets printed in any manuscript that uses the entry)
• to set these automatically, one may use these tricks:

defaults write edu.ucsd.cs.mmccrack.bibdesk "Cite Key Format" -string "%a1%y%u0"
defaults write edu.ucsd.cs.mmccrack.bibdesk BDSKLocalFileFormatKey -string "%f{Cite Key}%n0%e"

• this is included in this script @ https://github.com/laurentperrinet/dotfiles/blob/master/init/install_tex_live.sh

## 2013-02-20 removing files based on their date

Almost everything can be done with the find command:

• finding in the current directory (.) all files whose name contains a lock pattern (quote the glob so the shell does not expand it):

find . -name '*lock*'

• listing the details of each match:

find . -name '*lock*' -exec ls -l {} \;

• filtering files that were changed within the last day:

find . -name '*lock*' -mtime 0 -exec ls -l {} \;

• filtering files that were changed within the last 5 hours (note the leading minus: -mmin -300 means "less than 300 minutes ago"):

find . -name '*lock*' -mmin -300 -exec ls -l {} \;

• filtering files that are at least 5 hours old (-mmin +300 means "more than 300 minutes ago"):

find . -name '*lock*' -mmin +300 -exec ls -l {} \;

• removing lock files older than 5 hours:

find . -name '*lock*' -mmin +300 -exec rm -f {} \;
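To double-check the -mmin sign conventions, here is a self-contained rehearsal in a scratch directory (it assumes GNU touch for the -d option):

```shell
# One fresh lock file and one that is 10 hours old.
tmpdir=$(mktemp -d)
touch "$tmpdir/fresh.lock"
touch -d '10 hours ago' "$tmpdir/stale.lock"

# -mmin -300: modified less than 300 minutes ago (matches only fresh.lock)
recent=$(find "$tmpdir" -name '*lock*' -mmin -300 | wc -l)
# -mmin +300: modified more than 300 minutes ago (matches only stale.lock)
old=$(find "$tmpdir" -name '*lock*' -mmin +300 | wc -l)

echo "recent=$recent old=$old"
```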


## 2013-02-04 WP5 Year 2 report: contribution of CNRS-INT (Institut de Neurosciences de la Timone)

During the first year of BrainScaleS, we concentrated on disseminating our work on the role of motion-based prediction in motion detection. This led to a publication on the hypothesis that this prior expectation may explain phenomena otherwise explained by complex arrangements of mechanisms, namely that motion-based prediction is sufficient to solve the aperture problem (Perrinet and Masson, 2012). During the second year, we extended this hypothesis to other problems linked to the detection of motion. In particular, we focused on the case where the stimulus is transiently and unexpectedly blanked, a physiologically very relevant constraint occurring for instance during eye blinks. For this, we used the same theoretical framework, based on a Bayesian formulation and implemented using a particle filtering scheme, but with a different experimental protocol inspired by behavioral experiments conducted in the laboratory by CNRS-INT (Bogadhi, 2012). This is an important aspect, as it allows us to better understand the dynamics of the neural representation without sensory input and, more generally, the interaction of the sensory flow with an internal neural representation of the environment.

Figure: Role of motion-based prediction in motion extrapolation. We show the results of simulations of motion-based prediction compared to velocity-only prediction (no position prediction). Both models were tested on a dot moving along a straight trajectory but blanked (as indicated by the vertical bars) either in the early phase (top panel) or in the late phase (bottom panel). Compared to a control condition with no blank, the velocity-only system simply resumes its convergence to the veridical position at the reappearance of the dot, whereas with motion-based prediction the system catches up with the trajectory and recovers more quickly to the no-blank response.

Our results indicate that motion-based prediction is sufficient to predict eye responses during the blank and, more importantly, the dynamics of eye movements at the reappearance of the object. We compared simulations of the motion-based predictive framework on a dot moving along a straight trajectory that is transiently blanked, with a framework where prediction is limited to the velocity domain and is not anisotropically transported in the position domain. This comparison showed that at the reappearance of the object, instead of just resuming, the estimation of position and velocity in the motion-based prediction framework catches up with the error that may have accumulated during the blank (see Figure). We showed that this phenomenon is only present once the system has converged to a "tracking state", a phase transition that we first observed in our first study (Perrinet and Masson, 2012) and that we studied systematically here. Furthermore, we give some predictions as to how the oculomotor response should behave under the same protocol when the visual input is perturbed by noise, an experiment that has not yet been performed behaviorally and that could confirm the validity of our probabilistic framework. These results have been submitted for publication (Khoei, Masson and Perrinet).

From this novel step, we wish to further study the role of prediction by focusing on the neural implementation of these processes. Indeed, the framework that we have used so far is abstract and probabilistic. However, it is known to map well onto neural architectures such as those developed in BrainScaleS at the modeling and hardware levels. Such a venture was initiated in a collaboration between CNRS-INT and KTH by Bernhard Kaplan, and we could build a sufficiently large-scale network of spiking neurons that could efficiently implement such algorithms. Our plan is to resume this work under more generic conditions. One objective is to apply it to different modalities, for instance the somatosensory system (collaboration with Dan Shulz, CNRS-UNIC). We also wish to implement a model that more realistically accounts for the properties of the primary visual cortex of primates and the interactions this area may have with higher-order areas. A post-doctoral researcher was selected in Year 2 to work on that issue in Years 3 and 4. The ultimate goal of this work is a PyNN-compatible network that implements a realistic model of motion detection. This network will be tested using the synthetic textures that we have generated (Sanz et al., 2012, see WP4 Task 1) and that we recently used to disentangle the different read-outs that may be used by perception or action (Simoncini et al., 2012). The use of neuromorphic hardware will then be crucial to explore the configuration space of such large-scale networks implementing motion detection.

• Laurent U. Perrinet and Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012.
• Mina A. Khoei, Guillaume S. Masson and Laurent U. Perrinet. Role of motion-based prediction in motion extrapolation. Submitted.
• Paula S. Leon, Ivo Vanzetta, Guillaume S. Masson and Laurent U. Perrinet. Motion Clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception. Journal of Neurophysiology, 107(11):3217--3226, 2012.
• Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian and Guillaume S. Masson. More is not always better: dissociation between perception and action explained by adaptive gain control. Nature Neuroscience, 2012.

## 2013-02-02 How To Change Your Time Machine Backup Interval

tl;dr: sudo /usr/libexec/PlistBuddy -c 'set :LaunchEvents:com.apple.time:"Backup Interval":Interval 86400' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist

## method 1 : vim (or any editor)

• open the file for editing

sudo vim /System/Library/LaunchDaemons/com.apple.backupd-auto.plist

• replace 3600 (one hour) with 86400 (one day), then save and quit:

:%s/3600/86400/g
:wq!


## method 2 : plistbuddy

• you can do it in one line:

sudo /usr/libexec/PlistBuddy -c 'set  :LaunchEvents:com.apple.time:"Backup Interval":Interval 86400' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist

• this method uses the PlistBuddy utility to read / write the file; the utility can be used interactively

sudo /usr/libexec/PlistBuddy  /System/Library/LaunchDaemons/com.apple.backupd-auto.plist

• but also directly:

sudo /usr/libexec/PlistBuddy -c 'print' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
sudo /usr/libexec/PlistBuddy -c 'print :LaunchEvents:com.apple.time' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
sudo /usr/libexec/PlistBuddy -c 'print :LaunchEvents:com.apple.time:"Backup Interval"' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
sudo /usr/libexec/PlistBuddy -c 'print :LaunchEvents:com.apple.time:"Backup Interval":Interval' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
sudo /usr/libexec/PlistBuddy -c 'set :LaunchEvents:com.apple.time:"Backup Interval":Interval 3600' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist


• reboot.
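If you want to rehearse the substitution before touching the real LaunchDaemon, you can run it on a throwaway copy; a sketch (the file below is a hand-made stand-in for the relevant fragment, not the actual plist):

```shell
# Minimal stand-in for the relevant part of com.apple.backupd-auto.plist.
cat > /tmp/backupd-demo.plist <<'EOF'
<key>Interval</key>
<integer>3600</integer>
EOF

# Swap hourly (3600 s) for daily (86400 s); sed -i.bak keeps a backup copy.
sed -i.bak 's|<integer>3600</integer>|<integer>86400</integer>|' /tmp/backupd-demo.plist

grep 'integer' /tmp/backupd-demo.plist
```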

## 2013-01-31 (re)moving lots of files containing a similar pattern

• I use ownCloud as a replacement for Dropbox, but I unfortunately had lots of conflict files (on client and server)

• these files contain the _conflict- pattern, so a solution is to move all of them to a backup folder:

cd /share/DriveOne/Web/owncloud/data/admin/files
find . -name '*_conflict-*' -exec mv {} /share/Backups/backups/duplicate-photos/ \;
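The sweep can be tried out on throwaway files first; a sketch with made-up names:

```shell
# Build a scratch tree with one ordinary file and one conflict file.
workdir=$(mktemp -d)
mkdir -p "$workdir/files" "$workdir/backup"
touch "$workdir/files/photo.jpg" "$workdir/files/photo_conflict-20130131.jpg"

# Move only the files matching the quoted _conflict- pattern.
find "$workdir/files" -name '*_conflict-*' -exec mv {} "$workdir/backup/" \;

ls "$workdir/backup"
```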

## 2013-01-16 WP4 report : NeuroTools support for the synthesis of random textured dynamical stimuli

In the context of BrainScaleS, we have developed a library to synthesize stimuli targeted at the characterization of motion perception. This process took the following steps:

• creation of the library using python and linked with the development of NeuroTools,
• documentation of this library along with a mathematical description, published in the Journal of Neurophysiology, while these stimuli were the basis of a paper published in Nature Neuroscience (both acknowledging BrainScaleS),
• dissemination of this tool by creating a dedicated webpage associated with the "neuralensemble" organization, which also hosts Neo and PyNN.

These steps were recently described in deliverable D4-1.1: https://brainscales.kip.uni-heidelberg.de/jss/FileStore/dI_1548/BrainScaleS_DeliverableD4-1.1.pdf (requires authentication).