Posts about sciblog (old posts, page 3)

2010-08-03 compiling OpenCV on MacOSX 10.6

2010-08-03 12:14:29

using macports

  • it works now with macports:

    sudo port install -u opencv +python26 +tbb

latest SVN

  • compiling here along with MacTeX...

  • from

    svn co
    cd opencv # the directory containing INSTALL, CMakeLists.txt etc.
    mkdir build
    cd build
    cmake -D CMAKE_OSX_ARCHITECTURES=x86_64 -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=ON -D BUILD_LATEX_DOCS=ON -D PDFLATEX_COMPILER=/usr/texbin/pdflatex -D BUILD_NEW_PYTHON_SUPPORT=ON  -D PYTHON_LIBRARY=/opt/local/lib/libpython2.6.dylib -D PYTHON_INCLUDE_DIR=/opt/local/Library/Frameworks/Python.framework/Headers ..
    make -j4
    sudo make install
  • I had to rebuild some ports

    sudo port install ilmbase
    port provides /opt/local/lib/libIlmImf.dylib
    sudo port install openexr
    sudo port install libdc1394

    and recompile

  • then could run

    cd ../samples/python/

using homebrew

  • another route is Homebrew:

    $ brew info opencv
    opencv 2.1.1-pre
    Depends on: cmake, pkg-config, libtiff, jasper, tbb
    /usr/local/Cellar/opencv/2.1.1-pre (96 files, 37M)
    The OpenCV Python module will not work until you edit your PYTHONPATH like so:
      export PYTHONPATH="/usr/local/lib/python2.6/site-packages/:$PYTHONPATH"
    To make this permanent, put it in your shell's profile (e.g. ~/.profile).
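That PYTHONPATH edit can be rehearsed in a shell before making it permanent; a minimal sketch (the site-packages path is the one `brew info opencv` prints above, so adjust it to your install):

```shell
# export the path Homebrew suggests so Python can find the OpenCV module;
# the exact directory comes from the `brew info opencv` output above
export PYTHONPATH="/usr/local/lib/python2.6/site-packages/:$PYTHONPATH"
# confirm the directory is now on PYTHONPATH
case "$PYTHONPATH" in
  */usr/local/lib/python2.6/site-packages/*) echo "PYTHONPATH OK" ;;
  *) echo "PYTHONPATH missing site-packages" ;;
esac
```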


2010-07-08 latex within moinmoin

2011-07-06 20:59:01
  • installed following

  • to match my pdflatex distribution, I changed

    # last arg must have %s in it!
    latex_args = ("--interaction=nonstopmode -output-format dvi", "%s.tex")

    in the parser: sudo open -e ~/WebSites/moin/data/plugin/parser/
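To find the right pdflatex path for your own TeX distribution (the /usr/texbin location used above is MacTeX's default symlink), a quick check:

```shell
# locate pdflatex so the parser (or the PDFLATEX_COMPILER cmake flag in
# the OpenCV post above) can point at the right binary
command -v pdflatex || echo "pdflatex not on PATH"
```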


This is a red square:


\newsavebox{\mysquare}
\savebox{\mysquare}{\textcolor{red}{\rule{1in}{1in}}}
\usebox{\mysquare}


% Math-mode symbol & verbatim
\def\W#1#2{$#1{#2}$ &\tt\string#1\string{#2\string}}
\def\X#1{$#1$ &\tt\string#1}
\def\Y#1{$\big#1$ &\tt\string#1}

% A non-floating table environment.

% All the tables are \label'ed in case this document ever gets some
% explanatory text written, however there are no \refs as yet. To save
% LaTeX-ing the file twice we go:

\X\alpha        &\X\theta       &\X o           &\X\tau         \\
\X\beta         &\X\vartheta    &\X\pi          &\X\upsilon     \\
\X\gamma        &\X\iota        &\X\varpi       &\X\phi         \\
\X\delta        &\X\kappa       &\X\rho         &\X\varphi      \\
\X\epsilon      &\X\lambda      &\X\varrho      &\X\chi         \\
\X\varepsilon   &\X\mu          &\X\sigma       &\X\psi         \\
\X\zeta         &\X\nu          &\X\varsigma    &\X\omega       \\
\X\eta          &\X\xi                                          \\
\X\Gamma        &\X\Lambda      &\X\Sigma       &\X\Psi         \\
\X\Delta        &\X\Xi          &\X\Upsilon     &\X\Omega       \\
\X\Theta        &\X\Pi          &\X\Phi
\caption{Greek Letters}\label{greek}


x^3 =\int_{0}^{\infty} f(x,y) dy
  • and also

    $$x^3 =\int_{0}^{\infty} f(x,y) dy + c$$


Because people requested an easier way to enter latex, I've added the possibility to write $ ... $ to obtain inline formulas. This is equivalent to writing \$ ... \$ and has the same single-line limitation (but everything else isn't really useful in formulas anyway). To enable this, install the inline_latex parser and add #format inline_latex to your page (alternatively, configure the default parser to be inline_latex). This parser accepts all regular wiki syntax, plus the $ ... $ syntax. Additionally, the inline_latex formatter supports $$ ... $$ style formulas (still limited to a single line though!), which put the formula into a paragraph of its own.

Note: in the Nikola blog, this is accomplished directly with ReST: writing $\lambda$ in the source renders as the inline formula λ.
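As a sketch, a Nikola ReST post can then mix the two flavours (the :math: role is standard docutils; the bare $...$ form assumes the MathJax handling mentioned above is enabled):

```rest
Inline: the decay rate :math:`\lambda` (with MathJax enabled, $\lambda$ also works).

.. math::

   x^3 = \int_{0}^{\infty} f(x,y)\, dy + c
```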


2010-06-24 installing SUMATRA

2010-06-24 13:36:57
  • notes from the CodeJamNr4
  • this was using a fresh install of ETS 6.2


  • pysvn :

    • had to uninstall stuff from MacPorts

      sudo port uninstall --follow-dependents subversion
    • get pysvn

      • make :

        cd Source
        python setup.py backport
        python setup.py configure   # creates the Makefile
        make
    • install

      sudo rsync -av pysvn /Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/
      • pysvn 1.7.1 worked for me
  • mercurial

    sudo easy_install mercurial
  • django

    sudo easy_install django django_tagging

with hg

525  svn export ../sci/dyva/Motion/particles hg_particles
526  cd hg_particles/
527  hg init
528  hg add
529  hg commit
530  hg commit -m 'test'
531  echo $USER
532  vim .hgrc
533  vim ~/.hgrc
534  hg commit -m 'my first HG commit'
535  vim ~/.hgrc
536  ipython
537  ls
538  smt init sumatraTest_hg
539  smt info
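The `vim ~/.hgrc` steps in the history above most likely set the commit username that `hg commit` requires; a minimal sketch of that file (the name and e-mail below are placeholders, not from the original session):

```shell
# append a [ui] username to ~/.hgrc so `hg commit` stops complaining
# about a missing username; placeholders, edit to taste
cat >> ~/.hgrc <<'EOF'
[ui]
username = Your Name <you@example.com>
EOF
grep 'username' ~/.hgrc && echo "hgrc OK"
```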

with svn

 501  cd sci/dyva/Motion/particles/
 502  smt init -h
 503  smt init sumatraTest
 504  smt info
511  smt configure --simulator=python
 512  smt info
 513  smtweb &
 514  ls -a
 515  rm -fr .smt
 516  smt init sumatraTest
 517  smtweb &
 518  open
 519  touch fake.param
 520  smt run
 521  smt run -s python -m fake.param
 522  smt info
 523  smt configure -h
 524  smt configure -c diff
 525  smt info
 526  smt run -s python -m fake.param
 529  smt run -s python -m fake.param
 534  rm mat/dot.npy
 535  python fake.param
 536  ls
 537  smt help configure
 538  smt configure -d ./figures/
 539  smt info
 540  smt configure -s python -m
 541  smt run fake.param
 542  rm mat/dot.npy
 543  smt run fake.param
 544  ls figures/
 545  rm figures/dot_*
 546  smt run fake.param
 547  smt info
 548  smt configure -d ./figures
 549  smt info
 550  rm figures/dot_*png
 551  smt configure -d ./figures
 552  smt run fake.param
 553  smt comment "apparently, it is worth NaN shekels."
 554  smt tag codejam
 558  rm figures/dot_*png
 559  rm mat/dot.npy
 560  smt run --reason="test effect of a bigger dot" fake.param dot_size=0.1
 561  ls
 562  ls -al .smt/
 563  less .smt/simulation_records
 564  sqlite3 .smt/simulation_records


2010-05-27 NeuroCompMarseille 2010 Workshop

Computational Neuroscience: From Representations to Behavior

Second NeuroComp Marseille Workshop

27-28 May 2010
Amphithéâtre Charve at the Saint-Charles University campus. Métro: Lines 1 and 2 (St Charles), a 5-minute walk from the railway station. Maps: Amphithéâtre Charve, University Main Entrance, etc.; Metro, Bus and Tramway; Getting to Marseille from the Airport.
Registration was free but mandatory, participation limited to 80 persons.

Computational neuroscience is now emerging as a major avenue for exploring cognitive functions. It brings together theoretical tools that elucidate the fundamental mechanisms responsible for experimentally observed behaviour in the applied neurosciences. This is the second Computational Neuroscience Workshop organized by the "NeuroComp Marseille" network.

It will focus on the latest advances in understanding how information may be represented in neural activity (1st day) and on computational models of learning, decision-making and motor control (2nd day). The workshop will bring together leading researchers in these areas of theoretical neuroscience. The meeting will consist of invited talks, with ample time to discuss and share ideas and data. All talks will be in English.

  • 27 May 2010 Neural representations for sensory information & the structure-function relation

In this talk, I will review recent work on sparse representations of natural images. I will in particular focus on both the application of these emerging models to image processing problems and their potential implications for the modeling of visual processing. Natural images exhibit a wide range of geometric regularities, such as curvilinear edges and oscillating textures. Adaptive image representations select bases from a dictionary of orthogonal or redundant frames that are parameterized by the geometry of the image. If the geometry is well estimated, the image is sparsely represented by only a few atoms in this dictionary. On an engineering level, these methods can be used to solve inverse problems such as super-resolution, and can also be used to perform texture synthesis. On a biological level, these mathematical representations share similarities with the low-level grouping processes that operate in areas V1 and V2 of the visual brain. We believe the processing and biological applications of geometrical methods work hand in hand to design and analyze new cortical imaging methods.

  • 11h00-12h00 Jean Petitot Centre d'Analyse et de Mathématique Sociales, Ecole des Hautes Etudes en Sciences Sociales - Paris «Neurogeometry of visual perception»

In relation with experimental data, we propose a geometric model of the functional architecture of the primary visual cortex (V1) explaining contour integration. The aim is to better understand the type of geometry algorithms implemented by this functional architecture. The contact structure of the 1-jet space of the curves in the plane, with its generalization to the roto-translation group, symplectifications, and sub-Riemannian geometry, are all neurophysiologically realized by long-range horizontal connections. Virtual structures, such as illusory contours of the Kanizsa type, can then be explained by this model.

  • 14h00-14h45 Peggy Seriès Institute for Adaptive and Neural Computation, Edinburgh «Bayesian Priors in Perception and Decision Making»

We'll present two recent projects:

The first project (with M. Chalk and A. R. Seitz) is an experimental investigation of the influence of expectations on the perception of simple stimuli. Using a simple task involving estimation and detection of motion random dots displays, we examined whether expectations can be developed quickly and implicitly and how they affect perception. We find that expectations lead to attractive biases such that stimuli appear as being more similar to the expected one than they really are, as well as visual hallucinations in the absence of a stimulus. We discuss our findings in terms of Bayesian Inference.

In the second project (with A. Kalra and Q. Huys), we explore the concepts of optimism and pessimism in decision making. Optimism is usually assessed using questionnaires, such as the LOT-R. Here, using a very simple behavioral task, we show that optimism can be described in terms of a prior on expected future rewards. We examine the correlation between the shape of this prior for individual subjects and their scores on questionnaires, as well as with other measures of personality traits.

  • 14h45-15h45 Heiko Neumann (in collaboration with Florian Raudies) Inst. of Neural Information Processing, Ulm University Germany «Cortical mechanisms of transparent motion perception – a neural model»

Transparent motion is perceived when multiple motions differing in direction and/or speed are presented in the same part of visual space. In perceptual experiments, the conditions have been studied under which motion transparency occurs. An upper limit on the number of perceived transparent layers has been investigated psychophysically. Attentional signals can improve the perception of a single motion amongst several motions. While criteria for the occurrence of transparent motion have been identified, only a few potential neural mechanisms have been discussed so far to explain the conditions and mechanisms for segregating multiple motions. A neurodynamical model is presented which builds upon a previously developed neural architecture emphasizing the role of feedforward cascade processing and of feedback from higher to earlier stages for selective feature enhancement and tuning. Results of computational experiments are consistent with findings from physiology and psychophysics. Finally, the model is demonstrated to cope with realistic data from computer vision benchmark databases. Work supported by the European Union (project SEARISE), BMBF, and CELEST.

  • 16h00-17h00 Rudolf Friedrich Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster «Windows to Complexity: Disentangling Trends and Fluctuations in Complex Systems»

In the present talk, we discuss how to perform an analysis of experimental data of complex systems by disentangling the effects of dynamical noise (fluctuations) and deterministic dynamics (trends). We report on results obtained for various complex systems like turbulent fields, the motion of dissipative solitons in nonequilibrium systems, traffic flows, and biological data like human tremor data and brain signals. Special emphasis is put on methods to predict the occurrence of qualitative changes in systems far from equilibrium. [1] R. Friedrich, J. Peinke, M. Reza Rahimi Tabar: Importance of Fluctuations: Complexity in the View of stochastic Processes (in: Springer Encyclopedia on Complexity and System Science, (2009))

  • 17h00-17h45 General Discussion
  • 28 May 2010 Computational models of learning and decision making
  • 9h30-10h00 Andrea Brovelli Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée - Marseille «An introduction to Motor Learning, Decision-Making and Motor Control»
  • 10h00-11h00 Emmanuel Daucé Mouvement & Perception, UMR 6152, Faculté des sciences du sport «Adapting the noise to the problem : a Policy-gradient approach of receptive fields formation»

In machine learning, kernel methods give a consistent framework for applying the perceptron algorithm to non-linear problems. In reinforcement learning, the analog of the perceptron delta-rule is the "policy-gradient" approach proposed by Williams in 1992 in the framework of stochastic neural networks. Despite its generality and straightforward applicability to continuous command problems, rather few developments of the method have been proposed since. Here we present an account of the use of a kernel transformation of the perception space for learning a motor command, in the case of eye orientation and multi-joint arm control. We show that such a transformation allows the system to learn non-linear transformations, like the log-like resolution of a foveated retina, or the transformation from a Cartesian perception space to a log-polar command, by shaping appropriate receptive fields from the perception space to the command space. We also present a method using multivariate correlated noise for learning high-DOF control problems, and propose some interpretations of the putative role of correlated noise in learning in biological systems.

  • 11h00-12h00 Máté Lengyel Computational & Biological Learning Lab, Department of Engineering, University of Cambridge «Why remember? Episodic versus semantic memories for optimal decision making»

Memories are only useful inasmuch as they allow us to act adaptively in the world. Previous studies on the use of memories for decision making have almost exclusively focussed on implicit rather than declarative memories, and even when they did address declarative memories they dealt only with semantic but not episodic memories. In fact, from a purely computational point of view, it seems wasteful to have memories that are episodic in nature: why should it be better to act on the basis of the recollection of single happenings (episodic memory), rather than the seemingly normative use of accumulated statistics from multiple events (semantic memory)? Using the framework of reinforcement learning, and Markov decision processes in particular, we analyze in depth the performance of episodic versus semantic memory-based control in a sequential decision task under risk and uncertainty in a class of simple environments. We show that episodic control should be useful in a range of cases characterized by complexity and inferential noise, and most particularly at the very early stages of learning, long before habitization (the use of implicit memories) has set in. We interpret data on the transfer of control from the hippocampus to the striatum in the light of this hypothesis.

  • 14h00-15h00 Rafal Bogacz Department of Computer Science, University of Bristol «Optimal decision making and reinforcement learning in the cortico-basal-ganglia circuit»

During this talk I will present a computational model describing the decision-making process in the cortico-basal-ganglia circuit. The model assumes that this circuit performs a statistically optimal test that maximizes the speed of decisions for any required accuracy. In the model, this circuit computes the probabilities that the considered alternatives are correct, according to Bayes' theorem. This talk will show that the equation of Bayes' theorem can be mapped onto the functional anatomy of a circuit involving the cortex, basal ganglia and thalamus. This theory provides many precise and counterintuitive experimental predictions, ranging from neurophysiology to behaviour. Some of these predictions have already been validated in existing data, and others are the subject of ongoing experiments. During the talk I will also discuss the relationships between the above model and current theories of reinforcement learning in the cortico-basal-ganglia circuit.

  • 15h30-16h30 Emmanuel Guigon Institut des Systèmes Intelligents et de Robotique, UPMC - CNRS / UMR 7222 «Optimal feedback control as a principle for adaptive control of posture and movement»
  • 16h30-17h15 General Discussion


2010-05-23 Haïm Cohen : Tu Ne Laisseras Point Pleurer

2010-05-23 11:04:50

  • Publisher's blurb (source: Amazon)
    • Where can we draw hope for a more humane world? By understanding the human dimension of our babies' cries and responding to them, again and again. Drawing on psychological and neurobiological arguments, Haïm Cohen lays out a utopia that could raise the moral conscience of our children, thereby immunizing them against extreme violence. As much a manual of humanism as a reflection on our society, this book is addressed to all parents concerned with their child's healthy psycho-affective development, but also to any reader interested in advances in neuroscience.
    • About the author: Haïm Cohen is a pediatrician in Paris.
  • underlying utopia: the importance of never leaving a baby to cry, which would otherwise lead the baby to accept deprivation and violence between individuals; it can be grounded in our evolution on the million-year scale and our ancient status as hunter-gatherers. Crying is universal, a primary, primal "phasic" language.
  • convergence of "neuroanalysis": psychoanalysis + neuroscience ...
  • toward an emergence of ethics: the individual has only personal fulfilment as a goal; the perception of altruism and the emergence of ethics arise from the interaction of these individualities.


2010-04-28 reStructuredText rst cheatsheet

2010-04-28 10:15:04

  • =====================================================
     The reStructuredText_ Cheat Sheet: Syntax Reminders
    :Info: See <> for introductory docs.
    :Author: David Goodger <>
    :Date: $Date: 2006-01-23 02:13:55 +0100 (Mon, 23 Jän 2006) $
    :Revision: $Revision: 4321 $
    :Description: This is a "docinfo block", or bibliographic field list
    Section Structure
    Section titles are underlined or overlined & underlined.
    Body Elements
    Grid table:
    +--------------------------------+-----------------------------------+
    | Paragraphs are flush-left,     | Literal block, preceded by "::":: |
    | separated by blank lines.      |                                   |
    |                                |     Indented                      |
    |     Block quotes are indented. |                                   |
    +--------------------------------+ or::                              |
    | >>> print 'Doctest block'      |                                   |
    | Doctest block                  | > Quoted                          |
    +--------------------------------+-----------------------------------+
    | | Line blocks preserve line breaks & indents. [new in 0.3.6]       |
    | |     Useful for addresses, verse, and adornment-free lists; long  |
    | |     lines can be wrapped with continuation lines.                |
    +--------------------------------------------------------------------+
    Simple tables:
    ================  ============================================================
    List Type         Examples
    ================  ============================================================
    Bullet list       * items begin with "-", "+", or "*"
    Enumerated list   1. items use any variation of "1.", "A)", and "(i)"
                      #. also auto-enumerated
    Definition list   Term is flush-left : optional classifier
                          Definition is indented, no blank line between
    Field list        :field name: field body
    Option list       -o  at least 2 spaces between option & description
    ================  ============================================================
    ================  ============================================================
    Explicit Markup   Examples (visible in the `text source <cheatsheet.txt>`_)
    ================  ============================================================
    Footnote          .. [1] Manually numbered or [#] auto-numbered
                         (even [#labelled]) or [*] auto-symbol
    Citation          .. [CIT2002] A citation.
    Hyperlink Target  .. _reStructuredText:
                      .. _indirect target: reStructuredText_
                      .. _internal target:
    Anonymous Target  __
    Directive ("::")  .. image:: images/biohazard.png
    Substitution Def  .. |substitution| replace:: like an inline directive
    Comment           .. is anything else
    Empty Comment     (".." on a line by itself, with blank lines before & after,
                      used to separate indentation contexts)
    ================  ============================================================
    Inline Markup
    *emphasis*; **strong emphasis**; `interpreted text`; `interpreted text
    with role`:emphasis:; ``inline literal text``; standalone hyperlink,; named reference, reStructuredText_;
    `anonymous reference`__; footnote reference, [1]_; citation reference,
    [CIT2002]_; |substitution|; _`inline internal target`.
    Directive Quick Reference
    See <> for full info.
    ================  ============================================================
    Directive Name    Description (Docutils version added to, in [brackets])
    ================  ============================================================
    attention         Specific admonition; also "caution", "danger",
                      "error", "hint", "important", "note", "tip", "warning"
    admonition        Generic titled admonition: ``.. admonition:: By The Way``
    image             ``.. image:: picture.png``; many options possible
    figure            Like "image", but with optional caption and legend
    topic             ``.. topic:: Title``; like a mini section
    sidebar           ``.. sidebar:: Title``; like a mini parallel document
    parsed-literal    A literal block with parsed inline markup
    rubric            ``.. rubric:: Informal Heading``
    epigraph          Block quote with class="epigraph"
    highlights        Block quote with class="highlights"
    pull-quote        Block quote with class="pull-quote"
    compound          Compound paragraphs [0.3.6]
    container         Generic block-level container element [0.3.10]
    table             Create a titled table [0.3.1]
    list-table        Create a table from a uniform two-level bullet list [0.3.8]
    csv-table         Create a table from CSV data (requires Python 2.3+) [0.3.4]
    contents          Generate a table of contents
    sectnum           Automatically number sections, subsections, etc.
    header, footer    Create document decorations [0.3.8]
    target-notes      Create an explicit footnote for each external target
    meta              HTML-specific metadata
    include           Read an external reST file as if it were inline
    raw               Non-reST data passed untouched to the Writer
    replace           Replacement text for substitution definitions
    unicode           Unicode character code conversion for substitution defs
    date              Generates today's date; for substitution defs
    class             Set a "class" attribute on the next element
    role              Create a custom interpreted text role [0.3.2]
    default-role      Set the default interpreted text role [0.3.10]
    title             Set the metadata document title [0.3.10]
    ================  ============================================================
    Interpreted Text Role Quick Reference
    See <> for full info.
    ================  ============================================================
    Role Name         Description
    ================  ============================================================
    emphasis          Equivalent to *emphasis*
    literal           Equivalent to ``literal`` but processes backslash escapes
    PEP               Reference to a numbered Python Enhancement Proposal
    RFC               Reference to a numbered Internet Request For Comments
    raw               For non-reST data; cannot be used directly (see docs) [0.3.6]
    strong            Equivalent to **strong**
    sub               Subscript
    sup               Superscript
    title             Title reference (book, etc.); standard default role
    ================  ============================================================
  • results in

The reStructuredText Cheat Sheet: Syntax Reminders

Info: See <> for introductory docs.
Author: David Goodger <>
Date: $Date: 2006-01-23 02:13:55 +0100 (Mon, 23 Jän 2006) $
Revision: $Revision: 4321 $
Description: This is a "docinfo block", or bibliographic field list

Section Structure

Section titles are underlined or overlined & underlined.

Body Elements

Grid table:

Paragraphs are flush-left, separated by blank lines.

Block quotes are indented.

Literal block, preceded by "::":



> Quoted
>>> print 'Doctest block'
Doctest block
Line blocks preserve line breaks & indents. [new in 0.3.6]
Useful for addresses, verse, and adornment-free lists; long lines can be wrapped with continuation lines.

Simple tables:

List Type Examples
Bullet list
  • items begin with "-", "+", or "*"
Enumerated list
  1. items use any variation of "1.", "A)", and "(i)"
  2. also auto-enumerated
Definition list
Term is flush-left : optional classifier
Definition is indented, no blank line between
Field list
field name: field body
Option list
-o at least 2 spaces between option & description
Explicit Markup Examples (visible in the text source)
[1] Manually numbered or [#] auto-numbered (even [#labelled]) or [*] auto-symbol
[CIT2002] A citation.
Hyperlink Target
Anonymous Target
Directive ("::") images/biohazard.png
Substitution Def
Empty Comment (".." on a line by itself, with blank lines before & after, used to separate indentation contexts)

Inline Markup

emphasis; strong emphasis; interpreted text; interpreted text with role; inline literal text; standalone hyperlink,; named reference, reStructuredText; anonymous reference; footnote reference, [1]; citation reference, [CIT2002]; like an inline directive; inline internal target.

Directive Quick Reference

See <> for full info.

Directive Name Description (Docutils version added to, in [brackets])
attention Specific admonition; also "caution", "danger", "error", "hint", "important", "note", "tip", "warning"
admonition Generic titled admonition: .. admonition:: By The Way
image .. image:: picture.png; many options possible
figure Like "image", but with optional caption and legend
topic .. topic:: Title; like a mini section
sidebar .. sidebar:: Title; like a mini parallel document
parsed-literal A literal block with parsed inline markup
rubric .. rubric:: Informal Heading
epigraph Block quote with class="epigraph"
highlights Block quote with class="highlights"
pull-quote Block quote with class="pull-quote"
compound Compound paragraphs [0.3.6]
container Generic block-level container element [0.3.10]
table Create a titled table [0.3.1]
list-table Create a table from a uniform two-level bullet list [0.3.8]
csv-table Create a table from CSV data (requires Python 2.3+) [0.3.4]
contents Generate a table of contents
sectnum Automatically number sections, subsections, etc.
header, footer Create document decorations [0.3.8]
target-notes Create an explicit footnote for each external target
meta HTML-specific metadata
include Read an external reST file as if it were inline
raw Non-reST data passed untouched to the Writer
replace Replacement text for substitution definitions
unicode Unicode character code conversion for substitution defs
date Generates today's date; for substitution defs
class Set a "class" attribute on the next element
role Create a custom interpreted text role [0.3.2]
default-role Set the default interpreted text role [0.3.10]
title Set the metadata document title [0.3.10]

Interpreted Text Role Quick Reference

See <> for full info.

Role Name Description
emphasis Equivalent to emphasis
literal Equivalent to literal but processes backslash escapes
PEP Reference to a numbered Python Enhancement Proposal
RFC Reference to a numbered Internet Request For Comments
raw For non-reST data; cannot be used directly (see docs) [0.3.6]
strong Equivalent to strong
sub Subscript
sup Superscript
title Title reference (book, etc.); standard default role


2010-04-26 replacing text in files

2010-04-26 12:17:16

using sed

  • The UNIX command sed is useful to find and replace text in single or multiple files. This page lists some common commands in using sed to improve editing code.

  • To replace foo with foo_bar in a single file:

    sed -i 's/foo/foo_bar/g' file.py
    • -i = edit the file "in-place": sed will directly modify the file if it finds anything to replace (the BSD sed shipped with Mac OS X needs -i '' instead)
    • s = substitute the following text
    • foo = the text string to be substituted
    • foo_bar = the replacement string
    • g = global, match all occurrences in the line
  • To replace foo with foo_bar in multiple files:

    sed -i 's/foo/foo_bar/g'  *.py
  • Consult the manual pages of the operating system that you use: man sed

  • in the particular case of changing a scaling parameter in a set of experiment files:

    sed -i 's/size = 6/size = 7/g'  experiment*.py
    sed -i 's/size = 7/size = 6/g'  experiment*.py
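A self-contained rehearsal of the swap above, using a throwaway file instead of the real experiment files (GNU sed syntax; on Mac OS X's BSD sed, write -i '' instead of -i):

```shell
# create a throwaway file standing in for experiment*.py
printf 'size = 6\n' > experiment_demo.py
# preview the substitution without -i (prints the result, leaves the file alone)
sed 's/size = 6/size = 7/g' experiment_demo.py
# now apply it in place
sed -i 's/size = 6/size = 7/g' experiment_demo.py
cat experiment_demo.py    # -> size = 7
```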

using vim

  • on the current buffer, with confirmation

    :%s/foo/foo_bar/gc
  • on the current buffer

    :%s/foo/foo_bar/g

  • to get help

    :help substitute
  • one can pass the required files to 'args' and apply a command to all of them using 'argdo': first the substitute command 's', then 'update', which only saves the buffers that were modified.

    :args *.py
    :argdo :%s/old_text/new_text/g | update


2010-04-25 bibdesk + citeulike

2010-04-25 10:31:51
  • as described in
  • "Add external file group"
  • enter the URL (change the name accordingly)
  • no back-sync from BibDesk to CiteULike except via a manual export / import workflow
  • to focus on one tag, use something like


2010-04-24 Richard Dawkins on our "queer" universe

2010-04-25 12:13:24

My title: "Queerer than we can suppose: The strangeness of science." "Queerer than we can suppose" comes from J.B.S. Haldane, the famous biologist, who said, "Now, my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose. I suspect that there are more things in heaven and earth than are dreamed of, or can be dreamed of, in any philosophy." Richard Feynman compared the accuracy of quantum theories -- experimental predictions -- to specifying the width of North America to within one hair's breadth of accuracy. This means that quantum theory has got to be in some sense true. Yet the assumptions that quantum theory needs to make in order to deliver those predictions are so mysterious that even Feynman himself was moved to remark, "If you think you understand quantum theory, you don't understand quantum theory."

It's so queer that physicists resort to one or another paradoxical interpretation of it. David Deutsch, who's talking here, in The Fabric of Reality, embraces the "many worlds" interpretation of quantum theory, because the worst that you can say about it is that it's preposterously wasteful. It postulates a vast and rapidly growing number of universes existing in parallel -- mutually undetectable except through the narrow porthole of quantum mechanical experiments. And that's Richard Feynman.

The biologist Lewis Wolpert believes that the queerness of modern physics is just an extreme example. Science, as opposed to technology, does violence to common sense. Every time you drink a glass of water, he points out, the odds are that you will imbibe at least one molecule that passed through the bladder of Oliver Cromwell. (Laughter) It's just elementary probability theory. The number of molecules per glassful is hugely greater than the number of glassfuls, or bladdersful, in the world -- and, of course, there's nothing special about Cromwell or bladders. You have just breathed in a nitrogen atom that passed through the right lung of the third iguanodon to the left of the tall cycad tree.

"Queerer than we can suppose." What is it that makes us capable of supposing anything, and does this tell us anything about what we can suppose? Are there things about the universe that will be forever beyond our grasp, but not beyond the grasp of some superior intelligence? Are there things about the universe that are, in principle, ungraspable by any mind, however superior? The history of science has been one long series of violent brainstorms, as successive generations have come to terms with increasing levels of queerness in the universe. We're now so used to the idea that the Earth spins -- rather than the Sun moves across the sky -- it's hard for us to realize what a shattering mental revolution that must have been. After all, it seems obvious that the Earth is large and motionless, the Sun small and mobile. But it's worth recalling Wittgenstein's remark on the subject. "Tell me," he asked a friend, "why do people always say, it was natural for man to assume that the sun went round the earth rather than that the earth was rotating?" His friend replied, "Well, obviously because it just looks as though the Sun is going round the Earth." Wittgenstein replied, "Well, what would it have looked like if it had looked as though the Earth was rotating?" (Laughter)

Science has taught us, against all intuition, that apparently solid things, like crystals and rocks, are really almost entirely composed of empty space. And the familiar illustration is the nucleus of an atom is a fly in the middle of a sports stadium and the next atom is in the next sports stadium. So it would seem the hardest, solidest, densest rock is really almost entirely empty space, broken only by tiny particles so widely spaced they shouldn't count. Why, then, do rocks look and feel solid and hard and impenetrable? As an evolutionary biologist I'd say this: our brains have evolved to help us survive within the orders of magnitude of size and speed which our bodies operate at. We never evolved to navigate in the world of atoms. If we had, our brains probably would perceive rocks as full of empty space. Rocks feel hard and impenetrable to our hands precisely because objects like rocks and hands cannot penetrate each other. It's therefore useful for our brains to construct notions like "solidity" and "impenetrability," because such notions help us to navigate our bodies through the middle-sized world in which we have to navigate.

Moving to the other end of the scale, our ancestors never had to navigate through the cosmos at speeds close to the speed of light. If they had, our brains would be much better at understanding Einstein. I want to give the name "Middle World" to the medium-scaled environment in which we've evolved the ability to take action -- nothing to do with Middle Earth. Middle World. (Laughter) We are evolved denizens of Middle World, and that limits what we are capable of imagining. You find it intuitively easy to grasp ideas like, when a rabbit moves at the -- sort of medium velocity at which rabbits and other Middle World objects move, and hits another Middle World object, like a rock, it knocks itself out.

May I introduce Major General Albert Stubblebine III, commander of military intelligence in 1983. He stared at his wall in Arlington, Virginia, and decided to do it. As frightening as the prospect was, he was going into the next office. He stood up, and moved out from behind his desk. What is the atom mostly made of? he thought. Space. He started walking. What am I mostly made of? Atoms. He quickened his pace, almost to a jog now. What is the wall mostly made of? Atoms. All I have to do is merge the spaces. Then, General Stubblebine banged his nose hard on the wall of his office. Stubblebine, who commanded 16,000 soldiers, was confounded by his continual failure to walk through the wall. He has no doubt that this ability will, one day, be a common tool in the military arsenal. Who would screw around with an army that could do that? That's from an article in Playboy, which I was reading the other day. (Laughter)

I have every reason to think it's true; I was reading Playboy because I, myself, had an article in it. (Laughter) Unaided human intuition schooled in Middle World finds it hard to believe Galileo when he tells us a heavy object and a light object, air friction aside, would hit the ground at the same instant. And that's because in Middle World, air friction is always there. If we'd evolved in a vacuum we would expect them to hit the ground simultaneously. If we were bacteria, constantly buffeted by thermal movements of molecules, it would be different, but we Middle Worlders are too big to notice Brownian motion. In the same way, our lives are dominated by gravity but are almost oblivious to the force of surface tension. A small insect would reverse these priorities.

Steve Grand -- he's the one on the left, Douglas Adams is on the right -- Steve Grand, in his book, Creation: Life and How to Make It, is positively scathing about our preoccupation with matter itself. We have this tendency to think that only solid, material things are really things at all. Waves of electromagnetic fluctuation in a vacuum seem unreal. Victorians thought the waves had to be waves in some material medium -- the ether. But we find real matter comforting only because we've evolved to survive in Middle World, where matter is a useful fiction. A whirlpool, for Steve Grand, is a thing with just as much reality as a rock.

In a desert plain in Tanzania, in the shadow of the volcano Ol Donyo Lengai, there's a dune made of volcanic ash. The beautiful thing is that it moves bodily. It's what's technically known as a barchan, and the entire dune walks across the desert in a westerly direction at a speed of about 17 meters per year. It retains its crescent shape and moves in the direction of the horns. What happens is that the wind blows the sand up the shallow slope on the other side, and then, as each sand grain hits the top of the ridge, it cascades down on the inside of the crescent, and so the whole horn-shaped dune moves. Steve Grand points out that you and I are, ourselves, more like a wave than a permanent thing. He invites us, the reader, to "think of an experience from your childhood -- something you remember clearly, something you can see, feel, maybe even smell, as if you were really there. After all, you really were there at the time, weren't you? How else would you remember it? But here is the bombshell: You weren't there. Not a single atom that is in your body today was there when that event took place. Matter flows from place to place and momentarily comes together to be you. Whatever you are, therefore, you are not the stuff of which you are made. If that doesn't make the hair stand up on the back of your neck, read it again until it does, because it is important."

So "really" isn't a word that we should use with simple confidence. If a neutrino had a brain, which it evolved in neutrino-sized ancestors, it would say that rocks really do consist of empty space. We have brains that evolved in medium-sized ancestors which couldn't walk through rocks. "Really," for an animal, is whatever its brain needs it to be in order to assist its survival, and because different species live in different worlds, there will be a discomforting variety of reallys. What we see of the real world is not the unvarnished world but a model of the world, regulated and adjusted by sense data, but constructed so it's useful for dealing with the real world.

The nature of the model depends on the kind of animal we are. A flying animal needs a different kind of model from a walking, climbing or swimming animal. A monkey's brain must have software capable of simulating a three-dimensional world of branches and trunks. A mole's software for constructing models of its world will be customized for underground use. A water strider's brain doesn't need 3D software at all, since it lives on the surface of the pond in an Edwin Abbott flatland.

I've speculated that bats may see color with their ears. The world model that a bat needs in order to navigate through three dimensions catching insects must be pretty similar to the world model that any flying bird, a day-flying bird like a swallow, needs to perform the same kind of tasks. The fact that the bat uses echoes in pitch darkness to input the current variables to its model, while the swallow uses light, is incidental. Bats, I even suggested, use perceived hues, such as red and blue, as labels, internal labels, for some useful aspect of echoes -- perhaps the acoustic texture of surfaces, furry or smooth and so on, in the same way as swallows or, indeed, we, use those perceived hues -- redness and blueness etcetera -- to label long and short wavelengths of light. There's nothing inherent about red that makes it long wavelength.

And the point is that the nature of the model is governed by how it is to be used, rather than by the sensory modality involved. J.B.S. Haldane himself had something to say about animals whose world is dominated by smell. Dogs can distinguish two very similar fatty acids, extremely diluted: caprylic acid and caproic acid. The only difference, you see, is that one has an extra pair of carbon atoms in the chain. Haldane guesses that a dog would probably be able to place the acids in the order of their molecular weights by their smells, just as a man could place a number of piano wires in the order of their lengths by means of their notes. Now, there's another fatty acid, capric acid, which is just like the other two, except that it has two more carbon atoms. A dog that had never met capric acid would, perhaps, have no more trouble imagining its smell than we would have trouble imagining a trumpet, say, playing one note higher than we've heard a trumpet play before. Perhaps dogs and rhinos and other smell-oriented animals smell in color. And the argument would be exactly the same as for the bats.

Middle World -- the range of sizes and speeds which we have evolved to feel intuitively comfortable with -- is a bit like the narrow range of the electromagnetic spectrum that we see as light of various colors. We're blind to all frequencies outside that, unless we use instruments to help us. Middle World is the narrow range of reality which we judge to be normal, as opposed to the queerness of the very small, the very large and the very fast. We could make a similar scale of improbabilities; nothing is totally impossible. Miracles are just events that are extremely improbable. A marble statue could wave its hand at us; the atoms that make up its crystalline structure are all vibrating back and forth anyway. Because there are so many of them, and because there's no agreement among them in their preferred direction of movement, the marble, as we see it in Middle World, stays rock steady. But the atoms in the hand could all just happen to move the same way at the same time, and again and again. In this case, the hand would move and we'd see it waving at us in Middle World. The odds against it, of course, are so great that if you set out writing zeros at the time of the origin of the universe, you still would not have written enough zeros to this day.

Evolution in Middle World has not equipped us to handle very improbable events; we don't live long enough. In the vastness of astronomical space and geological time, that which seems impossible in Middle World might turn out to be inevitable. One way to think about that is by counting planets. We don't know how many planets there are in the universe, but a good estimate is about ten to the 20, or 100 billion billion. And that gives us a nice way to express our estimate of life's improbability. We could make some sort of landmark points along a spectrum of improbability, which might look like the electromagnetic spectrum we just looked at.

If life has arisen only once on any -- if -- if life could -- I mean, life could originate once per planet, could be extremely common, or it could originate once per star, or once per galaxy or maybe only once in the entire universe, in which case it would have to be here. And somewhere up there would be the chance that a frog would turn into a prince and similar magical things like that. If life has arisen on only one planet in the entire universe, that planet has to be our planet, because here we are talking about it. And that means that if we want to avail ourselves of it, we're allowed to postulate chemical events in the origin of life which have a probability as low as one in 100 billion billion. I don't think we shall have to avail ourselves of that, because I suspect that life is quite common in the universe. And when I say quite common, it could still be so rare that no one island of life ever encounters another, which is a sad thought.

How shall we interpret "queerer than we can suppose?" Queerer than in principle can be supposed, or just queerer than we can suppose, given the limitations of our brain's evolutionary apprenticeship in Middle World? Could we, by training and practice, emancipate ourselves from Middle World and achieve some sort of intuitive, as well as mathematical, understanding of the very small and the very large? I genuinely don't know the answer. I wonder whether we might help ourselves to understand, say, quantum theory, if we brought up children to play computer games, beginning in early childhood, which had a sort of make believe world of balls going through two slits on a screen, a world in which the strange goings on of quantum mechanics were enlarged by the computer's make believe, so that they became familiar on the Middle-World scale of the stream. And, similarly, a relativistic computer game in which objects on the screen manifest the Lorentz contraction, and so on, to try to get ourselves into the way of thinking -- get children into the way of thinking about it.

I want to end by applying the idea of Middle World to our perceptions of each other. Most scientists today subscribe to a mechanistic view of the mind: we're the way we are because our brains are wired up as they are; our hormones are the way they are. We'd be different, our characters would be different, if our neuro-anatomy and our physiological chemistry were different. But we scientists are inconsistent. If we were consistent, our response to a misbehaving person, like a child murderer, should be something like, this unit has a faulty component; it needs repairing. That's not what we say. What we say -- and I include the most austerely mechanistic among us, which is probably me -- what we say is, "Vile monster, prison is too good for you." Or worse, we seek revenge, in all probability thereby triggering the next phase in an escalating cycle of counter-revenge, which we see, of course, all over the world today. In short, when we're thinking like academics, we regard people as elaborate and complicated machines, like computers or cars, but when we revert to being human we behave more like Basil Fawlty, who, we remember, thrashed his car to teach it a lesson when it wouldn't start on gourmet night. (Laughter)

The reason we personify things like cars and computers is that just as monkeys live in an arboreal world and moles live in an underground world and water striders live in a surface tension-dominated flatland, we live in a social world. We swim through a sea of people -- a social version of Middle World. We are evolved to second-guess the behavior of others by becoming brilliant, intuitive psychologists. Treating people as machines may be scientifically and philosophically accurate, but it's a cumbersome waste of time if you want to guess what this person is going to do next. The economically useful way to model a person is to treat him as a purposeful, goal-seeking agent with pleasures and pains, desires and intentions, guilt, blame-worthiness. Personification and the imputing of intentional purpose is such a brilliantly successful way to model humans, it's hardly surprising the same modeling software often seizes control when we're trying to think about entities for which it's not appropriate, like Basil Fawlty with his car or like millions of deluded people with the universe as a whole. (Laughter)

If the universe is queerer than we can suppose, is it just because we've been naturally selected to suppose only what we needed to suppose in order to survive in the Pleistocene of Africa? Or are our brains so versatile and expandable that we can train ourselves to break out of the box of our evolution? Or, finally, are there some things in the universe so queer that no philosophy of beings, however godlike, could dream them? Thank you very much.


2010-04-08 How to create and manipulate scientific data: an introduction to Numpy

2010-04-08 08:17:13

The array: the basic tool of scientific computing


We frequently manipulate discrete ordered sets:

  • the discretized time of an experiment/simulation
  • the signal recorded by a measuring device
  • the pixels of an image, ...

The Numpy module makes it possible to

  • create these data sets in one go
  • perform "batch" operations on data arrays (no loop over the elements).

Data array := numpy.ndarray
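
These "batch" operations are just ordinary arithmetic applied to whole arrays at once; a minimal sketch (standard Numpy only):

```python
import numpy as np

a = np.arange(5)   # array([0, 1, 2, 3, 4])
b = a + 10         # adds 10 to every element -- no explicit loop
c = a * a          # element-wise product

print(b)           # [10 11 12 13 14]
print(c)           # [ 0  1  4  9 16]
```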

Creating Numpy data arrays

A small example to start with:

>>> import numpy as np
>>> a = np.array([0, 1, 2])
>>> a
array([0, 1, 2])
>>> print a
[0 1 2]
>>> b = np.array([[0., 1.], [2., 3.]])
>>> b
array([[ 0.,  1.],
       [ 2.,  3.]])

In practice, we rarely enter the elements one by one...

  • Evenly spaced values:

    >>> import numpy as np
    >>> a = np.arange(10) # from 0 to n-1
    >>> a
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    >>> b = np.arange(1., 9., 2) # syntax: start, end, step
    >>> b
    array([ 1.,  3.,  5.,  7.])

    or, specifying the number of points instead:

    >>> c = np.linspace(0, 1, 6)
    >>> c
    array([ 0. ,  0.2,  0.4,  0.6,  0.8,  1. ])
    >>> d = np.linspace(0, 1, 5, endpoint=False)
    >>> d
    array([ 0. ,  0.2,  0.4,  0.6,  0.8])
  • Constructors for common arrays:

    >>> a = np.ones((3,3))
    >>> a
    array([[ 1.,  1.,  1.],
           [ 1.,  1.,  1.],
           [ 1.,  1.,  1.]])
    >>> a.dtype
    dtype('float64')
    >>> b = np.ones(5, dtype=int)
    >>> b
    array([1, 1, 1, 1, 1])
    >>> c = np.zeros((2,2))
    >>> c
    array([[ 0.,  0.],
           [ 0.,  0.]])
    >>> d = np.eye(3)
    >>> d
    array([[ 1.,  0.,  0.],
           [ 0.,  1.,  0.],
           [ 0.,  0.,  1.]])

Graphical representation of data: matplotlib and mayavi

Now that we have our first data arrays, we are going to visualize them. Matplotlib is a 2-D plotting package; its functions are imported as follows:

>>> import pylab
>>> # or
>>> from pylab import * # to import everything into the namespace

If you launched Ipython via python(x,y), or with the option ipython -pylab (under Linux), all of pylab's functions/objects have already been imported, as if from pylab import * had been run. In what follows we assume that from pylab import * was executed or that ipython -pylab was launched: we will therefore write function() directly rather than pylab.function().

Plotting 1-D curves

In [6]: a = np.arange(20)
In [7]: plot(a, a**2) # line plot
Out[7]: [<matplotlib.lines.Line2D object at 0x95abd0c>]
In [8]: plot(a, a**2, 'o') # round markers
Out[8]: [<matplotlib.lines.Line2D object at 0x95b1c8c>]
In [9]: clf() # clear figure
In [10]: loglog(a, a**2)
Out[10]: [<matplotlib.lines.Line2D object at 0x95abf6c>]
In [11]: xlabel('x') # a bit small
Out[11]: <matplotlib.text.Text object at 0x98923ec>
In [12]: xlabel('x', fontsize=26) # bigger
Out[12]: <matplotlib.text.Text object at 0x98923ec>
In [13]: ylabel('y')
Out[13]: <matplotlib.text.Text object at 0x9892b8c>
In [14]: grid()
In [15]: axvline(2)
Out[15]: <matplotlib.lines.Line2D object at 0x9b633cc>

2-D arrays (images, for example)

In [48]: # 30x30 array of random numbers between 0 and 1
In [49]: image = np.random.rand(30,30)
In [50]: imshow(image)
Out[50]: <matplotlib.image.AxesImage object at 0x9e954ac>
In [51]: gray()
In [52]: hot()
In [53]: imshow(image, cmap=cm.gray)
Out[53]: <matplotlib.image.AxesImage object at 0xa23972c>
In [54]: axis('off') # remove the ticks and labels

Matplotlib has many other features: choice of colors or marker sizes, latex fonts, insets within a figure, histograms, etc.

To go further:

3-D representation

For 3-D visualization we use another package: Mayavi. A quick example: start by relaunching ipython with the options ipython -pylab -wthread

In [59]: from enthought.mayavi import mlab
In [60]: mlab.figure()
get fences failed: -1
param: 6, val: 0
Out[60]: <enthought.mayavi.core.scene.Scene object at 0xcb2677c>
In [61]:
Out[61]: <enthought.mayavi.modules.surface.Surface object at 0xd0862fc>
In [62]: mlab.axes()
Out[62]: <enthought.mayavi.modules.axes.Axes object at 0xd07892c>

The mayavi/mlab window that opens is interactive: by clicking the left mouse button you can rotate the image, you can zoom with the mouse wheel, etc.


For more information on Mayavi:


Indexing

Numpy array elements can be accessed (indexed) in a way similar to other Python sequences (list, tuple):

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[0], a[2], a[-1]
(0, 2, 9)

Warning! Indexing starts at 0, as for other Python sequences (and as in C/C++). In Fortran or Matlab, indexing starts at 1.

For multidimensional arrays, the index of an element is given by a tuple of integers:

>>> a = np.diag(np.arange(5))
>>> a
array([[0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 2, 0, 0],
       [0, 0, 0, 3, 0],
       [0, 0, 0, 0, 4]])
>>> a[1,1]
1
>>> a[2,1] = 10 # third row, second column (counting from 0)
>>> a
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  0,  0,  0],
       [ 0, 10,  2,  0,  0],
       [ 0,  0,  0,  3,  0],
       [ 0,  0,  0,  0,  4]])
>>> a[1]
array([0, 1, 0, 0, 0])

To remember:

  • In 2-D, the first dimension corresponds to rows, the second to columns.
  • For an array a with more than one dimension, a[0] is interpreted by taking all the elements in the unspecified dimensions.

Slicing (regular traversal of the elements)

Like indexing, this is similar to the slicing of other Python sequences:

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[2:9:3] # [start:end:step]
array([2, 5, 8])

Beware: the last index is not included

>>> a[:4]
array([0, 1, 2, 3])

start:end:step is a slice object, which represents the set of indices range(start, end, step). A slice can be created explicitly:

>>> sl = slice(1, 9, 2)
>>> a = np.arange(10)
>>> b = 2*a + 1
>>> a, b
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([ 1,  3,  5,  7,  9, 11, 13, 15, 17, 19]))
>>> a[sl], b[sl]
(array([1, 3, 5, 7]), array([ 3,  7, 11, 15]))

You do not have to specify all of the start (index 0 by default), the end (last index by default) and the step (1 by default):

>>> a[1:3]
array([1, 2])
>>> a[::2]
array([0, 2, 4, 6, 8])
>>> a[3:]
array([3, 4, 5, 6, 7, 8, 9])

And of course, this works for arrays with several dimensions:

>>> a = np.eye(5)
>>> a
array([[ 1.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  1.]])
>>> a[2:4,:3] # rows 2 and 3, first three columns
array([[ 0.,  0.,  1.],
       [ 0.,  0.,  0.]])

The value of all elements indexed by a slice can be changed very simply:

>>> a[:3,:3] = 4
>>> a
array([[ 4.,  4.,  4.,  0.,  0.],
       [ 4.,  4.,  4.,  0.,  0.],
       [ 4.,  4.,  4.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  1.]])

A small illustration summarizing indexing and slicing with Numpy...


A slicing operation creates a view of the original array, that is, a way of reading its memory. The original array is therefore not copied. When the view is modified, the original array is modified as well:

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b = a[::2]; b
array([0, 2, 4, 6, 8])
>>> b[0] = 12
>>> b
array([12,  2,  4,  6,  8])
>>> a # a was modified too!
array([12,  1,  2,  3,  4,  5,  6,  7,  8,  9])

This behavior can be surprising at first... but it is very handy for managing memory economically.

If you want a separate copy of the original array:

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b = np.copy(a[::2]); b
array([0, 2, 4, 6, 8])
>>> b[0] = 12
>>> b
array([12,  2,  4,  6,  8])
>>> a # a was not modified
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
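
Whether two arrays share memory can be checked with np.may_share_memory, a standard Numpy function; a quick sketch:

```python
import numpy as np

a = np.arange(10)
b = a[::2]         # a slice is a view on a
c = a[::2].copy()  # an explicit copy is independent

print(np.may_share_memory(a, b))  # True: b reads a's memory
print(np.may_share_memory(a, c))  # False: c owns its own data
```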

Manipulating the shape of arrays

The shape of an array is given by the ndarray.shape attribute, which returns a tuple of the array's dimensions:

>>> a = np.arange(10)
>>> a.shape
(10,)
>>> b = np.ones((3,4))
>>> b.shape
(3, 4)
>>> b.shape[0] # the elements of the tuple b.shape can be accessed
3
>>> # and one can also use
>>> np.shape(b)
(3, 4)

The length of the first dimension is obtained with np.alen (by analogy with len for a list) and the total number of elements with ndarray.size:

>>> np.alen(b)
3
>>> b.size
12

Several Numpy functions allow creating an array with a different shape from a starting array:

>>> a = np.arange(36)
>>> b = a.reshape((6, 6))
>>> b
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17],
       [18, 19, 20, 21, 22, 23],
       [24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35]])

ndarray.reshape returns a view, not a copy

>>> b[0,0] = 10
>>> a
array([10,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
       34, 35])

An array with a different number of elements can also be created with ndarray.resize:

>>> a = np.arange(36)
>>> a.resize((4,2))
>>> a
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7]])
>>> b = np.arange(4)
>>> b.resize(3, 2)
>>> b
array([[0, 1],
       [2, 3],
       [0, 0]])

Or a large array can be tiled from a smaller one:

>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.tile(a, (2,3))
array([[0, 1, 0, 1, 0, 1],
       [2, 3, 2, 3, 2, 3],
       [0, 1, 0, 1, 0, 1],
       [2, 3, 2, 3, 2, 3]])

Exercises: practice with numpy arrays

Thanks to the various constructors, to indexing and slicing, and to simple operations on arrays (+/-/*//), large arrays corresponding to varied patterns can easily be created.

Example: how to create the array:

[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13  0]
 [15 16 17 18 19]
 [20 21 22 23 24]]


>>> a = np.arange(25).reshape((5,5))
>>> a[2, 4] = 0

Exercises: create the following arrays in the simplest possible way (not element by element):

[[ 1.  1.  1.  1.]
 [ 1.  1.  1.  1.]
 [ 1.  1.  1.  2.]
 [ 1.  6.  1.  1.]]

[[0 0 0 0 0]
 [2 0 0 0 0]
 [0 3 0 0 0]
 [0 0 4 0 0]
 [0 0 0 5 0]
 [0 0 0 0 6]]
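
For reference, here is one possible construction for each of these two arrays (a sketch only; many other solutions exist):

```python
import numpy as np

# First array: start from ones and overwrite two elements
a = np.ones((4, 4))
a[2, 3] = 2
a[3, 1] = 6

# Second array: put 2..6 on the sub-diagonal of a 6x6 matrix,
# then keep the first five columns to get the 6x5 shape shown
b = np.diag(np.arange(2, 7), -1)[:, :5]

print(a)
print(b)
```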

"Real" data: reading and writing arrays in files

Quite often, our experiments or simulations write their results to files, which then have to be loaded into Python as Numpy arrays for manipulation. Conversely, we may want to save the arrays we obtain to files.

Going to the right directory

To move around a file tree:

  • use Ipython's features: cd, pwd, tab-completion.

  • the os module (system routines) and os.path (path handling)

    >>> import os, os.path
    >>> current_dir = os.getcwd()
    >>> current_dir
    >>> data_dir = os.path.join(current_dir, 'data')
    >>> data_dir
    >>> if not(os.path.exists(data_dir)):
    ...     os.mkdir('data')
    ...     print "creating directory 'data'"
    >>> os.chdir(data_dir) # or in Ipython: cd data

In fact, Ipython can be used as a full shell, thanks to its built-in features and the os module.

Writing a data array to a file

>>> a = np.arange(100)
>>> a = a.reshape((10, 10))
  • Writing to a text file (ascii)

    >>> np.savetxt('data_a.txt', a)
  • Writing to a binary file (.npy extension)

    >>>'data_a.npy', a)

Loading a data array from a file

  • Reading from a text file

    >>> b = np.loadtxt('data_a.txt')
  • Reading from a binary file

    >>> c = np.load('data_a.npy')

To read Matlab data files: the Matlab structure of a .mat file is stored in a dictionary.
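
A minimal sketch of the .mat round trip with scipy's io module (assuming scipy is installed; savemat and loadmat are the relevant functions):

```python
import numpy as np
from scipy import io

a = np.arange(6).reshape((2, 3))
io.savemat('data_a.mat', {'a': a})   # write a Matlab-readable .mat file
contents = io.loadmat('data_a.mat')  # returns a dict: variable name -> array
print(contents['a'])                 # the array comes back under its key
```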

Selecting a file from a list

We will save each row of a in a different file:

>>> for i, l in enumerate(a):
...     print i, l
...     np.savetxt('ligne_'+str(i), l)
0 [0 1 2 3 4 5 6 7 8 9]
1 [10 11 12 13 14 15 16 17 18 19]
2 [20 21 22 23 24 25 26 27 28 29]
3 [30 31 32 33 34 35 36 37 38 39]
4 [40 41 42 43 44 45 46 47 48 49]
5 [50 51 52 53 54 55 56 57 58 59]
6 [60 61 62 63 64 65 66 67 68 69]
7 [70 71 72 73 74 75 76 77 78 79]
8 [80 81 82 83 84 85 86 87 88 89]
9 [90 91 92 93 94 95 96 97 98 99]

To obtain the list of all the files beginning with ligne, we call on the glob module, which "gobbles up" all the paths matching a pattern. Example:

>>> import glob
>>> filelist = glob.glob('ligne*')
>>> filelist
['ligne_0', 'ligne_1', 'ligne_2', 'ligne_3', 'ligne_4', 'ligne_5', 'ligne_6', 'ligne_7', 'ligne_8', 'ligne_9']
>>> # beware, the list is not always sorted
>>> filelist.sort()
>>> l2 = np.loadtxt(filelist[2])

Note: it is also possible to create arrays from Excel/Calc files, hdf5 files, etc. (with the help of additional modules not described here: xlrd, pytables, etc.).

Simple mathematical and statistical operations on arrays

A number of operations on arrays are implemented directly in numpy (and are therefore generally very efficient):

>>> a = np.arange(10)
>>> a.min() # or np.min(a)
0
>>> a.max() # or np.max(a)
9
>>> a.sum() # or np.sum(a)
45

The operation can also be performed along a single axis, rather than over all the elements:

>>> a = np.array([[1, 3], [9, 6]])
>>> a
array([[1, 3],
       [9, 6]])
>>> a.mean(axis=0) # array of the mean of each column
array([ 5. ,  4.5])
>>> a.mean(axis=1) # array of the mean of each row
array([ 2. ,  7.5])

Many more operations are available: we will discover some of them as this course goes along.


Arithmetic operations on arrays are element-wise operations. In particular, multiplication is not a matrix product (unlike in Matlab)! The matrix product is provided by

>>> a = np.ones((2,2))
>>> a*a
array([[ 1.,  1.],
       [ 1.,  1.]])
>>>, a)
array([[ 2.,  2.],
       [ 2.,  2.]])

Example: simulating diffusion with a random walker


What is the typical distance from the origin of a random walker after t jumps to the right or to the left?

>>> nreal = 1000 # number of realizations of the walk
>>> tmax = 200 # time over which we follow the walker
>>> # We draw at random all the steps, 1 or -1, of the walk
>>> walk = 2 * ( np.random.random_integers(0, 1, (nreal,tmax)) - 0.5 )
>>> np.unique(walk) # Check: all the steps are indeed 1 or -1
array([-1.,  1.])
>>> # We build the walks by summing the steps over time
>>> cumwalk = np.cumsum(walk, axis=1) # axis = 1: the time dimension
>>> sq_distance = cumwalk**2
>>> # We average over the realizations
>>> mean_sq_distance = np.mean(sq_distance, axis=0)
In [39]: figure()
In [40]: plot(mean_sq_distance)
In [41]: figure()
In [42]: plot(np.sqrt(mean_sq_distance))

We indeed find that the distance grows as the square root of time!
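The square-root law can also be checked numerically without plotting. A sketch, using np.random.randint (values 0 or 1) instead of the random_integers call used above:

```python
import numpy as np

# Numerical check that <x(t)^2> ~ t for the random walk.
np.random.seed(0)
nreal, tmax = 2000, 200
walk = 2 * np.random.randint(0, 2, (nreal, tmax)) - 1  # steps of +1 or -1
mean_sq_distance = np.mean(np.cumsum(walk, axis=1) ** 2, axis=0)
print(mean_sq_distance[-1] / tmax)  # close to 1: <x(tmax)^2> is about tmax
```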

Exercise: statistics of women in research (INSEE data)

  1. Retrieve the files organismes.txt and pourcentage_femmes.txt (course USB key or
  2. Create a data array by opening the file pourcentage_femmes.txt with np.loadtxt. What is the shape of this array?
  3. The columns correspond to the years 2006 down to 2001. Create an annees array (no accent in the variable name!) containing the integers corresponding to these years.
  4. The rows correspond to different research organizations, whose names are stored in the file organismes.txt. Create an organisms array by opening this file. Beware: since np.loadtxt creates float arrays by default, you must tell it that you want an array of strings: organisms = np.loadtxt('organismes.txt', dtype=str)
  5. Check that the number of rows of data equals the number of organizations.
  6. What is the maximal percentage of women across all organizations, all years taken together?
  7. Create an array containing the time average of the percentage of women for each organization (i.e., average data along axis 1).
  8. Which organization had the highest percentage of women in 2004? (Hint: np.argmax.)
  9. Plot a histogram of the percentage of women in the
    different organizations in 2006 (hint: np.histogram, then matplotlib's bar or plot for the visualization).

Advanced indexing (fancy indexing)

Numpy arrays can be indexed with slices, but also with boolean arrays (masks) or integer arrays: these more elaborate operations are called fancy indexing.

Masks

>>> np.random.seed(3)
>>> a = np.random.random_integers(0, 20, 15)
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])
>>> (a%3 == 0)
array([False,  True, False,  True, False, False, False,  True, False,
        True,  True, False,  True, False, False], dtype=bool)
>>> mask = (a%3 == 0)
>>> extract_from_a = a[mask] # we could write directly a[a%3==0]
>>> extract_from_a # we extract a sub-array thanks to the mask
array([ 3,  0,  9,  6,  0, 12])

Extracting a sub-array with a mask produces a copy of this sub-array, not a view:

>>> extract_from_a[:] = -1 # modifying the copy leaves a untouched
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])

Indexing with masks can be very useful to assign a new value to a sub-array:

>>> a[mask] = 0
>>> a
array([10,  0,  8,  0, 19, 10, 11,  0, 10,  0,  0, 20,  0,  7, 14])
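A classic use of this kind of mask assignment, sketched here, is clipping: set all the negative values of an array to zero in one line.

```python
import numpy as np

# Clip negative values to zero with a boolean mask.
a = np.array([1, -2, 3, -4, 5])
a[a < 0] = 0
print(a)   # [1 0 3 0 5]
```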

Indexing with an array of integers

>>> a = np.arange(10)
>>> a[::2] += 3 # so as not to have the usual np.arange(10)...
>>> a
array([ 3,  1,  5,  3,  7,  5,  9,  7, 11,  9])
>>> a[[2, 5, 1, 8]] # ou a[np.array([2, 5, 1, 8])]
array([ 5,  5,  1, 11])

An array can be indexed with an integer array in which the same index is repeated several times:

>>> a[[2, 3, 2, 4, 2]]
array([5, 3, 5, 7, 5])

New values can be assigned with this kind of indexing:

>>> a[[9, 7]] = -10
>>> a
array([  3,   1,   5,   3,   7,   5,   9, -10,  11, -10])
>>> a[[2, 3, 2, 4, 2]] +=1
>>> a
array([  3,   1,   6,   4,   8,   5,   9, -10,  11, -10])
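Note in the output above that index 2, although repeated three times in the index array, was incremented only once: with +=, each index is applied once. In recent numpy versions, np.add.at makes the repetitions accumulate (a sketch):

```python
import numpy as np

a = np.arange(10)
a[[2, 3, 2]] += 1           # index 2 is repeated but incremented only once
print(a[2])                 # 3

b = np.arange(10)
np.add.at(b, [2, 3, 2], 1)  # unbuffered: repeated indices accumulate
print(b[2])                 # 4
```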

When a new array is created by indexing with an integer array, the new array has the same shape as the integer array:

>>> a = np.arange(10)
>>> idx = np.array([[3, 4], [9, 7]])
>>> a[idx]
array([[3, 4],
       [9, 7]])

>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> i = np.array( [ [0,1],
...              [1,2] ] )
>>> j = np.array( [ [2,1],
...              [3,3] ] )
>>> a[i,j]
array([[ 2,  5],
       [ 7, 11]])


Let's go back to our statistics on the percentage of women in research (the data and organisms arrays):

  1. Create a sup30 array of the same shape as data, with value 1 where the value of data is greater than 30%, and 0 otherwise.
  2. Create an array containing, for each year, the organization with the highest percentage of women.

Broadcasting

Basic operations on numpy arrays (addition, etc.) are done element-wise and thus operate on arrays of the same size. It is nevertheless possible to perform operations on arrays of different sizes if numpy can transform these arrays so that they all have the same size: this transformation is called broadcasting.

(Figure illustrating broadcasting between arrays of different shapes omitted.)

In IPython this gives:

>>> a = np.arange(0, 40, 10)
>>> b = np.arange(0, 3)
>>> a = a.reshape((4,1)) # a must be turned into a "vertical" array
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])

We have already used broadcasting without knowing it:

>>> a = np.arange(20).reshape((4,5))
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]])
>>> a[0] = 1 # we equate arrays of dimension 1 and 0
>>> a[:3] = np.arange(1,6)
>>> a
array([[ 1,  2,  3,  4,  5],
       [ 1,  2,  3,  4,  5],
       [ 1,  2,  3,  4,  5],
       [15, 16, 17, 18, 19]])

We can even use fancy indexing and broadcasting at the same time; let's take again an example already used above:

>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> i = np.array( [ [0,1],
...              [1,2] ] )
>>> a[i, 2] # same as a[i, 2*np.ones((2,2), dtype=int)]
array([[ 2,  6],
       [ 6, 10]])

Broadcasting may seem a bit magical, but it is actually quite natural to use it as soon as we want to solve a problem whose output is an array with more dimensions than the input data.

Example: let's build an array of distances (in miles) between the cities of Route 66: Chicago, Springfield, Saint Louis, Tulsa, Oklahoma City, Amarillo, Santa Fe, Albuquerque, Flagstaff and Los Angeles.

>>> mileposts = np.array([0, 198, 303, 736, 871, 1175, 1475, 1544,
...                         1913, 2448])
>>> tableau_de_distances = np.abs(mileposts - mileposts[:,np.newaxis])
>>> tableau_de_distances
array([[   0,  198,  303,  736,  871, 1175, 1475, 1544, 1913, 2448],
       [ 198,    0,  105,  538,  673,  977, 1277, 1346, 1715, 2250],
       [ 303,  105,    0,  433,  568,  872, 1172, 1241, 1610, 2145],
       [ 736,  538,  433,    0,  135,  439,  739,  808, 1177, 1712],
       [ 871,  673,  568,  135,    0,  304,  604,  673, 1042, 1577],
       [1175,  977,  872,  439,  304,    0,  300,  369,  738, 1273],
       [1475, 1277, 1172,  739,  604,  300,    0,   69,  438,  973],
       [1544, 1346, 1241,  808,  673,  369,   69,    0,  369,  904],
       [1913, 1715, 1610, 1177, 1042,  738,  438,  369,    0,  535],
       [2448, 2250, 2145, 1712, 1577, 1273,  973,  904,  535,    0]])


Good practices

In the previous example, we can note some good (and bad) practices:

  • Give explicit variable names (no comment is needed to explain what is in the variable).
  • Put spaces after commas, around =, etc. A number of rules for writing "beautiful" code (and, more importantly, for using the same conventions as everybody else!) are given by the Style Guide for Python Code and the Docstring Conventions page (to organize help messages).
  • Except in special cases (e.g., a course for French speakers?), give variable names in English and write the comments in English (imagine inheriting a code base commented in Russian...).

Many grid- or network-based problems can also use broadcasting. For instance, if we want to compute the distance from the origin of the points of a 5x5 grid, we can do:

>>> x, y = np.arange(5), np.arange(5)
>>> distance = np.sqrt(x**2 + y[:, np.newaxis]**2)
>>> distance
array([[ 0.        ,  1.        ,  2.        ,  3.        ,  4.        ],
       [ 1.        ,  1.41421356,  2.23606798,  3.16227766,  4.12310563],
       [ 2.        ,  2.23606798,  2.82842712,  3.60555128,  4.47213595],
       [ 3.        ,  3.16227766,  3.60555128,  4.24264069,  5.        ],
       [ 4.        ,  4.12310563,  4.47213595,  5.        ,  5.65685425]])

The values of the distance array can be displayed as color levels thanks to the pylab.imshow function (syntax: pylab.imshow(distance); see the help for more options).


Note: the numpy.ogrid function directly creates the x and y vectors of the previous example, with two different "significant dimensions":

>>> x, y = np.ogrid[0:5, 0:5]
>>> x, y
(array([[0],
       [1],
       [2],
       [3],
       [4]]), array([[0, 1, 2, 3, 4]]))
>>> x.shape, y.shape
((5, 1), (1, 5))
>>> distance = np.sqrt(x**2 + y**2)

np.ogrid is thus very useful as soon as we have computations to perform on a grid. np.mgrid, on the other hand, directly provides matrices full of indices for the cases where we can't (or don't want to) benefit from broadcasting:

>>> x, y = np.mgrid[0:4, 0:4]
>>> x
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
>>> y
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])
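The relation between the two can be checked directly: broadcasting the "thin" ogrid vectors reproduces exactly the full mgrid matrices. A sketch:

```python
import numpy as np

x1, y1 = np.ogrid[0:4, 0:4]           # shapes (4, 1) and (1, 4)
x2, y2 = np.mgrid[0:4, 0:4]           # shapes (4, 4) and (4, 4)
X, Y = np.broadcast_arrays(x1, y1)    # explicit broadcasting to (4, 4)
print(np.array_equal(X, x2) and np.array_equal(Y, y2))   # True
```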

Synthesis exercise: a medallion for Lena

We are going to do a few manipulations on numpy arrays, starting from the famous image of Lena. scipy provides a 2D array of Lena's image with the scipy.lena function:

>>> import scipy
>>> lena = scipy.lena()

Here are some images that we will obtain through our manipulations: use different colormaps, crop the image, modify some parts of the image.

  • Let's use pylab's imshow function to display the image of Lena.
In [3]: import pylab
In [4]: lena = scipy.lena()
In [5]: pylab.imshow(lena)
  • Lena is then displayed in false colors; a colormap must be specified for the image to appear in grey levels.
In [6]: pylab.imshow(lena, cmap=pylab.cm.gray)
In [7]: # or
In [8]: pylab.gray()
  • Create an array where Lena's framing is tighter: remove for example 30 pixels from all sides of the image. Display this new array with imshow to check.
In [9]: crop_lena = lena[30:-30,30:-30]
  • We now want to surround Lena's face with a black medallion. To do so, we must

    • create a mask corresponding to the pixels we want to turn black. The mask is defined by the condition (y-256)**2 + (x-256)**2 > 230**2
In [15]: y, x = np.ogrid[0:512,0:512] # the x and y indices of the pixels
In [16]: y.shape, x.shape
Out[16]: ((512, 1), (1, 512))
In [17]: centerx, centery = (256, 256) # center of the image
In [18]: mask = ((y - centery)**2 + (x - centerx)**2) > 230**2


  • assign the value 0 to the pixels of the image selected by the mask. The syntax for this is extremely simple and intuitive:
In [19]: lena[mask] = 0
In [20]: imshow(lena)
Out[20]: <matplotlib.image.AxesImage object at 0xa36534c>
  • Bonus question: copy all the instructions from this exercise into a script, then run this script in IPython with %run.
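The whole medallion construction can be sketched without scipy.lena (which has been removed from recent scipy versions) by applying the same circular mask to a synthetic 512x512 array:

```python
import numpy as np

# Synthetic 512x512 "image" standing in for scipy.lena.
image = np.arange(512 * 512, dtype=float).reshape(512, 512)
y, x = np.ogrid[0:512, 0:512]              # pixel indices
centerx, centery = 256, 256                # center of the image
mask = (y - centery) ** 2 + (x - centerx) ** 2 > 230 ** 2
image[mask] = 0                            # black outside the disc
print(image[0, 0], image[256, 256] != 0)   # corner is black, center is not
```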

Conclusion: what do you need to know about numpy arrays to get started?

  • Know how to create arrays: array, arange, ones, zeros.

  • Know the shape of an array with array.shape, then use slicing to obtain different views of the array: array[::2], etc. Change the shape of an array with reshape.

  • Obtain a subset of an array's elements and/or modify their values with masks:

    >>> a[a<0] = 0
  • Know how to perform some operations on arrays, such as finding the max or the mean (array.max(), array.mean()). No need to remember everything, but have the reflex of searching the documentation!

  • For more advanced use: master indexing with arrays of integers, as well as broadcasting. Know more numpy functions for performing operations on arrays.