Monday 19 March 2018, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Invited by the Vision group
Monday 5 March 2018, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
COSMO, a Bayesian model of perceptuo-motor interactions in speech communication
I will present COSMO (Communicating Objects using Sensory-Motor Operations), a computational model for analyzing the functional role of sensory-motor interactions in speech perception and speech production. I will present three properties of COSMO in speech perception, respectively called redundancy, complementarity (within the “auditory-narrowband versus motor-wideband” framework) and specificity (according to which auditory cues would be more efficient for vowel decoding, and motor cues more efficient for decoding plosive articulation). I will sketch a possible neuroanatomical architecture for COSMO, and capitalize on properties of the auditory versus motor decoders to address various neurocognitive studies in the literature. I will conclude on the interest of combining a complementary exogenous decoding system, optimally fitted to the environmental stimuli, with an endogenous decoding system equipped with generative models.
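As a toy illustration of the kind of Bayesian sensory-motor fusion a model like COSMO describes (this is not the authors' implementation; the phoneme set, likelihood values and conditional-independence assumption are all invented for illustration), two decoders can be combined by multiplying their likelihoods with a prior and renormalizing:

```python
def fuse(auditory_lik, motor_lik, prior):
    """Posterior over phonemes from two conditionally independent decoders."""
    unnorm = {ph: prior[ph] * auditory_lik[ph] * motor_lik[ph] for ph in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {ph: v / z for ph, v in unnorm.items()}

# Hypothetical numbers: the auditory decoder separates the two plosives
# only weakly, while the motor decoder separates them sharply.
prior    = {"b": 0.5, "d": 0.5}
auditory = {"b": 0.6, "d": 0.4}
motor    = {"b": 0.9, "d": 0.1}
posterior = fuse(auditory, motor, prior)  # "b" dominates after fusion
```

Complementarity in this sense simply means that the fused posterior can be sharper than either decoder alone.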
Invited by Speech Team
Monday 12 February 2018, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Automaticity and attentional control in neural language processing
A long-standing debate in the science of language is whether our capacity to process language draws on attentional resources, or whether some stages or types of this processing may be automatic. I will present a series of experiments in which this issue was addressed by removing attention to linguistic stimuli or by modulating the level of attention on the language input while recording brain activity. The overall results of these studies show that the language function does possess a certain degree of automaticity, which seems to apply to different types of information, including lexical access, semantic processing, syntactic parsing and even acquisition of new lexemes. Furthermore, such automaticity appears to exist in both auditory speech perception and visual processing of written words. It can be explained, at least in part, by the robustness of strongly connected linguistic memory circuits in the brain that can activate fully even when attentional resources are low. At the same time, this automaticity is limited to the very first stages of linguistic processing (<200 msec from the point in time when the relevant information is available in the input, e.g. the word recognition point). Later processing steps are, in turn, more affected by attention modulation and possibly reflect a more in-depth, secondary processing or re-analysis of the input, dependent on the amount of resources allocated. The results will be discussed in the framework of distributed neural circuits which function as memory traces for language elements in the human brain.
Invited by Speech Team
Monday 5 February 2018, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Prefrontal neuronal circuits controlling emotional behavior
When facing danger, mammals display a broad range of fear behaviors, ranging from active (avoidance) to passive (freezing) fear responses. The canonical model of fear circuits posits that the basolateral amygdala directly controls fear responses through projections to the brainstem. Using state-of-the-art behavioral, electrophysiological and optogenetic manipulations, we provide evidence challenging this view. Our results indicate (i) that specific cell populations within the medial prefrontal cortex support different coding strategies for fear behavior, and (ii) that specific manipulation of prefrontal neurons projecting to the brainstem directly regulates conditioned fear responses.
Invited by Vision Team
Monday 29 January 2018, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Brain activations associated with attention to speech and text
In our recent functional magnetic resonance imaging (fMRI) study (Moisala et al., Front. Hum. Neurosci., 2015), young adult participants (N = 18) were presented with concurrent spoken and written sentences and were instructed to attend to either one of the sentences or to both of them. Dividing attention between the spoken and written sentences was associated with bilateral activity enhancements in dorsolateral and dorsomedial prefrontal areas. These areas also showed smaller activity enhancements during selective attention to speech or text in the presence of task-irrelevant text or speech, respectively, suggesting that dealing with distracting information involves the same brain areas as dividing attention. In a subsequent study applying the same experimental paradigm (Moisala et al., NeuroImage, 2016), we found in healthy adolescents and young adults (N = 149) that the more the participants multitasked in their daily life, the more right prefrontal activity and the less accurate performance they showed during attention to speech or text in the presence of irrelevant text or speech, respectively. These results suggest that habitual multitasking may lead to enhanced distractibility. Yet, in another experimental condition (Moisala et al., Brain Res., 2016), the adolescent and young adult participants (N = 167) showed better performance and higher dorsolateral prefrontal activity during a demanding bimodal verbal working-memory task the more they played computer and video games in daily life, suggesting that computer gaming may enhance attention and memory skills. Moreover, a comparison of results from these attention and working-memory studies (Moisala et al., submitted) suggests continued development of prefrontal executive functions during adolescence.
In a very recent study (Leminen et al., in preparation), we investigate selective attention to speech under more naturalistic conditions, in which participants see the facial movements of the selectively attended speaker in the presence of irrelevant background speech. According to our preliminary results, higher activity is observed in superior temporal and inferior parietal areas for higher-quality attended speech (natural vs. noise-vocoded) and for more perceivable speech-related facial movements (non-masked vs. masked). Moreover, right superior parietal and dorsolateral prefrontal areas show higher activity for lower speech quality, suggesting an enhanced demand for attention.
Monday 27 November 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The mindless moving eyes: A novel, universal, account of eye guidance in a range of cognitive tasks
Saccades are very fast movements of the eyes that bring poorly resolved peripheral input onto the center of the retinas for detailed visual analysis. Crucial for reading and seeing, they have long been thought to be an open window on the mind and the neocortex. The underlying assumption, that saccades are aimed at foveating words or objects of (possible) interest, still stands, having survived the no-less-popular visual-saliency account of scene viewing, while remaining central in reading models. During my talk, I will first review behavioral and neural evidence against this long-standing hypothesis. I will then present novel neuro-computational data revealing that mindless visuo-motor principles in the superior colliculus, a midbrain structure involved in saccade programming, can predict where humans move their eyes in a range of tasks, in particular during sentence reading and the free viewing of natural scenes. I will finally discuss how top-down cognitive processes may intervene on top of default, low-level visuo-motor mechanisms to influence oculomotor behavior.
Invited by the Vision team
Monday 20 November 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Socially guided statistical learning in early language development
How do infants learn from other people? The microstructure of interaction between infants and caregivers is characterized by statistical regularities in the form and timing of adult responsiveness. Manipulations of responsiveness using biological and robotic interaction partners reveal that social reactions to prelinguistic vocalizing facilitate rapid developmental advances in speech and language. Findings from a new paradigm show that certain characteristics of social interaction are rewarding for infants, and reward pathways may drive learning in social contexts. Parallel studies in songbirds further illustrate the robustness of socially guided learning in vocal development and allow for direct investigation of the developing connections between reward and learning circuitry. Thus prelinguistic vocal learning, one of the earliest stages of language acquisition, is an active, socially embedded process.
Invited by the Perception, Action and Cognitive Development group
Monday 26 June 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Attention, perception, and cognitive control: testing the limits
Human capacity for perception, attention and cognitive control is limited, and processing is often severely compromised in tasks that load these capacities, as well as in people with reduced capabilities. For example, high load in perceptual processing results in the phenomena of inattentional blindness and deafness, whereas individual differences in the level of attention-deficit symptoms in childhood can predict the level of distractibility during task performance in adulthood. In this talk, I will present recent research on the effects of perceptual load on attention, perception, awareness and cognitive control, and on the underlying neural mechanisms. Applications of this work in both clinical and non-clinical settings (e.g. for the automotive industry) will be described as well.
Invited by the AVOC team
Monday 12 June 2017, 11:00. Salle de Conférence R229, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Imaging pain, analgesia and anaesthesia-induced altered states of consciousness
The ability to experience pain is evolutionarily ancient and shared across species. Acute pain is the body's alarm and warning system and, as such, a good thing. Chronic pain is that system gone wrong, and it is now one of the largest medical health problems worldwide. The brain is key to these experiences, and relating specific neurophysiological measures from advanced brain imaging to perceptual or non-perceptual changes in pain induced by peripheral or central sensitisation, or by psychological or pharmacological mechanisms, has tremendous value. Identifying non-invasively where functional and structural plasticity, sensitisation and other amplification or attenuation processes occur along the pain neuraxis for an individual, and relating these neural mechanisms to specific pain experiences, measures of pain relief, persistence of pain states, degree of injury and the subject's underlying genetics, has neuroscientific and potential diagnostic relevance.
As such, advanced neuroimaging methods can powerfully aid explanation of a subject’s multidimensional pain experience, analgesia and even what makes them vulnerable to developing chronic pain.
Far less work has been directed at understanding what changes occur in the brain during altered states of consciousness induced either endogenously (e.g. sleep) or exogenously (e.g. anaesthesia). However, that situation is changing rapidly. For example, our recent multimodal neuroimaging work explores how anaesthetic agents produce altered states of consciousness such that perceptual experiences of pain and awareness are degraded. This is bringing fascinating insights into the complex phenomenon of anaesthesia.
- The basic neuroanatomy of pain processing in the human brain – concept of a flexibly accessible network
- How different neuroimaging techniques provide insight into chronic and acute pain (and analgesia)
- How neuroimaging tools are being used to unravel how anaesthetics produce altered states of consciousness
Invited by the Vision team
Monday 29 May 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Fifty years without free will
How are actions initiated by the human brain when there is no external sensory cue or other immediate imperative? Much is understood about how the brain decides between competing alternatives, leading to different behavioral responses. But far less is known about how the brain decides "when" to perform an action, or "whether" to perform an action in the first place, especially in a context where there is no sensory cue to act, such as during foraging. More than fifty years ago, in 1965, scientists discovered a slow buildup of neural activity that precedes the onset of spontaneous self-initiated movements (movements made without any cue telling you when to move). This buildup was dubbed the "readiness potential" (RP) or Bereitschaftspotential, and has since been confirmed at the single-neuron level. For decades it has been assumed to reflect a process of "planning and preparation for movement". In the 1980s the RP was used to argue that we do not have conscious free will, because it appears to begin even before we are aware of our own conscious decision to act. Now we and others have challenged the long-standing interpretation of the RP by showing that the early part of the RP might reflect sub-threshold random fluctuations in brain activity that have an influence on the precise moment that the movement begins. These fluctuations thus appear as part of the "signal" when we analyze the data time-locked to the time of movement onset. This insight leads to novel and testable predictions concerning both objective (brain signals and behavior) and subjective (the perceived time of the conscious intention) phenomena, and also exposes serious limitations of the age-old practice of working with movement-locked data epochs.
Invited by the AVoC team
Modulation of attention to the eyes and mouth of a talking face during development
Many recent studies have approached the topic of audiovisual speech perception by analyzing the way infants and adults explore a speaker's face and use the audiovisual speech cues located at the eyes and mouth of a talker (Ayneto & Sebastián-Gallés, 2017; Barenholtz, Mavica, & Lewkowicz, 2016; Hillairet de Boisferon, Hansen-Tift, Minar, & Lewkowicz, 2016; Lewkowicz & Hansen-Tift, 2012; Pons, Bosch, & Lewkowicz, 2015; Ter Schure, Junge, & Boersma, 2016). In this talk I will present results from studies from our lab with infants, children and adults, showing different factors that seem to modulate attention to the eyes and mouth of a talking face. In infancy, factors such as (1) type of bilingualism, (2) language familiarity, and (3) communication and social abilities seem to play a key role in the use of the audiovisual information located at the mouth. In children and adults, on the other hand, factors such as (1) language proficiency, (2) language similarity and (3) language dominance seem to be responsible for the relative attention to the mouth of a talker. Finally, I will also discuss how this redundant audiovisual information is used by children with specific language impairment (SLI) as compared to typically developing (TD) children.
Invited by the Speech team
Monday 15 May 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Invited by the Action team
Friday 12 May 2017. Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Monday 24 April 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Aspects of speech input in parent-child interaction and their relationship to child language development: A snapshot of ongoing research at Stockholm Babylab
Child language development is speech-input dependent, making parent-child interaction one of the main vessels promoting language development. At Stockholm Babylab, a large proportion of the research that I am part of focuses on speech input within the nucleus of the parent-child dyad. This parental speech input is studied at different levels.
First, on a macro level, it is the amount of speech input that correlates with language outcomes (e.g., Weisleder & Fernald, 2013). At Stockholm Babylab, we are currently evaluating the LENA system as applied to Swedish (Schwarz et al., under review), paving the way for large-scale studies and interventions based on LENA estimates. The LENA system has the potential to open up the methodological bottleneck of transcribing or annotating recordings, as it automatically segments the audio signal based on its acoustic properties. Further, recent results from our team indicate that the amount of speech input is directly related to the maturation of speech sound categories early in infancy (Marklund, Schwarz, & Lacerda, submitted), suggesting that the amount of speech input is important for language development at a fine-grained level long before the child herself starts to talk.
Second, the contingency of parental responsiveness in parent-child interaction is also related to language development (e.g., Goldstein & Schwade, 2008). The Stockholm Babylab team has shown that parents of 18-month-old children have different response times depending on the vocabulary size of the child (Marklund, Marklund, Lacerda, & Schwarz, 2015). Parents of children with large vocabularies respond faster to their children's utterances than do parents of children with smaller vocabularies. While not implying causation, this study highlights that temporal aspects of parent-child interaction are important to study with respect to the impact they may have on language development. The same applies to the timing of infant responses to parental speech input. An ongoing study at Stockholm Babylab on temporal aspects of parent-child interaction gives a first indication that 6-month-olds' response times to parent target utterances depend on whether the parent is the primary or secondary caregiver, implying differences in the amount of speech exposure provided by the two parents (Schwarz et al., to be submitted). Note that this is work in progress and that these results are only preliminary.
Third, on a micro level, we study affect in parents' infant-directed speech and its relation to vocabulary development, in collaboration with The MARCS Institute at Western Sydney University, in two ongoing studies: one that describes the perceived affective intent of Swedish infant-directed speech and compares it to previous results on Australian English (Kitamura & Burnham, 2003), and one that investigates the acoustic correlates of perceived affect. Other micro-level characteristics of parental input, such as repetitions, are studied in the ongoing MINT project, which I will present in an overview.
Goldstein, M. H., & Schwade, J. A. (2008). Social feedback to infants' babbling facilitates rapid phonological learning. Psychological Science, 19(5), 515-523. doi: 10.1111/j.1467-9280.2008.02117.x
Kitamura, C., & Burnham, D. (2003). Pitch and communicative intent in mother's speech: adjustments for age and sex in the first year. Infancy, 4(1), 85-110.
Marklund, E., Schwarz, I.-C., & Lacerda, F. (submitted). Amount of speech exposure predicts vowel categorization in 4- and 8-month-olds. Developmental Cognitive Neuroscience.
Marklund, U., Marklund, E., Lacerda, F., & Schwarz, I.-C. (2015). Pause and utterance duration in child-directed speech in relation to child vocabulary size. Journal of Child Language, 42(5), 1158-1171. doi: 10.1017/S0305000914000609
Schwarz, I.-C., Botros, N., Lord, A., Marcusson, A., Tidelius, H., & Marklund, E. (under review). The LENA™ system applied to Swedish: Reliability of the Adult Word Count estimate. Paper to be presented at Interspeech 2017, Stockholm, Sweden.
Schwarz, I.-C., Ekman, M., Hällström, E., Moretta, M. R., Myr, J., & Marklund, U. (to be submitted). 6-month-olds respond faster to target utterances from primary caregivers than to secondary caregivers. Paper to be presented at the Workshop Many Paths to Language Acquisition 2017, Nijmegen, Netherlands.
Weisleder, A., & Fernald, A. (2013). Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11), 2134-2152. doi: 10.1177/0956797613488145
Invited by the Speech team
Monday 27 March 2017, 11:00. Salle des Conférences (R229), Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Talking with kids really matters: Early language experience shapes later life chances
Children from disadvantaged families typically enter school with lower language skills than children from families higher in socioeconomic status (SES). Our longitudinal studies show that these language gaps between rich and poor children begin to emerge in infancy. In diverse groups of English- and Spanish-learning toddlers from higher- and lower-SES families, we found that significant differences in vocabulary and real-time language processing efficiency were already evident at 18 months, and by 24 months there was a 6-month gap between SES groups in processing skills critical to language development. Where do such early differences come from? One critical factor is that parents differ in the language stimulation they provide their infants. We have found that parents who talk more with their children in engaging and supportive ways have children who are faster in language processing and more advanced in vocabulary than those who hear less child-directed speech. We also explore how vocabulary gaps become knowledge gaps. High-quality verbal engagement includes extensive elaboration – linking new words together in ways that help the child build up complex concepts. Through rich verbal interactions with caregivers, infants begin to link words together into networks of meanings, with cascading benefits for the growth of knowledge. Converging findings from observational and experimental studies show that regardless of economic circumstances, parents who are more verbally engaged with their infants can help their children learn more quickly.
Invited by the Speech team
Monday 20 March 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The graded and discrete nature of conscious perception
How do the stimuli that engage our sensory systems rise to the level of conscious perception? Some models view awareness as graded, with the quality of a conscious percept reflecting the amount of sensory information available, whereas other models posit that the resulting conscious percept is essentially discrete: either all or none. Using the attentional blink (AB) paradigm and mixture modeling analysis, we previously showed that target awareness arises at central stages of information processing in an all-or-none manner. Graph-theory analysis of fMRI data reinforces these findings by showing that target awareness is associated with widespread changes in the brain's functional connectivity, consistent with all-or-none, global-workspace models of consciousness. However, recent findings from our lab suggest that, at least in the context of the AB, awareness can also be graded when the task loads early perceptual processing. Taken together, these findings suggest that target awareness can be graded or discrete depending on the stage of information processing taxed by task demands.
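For readers unfamiliar with mixture modeling of awareness data, the sketch below (not the authors' analysis; the two-component form, parameter values and synthetic data are assumptions for illustration) fits a guess-rate/precision mixture to response errors with a few lines of EM:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic response errors (radians): 70% "aware" trials, tightly
# distributed around the target, plus 30% uniform random guesses.
errors = np.concatenate([rng.normal(0.0, 0.3, 700),
                         rng.uniform(-np.pi, np.pi, 300)])

g, sigma = 0.5, 1.0   # initial guess rate and precision parameter
for _ in range(200):  # EM iterations
    dens_aware = (1 - g) * np.exp(-errors**2 / (2 * sigma**2)) \
                 / (sigma * np.sqrt(2 * np.pi))
    dens_guess = g / (2 * np.pi)
    r = dens_guess / (dens_aware + dens_guess)  # P(guess | error)
    g = r.mean()                                # update guess rate
    sigma = np.sqrt(np.sum((1 - r) * errors**2) / np.sum(1 - r))
```

Under this logic, an all-or-none account predicts that a manipulation such as the attentional blink changes the guess rate g while leaving the precision parameter sigma intact, whereas a graded account predicts changes in sigma.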
Invited by the AVOC team
Monday 13 March 2017. LPP seminar
Action modulating perceptual decisions
Perceptual decisions are classically thought to depend solely on characteristics of the sensory signal. In this view, the motor response is considered a neutral output channel that only reflects the upstream decision. Contrary to this view, I will present two examples showing that the processing involved in action generation can modulate our perceptual decisions. First, I will show that the duration of a visual stimulus presented during reaching preparation feels dilated, due to the increased capacity of visual processing during this period. Second, I will show that judgements of visual motion direction can be biased by asymmetric physical resistance applied to the two judgment options (i.e. a left or right reaching movement). Asymmetric resistance on the hand during the motion judgments can similarly bias subsequent motion judgments performed vocally, suggesting that the bias occurs not at the stage of action selection, but at the stage of judgement based on the input stimulus. These results indicate the existence of a mutual interaction between the perceptual and action systems in the human brain. Actions are selected using information from the environment, but at the same time, how we act may define how we interpret the environment.
Friday 10 March 2017. Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Monday 6 March 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Cortical Mechanisms of Attention
Over recent years, the neural mechanisms of spatial attention, via feedback signals from spatially mapped control areas in frontal/parietal cortex, have been described in much detail. For non-spatial attention to different sensory modalities, complex objects, and so on, the control mechanisms seem much more complex, and experimental work has only begun to identify possible sources of top-down control in the inferior part of frontal cortex. Obviously, however, spatial and non-spatial attention are often combined in everyday tasks. How these different control networks work together is a major question in cognitive neuroscience. To answer these remaining questions, we combined MEG and fMRI data in human subjects to identify not only the sources of spatial and non-spatial feedback signals, but also the mechanisms by which these different networks interact with sensory areas during attention. We identified two separable networks, in superior and inferior frontal cortex, mediating spatial versus non-spatial attention, respectively. Using multi-voxel pattern analysis, we found that spatial and non-spatial information are represented in different subpopulations of frontal cortex. Most importantly, our analyses of temporally high-resolution MEG data also show that both control structures engage selectively in coherent interactions with the sensory areas that represent the attended stimulus. Rather than a zero-phase-lag connection, which would indicate common input, the interactions between frontal cortex and sensory areas are phase-shifted to allow for a 20 ms transmission time. This seems to be just the right time for signals in one area to arrive at a time of maximum depolarization in the connected area, increasing their impact. Further, we were able to identify the top-down directionality of these oscillatory interactions, establishing superior versus inferior frontal cortex as key sources of spatial versus non-spatial attentional inputs, respectively.
Finally, we combined transcranial alternating current stimulation (tACS) with MEG recordings to directly test the causal role of local oscillation patterns in visual attention. After stimulating visual cortex, we used evoked responses to evaluate the effects of frequency entrainment on the attentional weighting of visual input. By analyzing both the phase and power spectra of the entrained rhythms, we show how experimentally induced alpha rhythms lead to lasting inhibition and, in consequence, to suppressed visual responses, mimicking effects of visual attention.
Invited by the Vision team
Monday 27 February 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Fairness: From Biology to Culture
Invited by the Vision team
Monday 20 February 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Gaze control in information foraging for perceptual decisions
Many perceptual judgements involve actively sampling visual information from the environment with gaze. We pick up visual information by fixating different locations in a scene and, typically, put this information together in order to decide on a course of action (e.g. where to place your foot on a rocky path). Most models and studies of perceptual decision making involve just a single source of information that has to be mapped onto a small number of discrete decision categories. I will present work on the control of eye movements in decision problems that involve actively gathering and combining visual information from several locations.
I will focus on two key questions: (i) How is time allocated to different sources of information that may vary in the quality of evidence they provide? (ii) What decision variable is being computed to govern the active sampling strategy? Our work shows that time is allocated adaptively in that more noisy information sources are sampled for longer, particularly when prior knowledge about the information quality of different sources is available. In addition, participants are able to track some measure of their own uncertainty around the task relevant variable (visual motion in this case), which may be used to govern their sampling strategy online.
Invited by the Vision team
Monday 30 January 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The role of experience in the development of face processing during the first year of life
Faces are perhaps the most prevalent visual stimulus in children's environment. From birth onwards, children encounter thousands of faces, which vary not only in identity, but also in gender, age, attractiveness, species and race. Given the adaptive significance of the ability to process faces, the hypothesis of an innate disposition for this ability is appealing. Exactly which components of the face processing system are present at birth, which develop first, and at what stage the system becomes adult-like are still hotly debated topics. I will review evidence accumulated over the last several decades suggesting a prominent role of experience in shaping children's face processing expertise, which in turn forms a foundation for later face expertise in adulthood and also affects their social interaction.
Invited by the Action team
Tuesday 24 January 2017, 11:00. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris.
Voluntary saccadic eye movements ride the attentional rhythm
Visual perception seems continuous, but recent evidence suggests that the underlying perceptual mechanisms are in fact periodic, particularly visual attention. Because visual attention is closely linked to the preparation of saccadic eye movements, the question arises as to how periodic attentional processes interact with the preparation and execution of voluntary saccades. In two experiments, human observers made voluntary saccades between two placeholders, monitoring each one for the presentation of a threshold-level target. Detection performance was evaluated as a function of latency with respect to saccade landing. The time course of detection performance revealed oscillations at around 4 Hz both before the saccade at the saccade origin, and after the saccade at the saccade destination. Furthermore, oscillations before and after the saccade were in phase, meaning that the saccade did not disrupt or reset the ongoing attentional rhythm. Instead, it seems that voluntary saccades are executed as part of an ongoing attentional rhythm, with the eyes in flight during the troughs of the attentional wave. This finding demonstrates for the first time that periodic attentional mechanisms affect not only perception but also overt motor behaviour.
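A rhythm like this is typically revealed by spectral analysis of the behavioral time course. The sketch below (synthetic data; the bin rate and modulation depth are invented, and this is not the study's analysis code) shows the basic idea:

```python
import numpy as np

fs = 60.0                        # assumed bin rate of the performance time course (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s of target latencies relative to saccade landing
perf = 0.5 + 0.1 * np.cos(2 * np.pi * 4 * t)  # synthetic ~4 Hz modulation around 50% hits

spec = np.abs(np.fft.rfft(perf - perf.mean()))  # amplitude spectrum, mean removed
freqs = np.fft.rfftfreq(len(perf), d=1 / fs)
peak_hz = freqs[np.argmax(spec)]                # dominant behavioral rhythm (here, 4 Hz)
```

In practice, the phase consistency across pre- and post-saccadic epochs reported here would be assessed on the complex spectrum rather than on its magnitude alone.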
Invited by the Vision team
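The rhythmic effect described in the abstract above is typically quantified by binning detection outcomes by their latency relative to saccade landing and looking for a spectral peak in the resulting time-course. A minimal sketch of that kind of analysis on synthetic data (the bin count, trial numbers, and the injected 4 Hz modulation are illustrative; this is not the authors' analysis pipeline):

```python
import numpy as np

def dominant_frequency(times, hits, t_max=1.0, n_bins=50):
    """Bin hit/miss outcomes into a detection time-course and
    return the frequency with the most spectral power."""
    bins = np.linspace(0.0, t_max, n_bins + 1)
    idx = np.digitize(times, bins) - 1
    rate = np.array([hits[idx == b].mean() for b in range(n_bins)])
    rate = rate - rate.mean()                  # remove the DC component
    power = np.abs(np.fft.rfft(rate)) ** 2
    freqs = np.fft.rfftfreq(n_bins, d=t_max / n_bins)
    return freqs[np.argmax(power[1:]) + 1]     # skip the 0 Hz bin

# synthetic trials: detection probability oscillates at 4 Hz
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 20000)
p = 0.5 + 0.2 * np.sin(2 * np.pi * 4.0 * t)
hit = rng.random(20000) < p
print(dominant_frequency(t, hit))  # expect a peak at 4.0 Hz
```

On real data one would additionally test the peak's significance, e.g. against surrogate distributions obtained by shuffling latencies.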
Mon 23 Jan 2017, 14h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris
Which factors predict vocabulary knowledge in 2-year-old bilingual toddlers? Effects of linguistic distance and contextual variables
Young bilingual children typically underperform on language development measures relative to monolingual norms. Detecting genuine language delay, beyond this bilingual difference, is confounded by the range of interacting situational factors and possible language combinations that modulate the rate of development. For the first time, we provide norms of development of expressive and receptive vocabulary for 2-year-old toddlers learning British English and one Additional Language (AL) out of a diverse set of 13 (Bengali, Cantonese, Dutch, French, German, Greek, Hindi-Urdu, Italian, Mandarin, Polish, Portuguese, Spanish and Welsh). These norms are based on CDI measures of vocabulary modulated by a range of predictors of bilingual development (amount of exposure to each language, proportion of each language in parental overheard speech, infant gender) identified in a comprehensive survey. We also show that linguistic distance based on measures of phonological overlap predicts receptive and expressive vocabulary in English and the AL. Finally, we integrate these predictors to develop UKBTAT, an online tool for the estimation of expected vocabulary size in 2-year-old bilingual UK-raised toddlers.
Invited by the Speech team
Mon 09 Jan 2017 - LPP seminar
Invited by the Action team
Sat 19 Nov 2016, 9h00-17h00, Centre Universitaire des Saints-Pères, 45 rue des Saints-Pères, Paris 6ème - LPP seminar
Dear friends of the Café Bilingue,
The Café Bilingue invites you to a colloquium on multilingualism and its maintenance in early-childhood care settings and at school.
The primary objective of this colloquium is to reflect on and exchange views about best practices for maintaining multilingual richness and integrating allophone children in different European countries: France, the United Kingdom, Greece, and Slovenia.
For ten years now, the CAFE Bilingue association has worked to promote linguistic and cultural diversity within the family, in particular by advising binational and bicultural parents on passing on their language and culture. Through these exchanges, we have observed all too often that early-childhood professionals and teachers are scarcely familiar with these particular yet very common situations. In some schools in the Paris region a single foreign language is present, while in others more than twenty different languages are spoken by the pupils. In Greater Paris, some schools host up to 100% children who are native speakers of a language other than French. This demographic reality is unfortunately given little consideration in the training of the staff in contact with these children, who too often find themselves at a loss when faced with the questions it raises. Through this conference and these workshops, our ambition is not only to raise awareness and familiarize early-childhood and teaching staff with issues related to multilingualism, but also to propose concrete actions in favour of multilingualism as a factor of integration and of appropriation of the language of the school and/or the host country.
This international colloquium will be organized in two parts. In the morning, three European researchers specializing in these questions will outline public policies on multilingualism in their countries and present the results of their research on children's multilingualism. The afternoon will consist of "tools and experience exchange" workshops, led in turn by professionals.
Programme of the day:
8h30: Welcome of participants
9h00: Opening of the day by Barbara Abdelilah-Bauer, president of the CAFE Bilingue, linguist and psychosociologist, and Ranka Bijeljac-Babic, vice-president of the CAFE Bilingue, lecturer and researcher at the Université de Poitiers and the Université Paris Descartes
9h15-10h00: A policy in favour of linguistic diversity: the case of France, by Gaid Evnou, in charge of multilingualism at the DGLFLF, Ministère de la Culture et de la Communication
10h00-10h45: "Why bilingualism matters (and gives children much more than two languages)", by Antonella Sorace, professor at the University of Edinburgh, founder and director of the research and information centre "Bilingualism Matters", United Kingdom (talk in English, translated into French)
10h45-11h30: "Slovenian/Italian bilingualism in Slovenia: a study of the pragmatic abilities of bilingual children", by Sara Andreetta, researcher at the University of Nova Gorica, Slovenia (talk in English, translated into French)
11h30-12h00: Coffee break
12h00-12h45: "Pedagogy of multilingualism in the education of refugee children", by Argyro Moumtzidou, specialist in multilingualism and the didactics of languages and cultures, Aristotle University of Thessaloniki, Greece
12h45-14h00: Lunch break
14h00-17h00: Workshops with:
- Jackie Pialhoux, school psychologist in Garges-lès-Gonesse, "La Semaine des Langues" in the schools of Garges, 2015-16. Workshop: "School and multilingualism", a project in the schools of Garges-lès-Gonesse, in partnership with the Éducation nationale and the town's PRE (programme de réussite éducative)
- Amina Benaissa, sociolinguist, president of the association "Les Jardins des Savoirs Méditerranéens". Workshop: "From one language to another, from one culture to another: which identities?"
- Argyro Moumtzidou, specialist in multilingualism and the didactics of languages and cultures, Aristotle University of Thessaloniki, Greece. Workshop: "The intermediate world in the pedagogy of access: towards a sociocultural didactic model"
- Registration required here
- Open to all
- Admission: 30 euros per person, 15 euros for students and teachers, free for members of the Café Bilingue
- Hours: 9:00-17:00
- Venue: amphithéâtre Lavoisier, 3ème étage, Université Paris Descartes, 45 rue des Saints-Pères, 75006 Paris
The CAFE BILINGUE team
Centre d'Animation et de Formation pour l'Éducation bilingue et plurilingue
In Paris since 2006
"Difference is our strength and diversity our richness"
Reserve your place HERE!
In partnership with:
Mon 17 Oct 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Learning and attention in infants: The importance of prediction in development
I will review three lines of research from my lab that have implications for the normative course of development and for the diagnosis of deficits or delays in development among special populations. (1) Statistical learning is a rapid form of implicitly extracting information from the environment. It has been shown to be robustly present in infants, children, and adults. Children with Specific Language Impairment and adults with Autism Spectrum Disorder show different patterns of statistical learning. It may, therefore, serve as both a diagnostic tool and as a potential mechanism that underlies some developmental disorders. (2) The allocation of attention to gather information via statistical learning is controlled by both low-level stimulus salience and by predictive mechanisms. Infants allocate their attention to visual and auditory events so that they ignore both overly simple and overly complex information, while focusing mostly on information of medium complexity. Deviations from this normative pattern of allocating attention may contribute to some developmental disorders. (3) The infant brain must make predictions about upcoming stimuli. We have shown using a brain imaging technique called functional near-infrared spectroscopy (fNIRS) that an auditory cue can predict a visual stimulus, and even in the absence of the visual stimulus this prediction will elicit a brain response in the visual cortex. A follow-up study of prematurely born infants revealed that this brain signature of prediction is absent, despite these at-risk infants (tested at their corrected age) showing predictions at the behavioral level.
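Statistical learning of the kind described in point (1) is classically operationalised as sensitivity to transitional probabilities between adjacent syllables in a continuous stream. A toy sketch of that statistic, with an invented syllable stream (not stimuli from these studies):

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# "go-la-bu" is a word: within-word transitions are perfectly predictable,
# while transitions across a word boundary are not
stream = ["go", "la", "bu", "ti", "da", "ro", "go", "la", "bu"] * 50
tp = transitional_probabilities(stream)
print(tp[("go", "la")])   # 1.0 (within word)
print(tp[("bu", "ti")])   # ≈ 0.5 (across the word boundary)
```

Infants who track these statistics can in principle segment words from fluent speech by positing boundaries where the transitional probability dips.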
Wed 12 Oct 2016, 14h, R229, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Sensitive periods in human development: Evidence from the Bucharest Early Intervention Project
Sat 17 Sep 2016, 10h00-12h30, Centre Universitaire des Saints-Pères, 45 rue des Saints-Pères, Paris 6ème - LPP seminar
The Laboratoire Psychologie de la Perception (CNRS-Université Paris Descartes), the LABEX EFL (Empirical Foundations of Linguistics), the CAFE Bilingue and the Formation Continue (Université Paris Descartes) are organizing a round table with Barbara Cassin on the occasion of the English translation of her book Plus d'une langue.
Participants: Ranka Bijeljac-Babic (psycholinguist), William T. Bishop (translator), Igor Krtolica (philosopher), Amélie Mourgue d'Algue (artist), Thierry Nazzi (developmental psychologist) and Christian Puech (linguist).
This round table will take place on 17 September, 10h00-12h30, at the Centre Universitaire des Saints-Pères, 45 rue des Saints-Pères, Paris 6ème.
The aim of this event is to address various questions that arise around languages in our multilingual world, such as their variability, the definition of the mother tongue, and how one translates from one language to another. The speakers will discuss these questions with Barbara Cassin, developing complementary perspectives, before opening the debate to the public. Registration is required: tablerondeBC@gmail.com
Mon 04 Jul 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Invited by the AVoC team
Mon 06 Jun 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Invited by the AVoC team
Mon 23 May 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Invited by the Speech team
Mon 21 Mar 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
How baby robots help us understand complex dynamics in development
Abstract: Understanding infant development is one of the greatest scientific challenges, as the developing child is a massively complex dynamical system. The development of skills can be viewed as pattern formation through the interactions of multiple mechanisms at multiple spatio-temporal scales. Various processes of self-organization mean that the concepts of “innate” and “acquired” are not adequate explanatory tools: what is needed is a shift from reductionist to systemic accounts. To address this challenge, it is insightful to build and experiment with robots that model the growing infant brain and body. This type of work can help explain how new patterns form in sensorimotor, cognitive, and social development. It complements traditional experimental methods in psychology and neuroscience, where only a few variables can be studied at a time, and provides tools to model the mechanisms of development, going further than simply identifying correlations among variables in black-box statistical studies.
Moreover, work with robots has enabled researchers to consider the body as a variable that can be systematically changed to study the impact on skill formation, something developmentalists could only dream about decades earlier. More generally, work with developing robots has shed new light on development as a complex dynamical system, leading to formal models that integrate mathematics, algorithms, and robots.
Invited by the Perception-Action team (Jacqueline + Véronique)
Organization: Véronique + Lola
Mon 14 Mar 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Distinct neural mechanisms of inhibitory control: evidence from behaviour, EEG and MEG
Selective attention optimises perception by enhancing target processing and suppressing task-irrelevant sensory input. Target enhancement and distractor inhibition are often considered two sides of the same process; however, we argue that each depends on a fundamentally distinct control system. Behavioural studies show that participants are unable to selectively inhibit distracting input via top-down attention cues, whereas predictions derived from experience can be used effectively to inhibit expected task-irrelevant input. Moreover, EEG and MEG data suggest that predictive coding could provide an important mechanism for preparatory distractor suppression.
Invited by the Vision team (Andrei)
Mon 07 Mar 2016 - LPP seminar
Prediction in multimodal emotional speech
Social interactions rely on verbal and non-verbal information sources and their interaction. Crucially, in such communicative interactions we can obtain information about the current emotional state of others. However, emotion expressions are not always clear cut or may be influenced by a specific situational context or learned knowledge. In our work on the temporal and neural correlates of multimodal emotion expressions we address a number of questions by means of ERPs, fMRI, and lesion studies. Within a prediction framework I will focus on the following aspects in my talk: (1) How do we integrate different verbal and non-verbal emotion expressions, (2) How do cognitive demands impact the processing of multimodal emotion expressions, (3) How do we resolve interferences between verbal and non-verbal emotion expressions?
Invited by the Speech team
Mon 22 Feb 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
How our capacity to perceive numbers is neurally implemented and associated with mental arithmetic
Extracting numerical information from the environment is thought to comprise a series of perceptual processes that converge on an abstract, analog magnitude representation – the approximate number system (ANS). I will delineate the neural implementation of the different steps leading to the extraction of numerical information from the environment. In particular, I will demonstrate how a simple map architecture can account for observed capacity limits when enumerating the objects in a set and maintaining their number in visual short-term memory. I will argue for a gradient of numerosity specificity along an occipital-to-parietal pathway for spatial (i.e. sets of objects) but not temporal (i.e. streams of events) enumeration. Contradicting previous reports (Bahrami et al., 2010), I will argue that numerical information is not subject to automatic unconscious processing. Finally, I will demonstrate how the ANS is linked with arithmetic fact retrieval and (impaired) mental arithmetic.
Invited by the Perception-Action team (Véronique)
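The approximate number system mentioned above is commonly modelled with noisy magnitude codes obeying Weber's law: each numerosity n is encoded as a Gaussian with standard deviation proportional to n, so discrimination depends on the ratio of the two numerosities rather than their absolute difference. A sketch of that standard model (the Weber fraction w = 0.2 and trial counts are illustrative, not the speaker's parameters):

```python
import numpy as np

def ans_discrimination(n1, n2, w=0.2, trials=100000, seed=1):
    """Probability of correctly judging which set is larger under the
    standard ANS model: numerosity n is encoded as Gaussian(n, w*n),
    where w is the Weber fraction."""
    rng = np.random.default_rng(seed)
    e1 = rng.normal(n1, w * n1, trials)
    e2 = rng.normal(n2, w * n2, trials)
    correct = (e1 > e2) if n1 > n2 else (e2 > e1)
    return correct.mean()

# accuracy depends on the ratio, not the absolute difference
print(ans_discrimination(8, 16))   # 2:1 ratio -> near ceiling
print(ans_discrimination(14, 16))  # 7:8 ratio -> much closer to chance
```

This ratio dependence is the behavioural signature usually taken as evidence for an analog magnitude representation.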
Mon 15 Feb 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Individual variation in early word learning
Children build their vocabulary at a remarkable speed. However, there is considerable variation in the rate at which they do this: some are slow learners, while others are exceptionally fast. Vocabulary size is one of the most important variables predicting later academic success (Duncan et al., 2007; Howlin, Goode, Hutton, & Rutter, 2004), with language measures obtained when children first go to school predicting later literacy (Dickinson & Tabors, 2001; Snow, Burns & Griffin, 1998). These abilities appear to be stable throughout life (Walker, Greenwood, Hart & Carta, 1994) but can already be detected in infancy. What I am interested in is what promotes early word learning. A recent overview suggests that differences in vocabulary measures in early childhood can be traced back to differences in infant performance in laboratory tasks (Cristia, Seidl, Junge, Soderstrom & Hagoort, 2014). Some of these infant tasks show stronger links to future language development than others. For instance, the ability to recognize word repetitions in continuous speech has repeatedly been shown to predict future language ability (Junge & Cutler, 2014). In this talk I will present (on-going) studies from several infant paradigms (speech segmentation tasks, early/novel word learning, vowel discrimination, and mispronunciation paradigms) that all tried to link infant behavioral or brain correlates to vocabulary development. I will also present my newest research on the beneficial effect that a familiar voice has on novel word learning.
Invited by the Speech team
Mon 01 Feb 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Multidimensional representation of odors in the human olfactory cortex
An important issue in the cognitive psychology of smell is to understand how the human brain represents odor objects. The representation of visual objects is far better understood; we know that physical and perceptual attributes of visual stimuli are represented in a distributed and hierarchical manner, but it is still unclear how chemical properties of odorant molecules and perceptual aspects of odors are represented along the human olfactory pathway. In this talk, I will present a series of psychophysical and fMRI studies that tested the hypothesis that the multidimensional representation of odors is the result of a neural mechanism that processes similarities of chemical information and perceptual information at various stages of the olfactory system, from piriform cortex to temporal areas.
Invited by the Perception-Action team (Marianne)
Mon 25 Jan 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Signal correlation and natural scene statistics in audiovisual perception
Abstract: The signals reaching our sensory organs are often correlated in the real world. Such correlations can be readily available to the senses, such as the temporal correlation of the different signals originating from a single physical event, or can be the result of the statistical co-occurrence of certain stimulus properties that can be learned over time. In my talk I will first present some recent findings on how correlation across signals modulates audiovisual perception. Next, I will propose a novel multi-purpose model for audiovisual integration that exploits temporal correlation across the signals to solve the multisensory correspondence problem, detect lag and synchrony across the senses, and perform optimal cue integration of synchronous and temporally correlated signals.
Invited by Andrei Gorea
Organized by Lola
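The idea that temporal correlation can solve the audiovisual correspondence problem can be illustrated with a toy lag detector based on cross-correlation (the signals, noise level, and sampling rate are invented; this is not the speaker's model):

```python
import numpy as np

def estimate_lag(sig_a, sig_b, fs):
    """Estimate the lag (in seconds) of sig_b relative to sig_a as the
    peak of their cross-correlation (after z-scoring both signals)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))      # lag in samples
    return lags[np.argmax(xcorr)] / fs

# a noisy audiovisual pair: the "visual" stream trails the "audio" by 100 ms
fs = 100  # Hz
rng = np.random.default_rng(2)
audio = rng.normal(size=1000)
visual = np.roll(audio, 10) + 0.3 * rng.normal(size=1000)
print(estimate_lag(audio, visual, fs))  # ≈ 0.1 s
```

A correlation peak well above the off-peak baseline signals that the two streams belong to the same event; the peak's position gives their relative lag.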
Mon 18 Jan 2016, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Neural codes for working memory and perception
Working memory, the ability to actively maintain information internally over brief intervals, is considered an essential component of most complex behaviours and is closely linked to general intelligence. Critically, working memory is strongly limited in its ability to hold multiple representations simultaneously, constraining the complexity of mental operations. A growing body of evidence indicates that this limit is best expressed not as a fixed number of memory slots, but in terms of a single, continuously divisible resource. I will argue that a physiological basis for this limited resource can be found in neural population coding, where errors associated with decreasing signal strength match the pattern of failures in human recall under increasing memory load. The same neural code can be shown to determine the limits of perceptual awareness, with variations in total spiking activity accounting for variation in our certainty about what we saw.
Mon 30 Nov 2015, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Revisiting visual unconscious processing
Delineating the limits of unconscious processing in order to understand the function of consciousness in visual processing has been a central research goal. I will first examine what measure is best suited to index conscious processing - and its absence. Recent research suggests that subjective measures can be as sensitive as objective forced-choice performance, leading to the conclusion that these direct measures both tap conscious perception. A notable exception is blindsight patients, who report no subjective awareness of stimuli that they can localize or discriminate well above chance level. However, it is important to reflect on what above-chance performance actually means: simply that some information - any information - guides the observers' guesses. I will suggest that this information is not necessarily visual and present experiments testing this conceptual framework in healthy participants. Then, I will address the question of whether conscious and unconscious processes lie on a continuum or reflect the operation of separate mechanisms. To do so, (a) I will present a paradigm that departs from the gold standard adopted in unconscious processing research and allows comparing conscious and unconscious processing rather than simply detecting the presence of unconscious processing, and (b) I will rely on effects of prior expectations to show that conscious perception and indirect effects of visual stimulation on motor action can be dissociated.
Invited by the Perception-Action team (Sylvie C.)
Mon 23 Nov 2015, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Embodiment in spatial cognition: How and why visual body representations might influence the perception of spaces around us
How we perceive and act in the world depends not only on information in the environment, but also on our own body capabilities. This talk will focus on how visual representations of bodies (one’s own as well as other’s) can influence perceived affordances—the decisions that we make about action possibilities. I will show that real, illusory, and virtual changes to the body affect action decisions. Furthermore, I will present recent results that support a social context for perceived affordances, showing that the size of another person’s body influences perceived affordances for oneself. Together, this work suggests that perceptual, cognitive, and social influences on bodily awareness underlie perceived affordances.
Invited by the Vision team (Mark)
Mon 16 Nov 2015, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Children create design features of language: Evidence from Nicaraguan Sign Language, Home Sign, and pantomiming by children
Why does language have the properties it has? The goal of my presentation is to provide evidence for the idea that some of the design features of language (Hockett, 1956) have emerged (partly) due to children's tendency to shape communication systems into "language-like" ones. I will discuss two design features related to linguistic forms. First, language segments and linearises information. For example, when the event of "a ball rolls down the hill" is verbally described, the holistic event is encoded in a linear sequence of six words, each of which segments out a particular aspect of the event. Second, language uses a set of discrete (as opposed to continuous) forms as building blocks for words. For example, voiced and voiceless consonants (e.g., /b/ vs. /p/) are two discrete categories along a physical continuum. I will present evidence that children spontaneously introduce these design features into their communication systems. The evidence comes from three sources: (1) Nicaraguan Sign Language, a new language created by a group of deaf children without any linguistic input from adults; (2) pantomiming (gesturing without speech) by hearing English-speaking children who were asked to gesturally express certain events; (3) "home signs", gestural communication systems developed by deaf children growing up in hearing families with virtually no linguistic input. I will conclude that the design features of language related to linguistic forms are universal because all languages are learned by children, who shape language in just that way.
Invited by the Speech team
Mon 12 Oct 2015, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Mapping phonetic perception by infants and adults using Cortical Auditory Evoked Potentials
Measurements of infant speech perception are very time-intensive, typically allowing only isolated phonetic contrasts to be investigated in a single experiment (e.g., beat-boot). This talk will discuss a more efficient technique that we have developed to map perception in larger multidimensional phonetic spaces (e.g., a set of 8 English vowels). Listeners hear random sequences of concatenated vowels or fricatives, with stimulus changes up to three times per second (i.e., about 3000 changes within a typical 15-minute infant testing session). We use EEG to measure the magnitude of the neural response to each stimulus change, and generate perceptual maps from these responses using multidimensional scaling (MDS). The results demonstrate that 4- to 5-month-old infants have MDS spaces that closely match acoustic differences, but that by 8-11 months responses selectively increase for nearby phonetic contrasts. In adults, the MDS spaces reflect a combination of auditory sensitivity and cross-language differences. This technique thus appears to be successful for creating multidimensional perceptual maps that assess the developing auditory system and emerging phonetic categories.
Invited by the Speech team
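The mapping step described above takes a matrix of pairwise response magnitudes and embeds it in a low-dimensional space with multidimensional scaling. A numpy-only sketch using classical (Torgerson) MDS, with invented dissimilarities standing in for the EEG response magnitudes (not the authors' data or analysis code):

```python
import numpy as np

def classical_mds(dissim, k=2):
    """Classical (Torgerson) MDS: embed points so that their pairwise
    Euclidean distances approximate the given dissimilarities."""
    n = dissim.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    b = -0.5 * j @ (dissim ** 2) @ j           # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:k]           # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# invented dissimilarities for 4 stimuli: two tight pairs, far apart
s = np.sqrt(26.0)
d = np.array([[0., 1., 5., s],
              [1., 0., s, 5.],
              [5., s, 0., 1.],
              [s, 5., 1., 0.]])
coords = classical_mds(d)
print(np.linalg.norm(coords[0] - coords[1]))  # ≈ 1.0 (within a pair)
print(np.linalg.norm(coords[0] - coords[2]))  # ≈ 5.0 (across pairs)
```

In a perceptual map built this way, phonetic categories appear as clusters whose separation reflects the neural discrimination responses rather than raw acoustics.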
Mon 28 Sep 2015 - LPP seminar
The Importance of Being Variable - causes and effects of the variability in brain activity
A typical task design in cognitive neuroscience or experimental psychology involves presenting the participant with the exact same assignment or sensory stimulus multiple times, and then obtaining an average response. This is necessary because the responses to such assignments or stimuli - be it reaction times, skin conductance, or brain responses measured with fMRI or MEG/EEG - are typically highly variable even within a single participant. This variability is largely caused by activity that is spontaneously generated by the cortex. Such spontaneous activity is also visible in the fMRI signal; since it is typically studied while the participant is at rest, it is known as resting-state activity. In my talk, I'll explore various aspects of this resting-state activity: its spatial structure in the visual cortex and its relationship to the underlying spontaneous neural activity. I'll end by showing some recent data that I collected in the awake macaque monkey, investigating how variability in brain activity leads to moment-to-moment variations in the perception of visual stimuli.
Invited by the AVoC team
Mon 06 Jul 2015, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
When is a neural representation a conscious one?
There is now considerable agreement on the fact that conscious visual processing requires recurrent or re-entrant interactions between widespread neural assemblies. Strong controversy exists, however, on the necessary extent of these interactions. Some argue that they must involve the fronto-parietal network, enabling a broadcasting of information to the whole brain. Others, however, claim that recurrent interactions localized to the visual cortex suffice for a conscious visual percept. Further broadcasting is then only required for attention, access and report, functions that go beyond the generation of conscious experiences per se. The difference has great consequences for understanding the neural basis of consciousness, the interpretation of patient data (e.g. in vegetative state), and for age-old and fundamental questions about consciousness, such as its presence in animals or machines, the issue of qualia, or its molecular basis. Recent data on the controversy, obtained using EEG, fMRI, TMS, and pharmacological interventions will be discussed.
Victor A.F. Lamme, Department of Psychology, University of Amsterdam, Amsterdam Brain and Cognition (ABC), Weesperplein 4, 1018 XA, Amsterdam, The Netherlands
Victor Lamme is a full professor of cognitive neuroscience at the University of Amsterdam. He has worked on visual perception, attention, and memory, only to converge on the topic he is truly obsessed with: consciousness. He studies consciousness using a variety of techniques, ranging from single unit electrophysiology in monkeys to EEG, fMRI, TMS, and pharmacological interventions in humans. His aim is to provide a new definition of consciousness, moving away from our introspective intuition of it. He received an advanced ERC grant (2.3M) for this work, and was president of the Association for the Scientific Study of Consciousness (ASSC) in 2012. He also is a writer of popular science books, and owns a neuromarketing company.
Invited by the AVoC team (Claire)
Mon 29 Jun 2015 - LPP seminar
Phonological knowledge in speech perception
Does the perception of spoken language draw necessarily on abstract phonological knowledge, or is speech processing, in contrast, driven by analogical comparison and episodic memory, with abstract knowledge about language phonology serving rather as a resource for metalinguistic decision-making? This talk will describe evidence from talker adaptation and recognition, and from phonetic learning and discrimination, all suggesting that abstract phonological knowledge informs decisions about speech input even in situations where memory and analogy also play an important role.
Invited by the Speech team
Thu 25 Jun 2015, 9h15, R 229 (2ème étage), Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - GDR Neurosciences Cognitives du Développement
9h15 - Welcome
9h30-9h55 - Elena Koulaguina - LPP Parole - The acquisition of Subject-Verb number agreement with liaison: From surface patterns to abstract knowledge
9h55-10h20 - Sho Tsuji - LSCP - The role of on-screen social cues in 12-month-olds' novel word learning
10h20-11h - Jessica Dubois - INSERM U 992 - Exploring the early organization and maturation of linguistic pathways in the human infant brain
11h-11h25 - Andrea Helo - LPP Parole - Development of scene perception: Scan strategies and incongruence detection
Coffee break
11h50-12h15 - Fabrice Damon - LPNC - The preference for attractive faces is driven by the distance to the prototype in human and macaque infants
12h15-12h40 - Alex Cristia - LSCP - What is child-directed speech good for? A quantitative multi-level approach
12h40-13h05 - Lyn Tieu - LSCP - Why little semanticists confuse 'ou' and 'et': Implicatures of disjunction in preschoolers
Lunch - SALLE SABATIER A - 2ème étage - opposite conference room R229
14h30-15h30 - Bettina Pause - University of Düsseldorf, Germany - The chemical nature of emotions
15h30-15h55 - Sara Dominguez - LECD - The roots of turn taking in the neonatal period
15h55-16h20 - Jean-Yves Baudouin - CSGA - Multisensory processing of faces in infancy
Coffee break
16h45-17h10 - Cassandra Potier Watkins - LPP Perception Action - Number Understanding with Game Boards
17h10-17h35 - Laurianne Cabrera - LPP Parole - How do newborns use temporal cues in speech to perceive phonetic contrasts?
17h35-18h00 - Claire Kabdebon - INSERM U 992 - Neural correlates of abstract representations in 5 month-old infants
Reception - Salle SABATIER A - 2nd floor - opposite conference room R229
GDR teams: LSCP (A. Christophe), U. 992 (G. Dehaene), LPP (S. Chokron & T. Nazzi), LPNC (O. Pascalis), CSGA (B. Schaal), PSY-NCA (Fiacre team, J. Rivière), LECD (M. Gratier).
Director: Marianne Barbu-Roth, Centre Biomédical des Saints-Pères - 45, rue des Saints-Pères - 75270 Paris Cedex 06 - email: email@example.com
Mon 22 June 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
The Active Pupil: Pupil size in attention, working memory, and active vision
When the eyes are exposed to light, the pupils constrict. The pupillary light response (PLR) is traditionally believed to be purely reflexive and not susceptible to cognitive influences. In the first part of this talk, I will present recent studies that show that this reflexive view is incomplete. I will focus in particular on how the PLR is modulated by visual attention, inhibition of return, and working memory. These studies show that the PLR is neither fully reflexive, nor under complete voluntary control, but is a stereotyped response that is modulated by visual attention and related phenomena. In the second part of this talk, I will focus on pupil size as an on-line measure of goal-driven behavior. In conclusion, I will argue that pupil-size changes are an integral aspect of active vision, and have many similarities with saccadic and smooth-pursuit eye movements.
Mathôt, S., Siebold, A., Donk, M., & Vitu, F. (2015). Large pupils predict goal-driven eye movements. Journal of Experimental Psychology: General.
Mathôt, S., van der Linden, L., Grainger, J., & Vitu, F. (2015). The pupillary light response reflects eye-movement preparation. Journal of Experimental Psychology: Human Perception and Performance, 41(1), 28-35.
Mathôt, S., Dalmaijer, E., Grainger, J., & Van der Stigchel, S. (2014). The pupillary light response reflects exogenous attention and inhibition of return. Journal of Vision, 14(14), 7.
Mathôt, S., Van der Linden, L., Grainger, J., & Vitu, F. (2013). The pupillary response to light reflects the focus of covert visual attention. PLoS ONE, 8(10), e78168.
Invited by the Vision team
Mon 15 June 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Perceiving the Social in Early Development: A Cognitive Neuroscience Approach
Neuroscience and developmental psychology are often seen as having different aims and objectives. I will outline why, amongst all neuroscience techniques, EEG-derived methods are best placed for research with infant participants. Using examples from areas including biological motion processing, action understanding, joint attention and eye gaze, my talk will focus on work that shows why neuroscience techniques can substantially enrich our understanding of early social and perceptual development. A particular focus will be on the role of action production within action perception. I will also look at semantic processing with actions and how this may be related to language. I will finish by showing how new analysis methods allow for reduced attrition rates, more representative samples, and the assessment of individual differences. These factors mean that neuroscience methods will be increasingly at the forefront of many new discoveries within developmental science over the next decade.
Invited by the Perception-Action team (Marianne)
Thu 11 June 2015, 16h-18h - R229, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - Labex EFL
• Dimension-based vs. domain-based indices of dominance
• Subtraction-based vs. ratio-based indices of dominance
• Global assessments of dominance: Bilingual Language Profile vs. Bilingual Dominance Scale
• Discrete assessments of dominance: A Quick Test of Cognitive Speed vs. Boston Naming Test
• Language dominance and handedness: comparing constructs, assessments, and findings
• Ambidexterity and mixed handedness vs. balanced bilingualism
• Putting dominance indices to use: to identify 'balanced bilinguals'; as participant factors in group designs and regression
• Future directions in dominance assessment
Thu 4 June 2015, 16h-18h - R229, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - Labex EFL
• Dissociating and associating dominance and nativelikeness
• Balanced bilingualism does not imply nativelikeness in both languages
• (Non-)nativelikeness and the nature of bilingualism
• (Non-)nativelikeness and the critical period hypothesis
• Native/non-native comparisons: whys and why nots
• Dominance in processing French syntax by natives and non-natives
Sat 30 May 2015, 9h-17h - Amphithéâtre Claude Bernard, 3rd floor left, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - Continuing education
GROWING UP WITH TWO LANGUAGES
Free admission with registration, within the limit of available places
Information and registration: www.scfc.parisdescartes.fr
Tel.: 06 95 37 07 27
email: firstname.lastname@example.org
Programme for 30 May 2015
9h00: Welcome
9h15-10h30: What is the bilingual child capable of?
Language and cognitive abilities of the early bilingual child
Presented by Ranka Bijeljac-Babic and Frédéric Isel, Université Paris Descartes
Invited speaker: Prof. Laura Bosch, University of Barcelona
10h30-11h00: Coffee break
11h00-11h45: Growing up speaking several languages
by Barbara Abdelilah-Bauer, sociolinguist, president of CAFÉ Bilingue
11h45-12h30: Bilingualism in allophone children: learning French as a second language
by Corinne Lambin, former inspector of the Éducation nationale in Bobigny, member of the Conseil de l'Éducation nationale for Deux-Sèvres
12h30-14h30: Lunch break
14h30-17h00: "You have the floor"
Round table - specialists answer the questions collected during the day
Moderator: Jackie Pialhoux, psychologist with the Éducation nationale (Val d'Oise)
Participants: Barbara Abdelilah-Bauer, Ranka Bijeljac-Babic, Laura Bosch, Frédéric Isel, Corinne Lambin
Fri 29 May 2015, 8h30-17h - Amphithéâtre Lavoisier, 3rd floor left, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - Continuing education
WHAT IS THE BILINGUAL CHILD CAPABLE OF?
Free admission with registration, within the limit of available places
Information and registration: www.scfc.parisdescartes.fr
Tel.: 06 95 37 07 27
email: email@example.com
Programme for 29 May 2015
8h30: Welcome of participants
9h00: Opening of the day
9h15-10h00: Cross-linguistic interaction in bilingual prosodic and segmental acquisition
by Margaret Winkler-Kehoe, Faculty of Psychology and Educational Sciences, University of Geneva
10h00-10h45: Early language differentiation: the contribution of rhythm, distributional and audiovisual information
by Laura Bosch, University of Barcelona
10h45-11h15: Coffee break
11h15-12h00: Perception and production of prosody in bilingual children: phenomena of interference
by Ranka Bijeljac-Babic, Laboratoire Psychologie de la Perception, Paris Descartes and CNRS
12h00-13h30: Lunch break
13h30-14h15: Lexical-semantic organization in bilingual children: ERP evidence
by Pia Rämä, Laboratoire Psychologie de la Perception, Paris Descartes and CNRS
14h15-15h00: Prosodic bootstrapping: how bilinguals can use prosody to learn about the word order of their native language
by Judit Gervain, Laboratoire Psychologie de la Perception, Paris Descartes and CNRS
15h00-15h45: Linguistic capacities of early and late bilinguals: an overview of issues and evidence
by David Birdsong, University of Texas, guest of the LABEX EFL
16h00-17h00: Summary and discussion: perspectives on early bilingualism research
Moderator: Thierry Nazzi, CNRS research director, Laboratoire Psychologie de la Perception, Paris Descartes and CNRS
Mon 18 May 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Invited by the AVoC team (Claire)
Mon 11 May 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Invited by the Vision team.
Mon 4 May 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Various pieces of experimental evidence using both psychophysical and physiological (EEG) measurements have led us (and others) to conclude that at least certain aspects of visual perception and attention are intrinsically rhythmic. For example, in a variety of perceptual and attentional tasks, the trial-by-trial outcome was found to depend on the precise phase of pre-stimulus EEG oscillations in specific frequency bands (between 7 and 15 Hz). This suggests that there are "good" and "bad" phases for perception and attention; in other words, perception and attention proceed as a succession of cycles. These cycles are normally invisible, but in specific situations they can be directly experienced as an illusory flicker superimposed on the static scene. The brain oscillations that drive these perceptual cycles are not strictly spontaneous, but can also be modulated by visual stimulation. Therefore, by manipulating the structure of the stimulation sequence (e.g. white noise), it is possible to control the instantaneous phase of the relevant perceptual rhythm, and thereby ensure that a given target will be perceived (if presented at the proper phase) or will go unnoticed (at the opposite phase). Better still, by taking into account individual differences in oscillatory responses, we can even tailor specific stimulus sequences with an embedded target that can only be perceived by one observer, but not another - a form of "neuro-encryption".
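The phase dependence described in this abstract can be illustrated with a toy simulation: detection probability is modulated by the pre-stimulus phase of a ~10 Hz oscillation, and the "good" phase is recovered as the circular mean of phases on detected trials. The effect size, trial counts, and recovery method below are illustrative assumptions, not the speaker's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 5000
phi_good = np.pi / 4                     # hypothetical "good" phase

# Pre-stimulus oscillatory phase on each trial, uniform over the cycle
phase = rng.uniform(0.0, 2.0 * np.pi, n_trials)

# Detection probability rises and falls with phase (illustrative depth 0.3)
p_detect = 0.5 + 0.3 * np.cos(phase - phi_good)
detected = rng.random(n_trials) < p_detect

# Recover the preferred phase: circular mean of phases on detected trials
z = np.exp(1j * phase[detected]).mean()
phi_hat = np.angle(z)                    # estimate of the "good" phase
depth_hat = np.abs(z)                    # strength of the phasic modulation
```

With enough trials, `phi_hat` converges on the simulated "good" phase, which is the logic behind sorting trial outcomes by pre-stimulus phase.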
Invited by the Vision team
Mon 27 April 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Shape Understanding: On the Perception of Growth, Form and Process
Whenever we look at an object, we can effortlessly infer many of its physical and functional properties from its shape and our previous experience with other objects. We can judge whether it is flexible or fragile; stable or likely to tumble; what might have happened to it in the past (e.g. a crushed can or bitten apple); and can even imagine how other members of the same object class might look. In this talk, I will suggest that these high-level inferences are evidence of sophisticated visual and cognitive processes that derive behaviourally significant information about objects from their 3D shape—a process I call 'Shape Understanding'. Despite its obvious importance to everyday life, surprisingly little is known about how the brain uses shape to infer other properties of objects, including their origins or typical behaviour. I will use demos and a smattering of experimental evidence to argue that when we view novel objects, the brain uses perceptual organization mechanisms to infer a primitive 'generative model' describing the processes that gave the shape its key characteristics. I will argue that such models support many tasks related to shape and material perception, including: (a) identifying physical properties such as viscosity, elasticity or ductility; (b) predicting the object or material's future states as it moves and interacts with other things; (c) judging similarity between different shapes; and (d) predicting what other members of the same category might look like ('plausible variants'), even when you've only seen one or a few exemplars.
Mon 13 April 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Lexical segmentation in word learning and recognition
In adult word recognition, lexical segmentation is guided principally by lexical knowledge, with sublexical cues playing a lesser role. In word learning, however, the relative contribution of these two information sources is reversed since lexical knowledge is initially absent and only increases gradually over time. This research tracks French lexical segmentation in both adult word learning and recognition with the aim of understanding the evolution of this process across time.
Invited by the Speech team
Mon 30 March 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Mechanisms of conscious and unconscious perception
David Carmel, University of Edinburgh
The age-old interest in consciousness and subjective experience has evolved in recent years into a major effort to discern the cognitive and neural mechanisms involved in perceptual awareness. What determines the content of our consciousness at any given time? And can meaningful perceptual processing occur for sensory stimuli that do not reach awareness? In this talk I will describe recent work in which I used interocular rivalry and suppression to address these related issues. The first part of the talk will focus on a series of transcranial magnetic stimulation (TMS) studies establishing the importance and specific roles of non-visual, high-level regions of parietal cortex in selecting sensory stimuli for conscious representation. The second part will turn to an ongoing series of psychophysiological studies investigating the processing of emotional stimuli in the absence of awareness; this work demonstrates that conscious and unconscious processing differ qualitatively, not only in how they develop over time but also in the specific physiological responses each type of processing evokes.
Invited by the AVOC team (Claire)
Mon 16 March 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Population averaging in the distorted map of the Superior Colliculus: A key mechanism that accounts for where humans move their eyes.
Saccades, the very brief movements of our eyes that occur roughly every 250 ms, have been a matter of investigation for over a century, but the mechanisms and processes that determine where and when the eyes move have still not been unambiguously determined. Models of eye-movement control in naturalistic tasks and settings predict a large range of eye-movement patterns known to be at work during text reading or scene viewing, but their underlying saccade-programming mechanisms are quite unrealistic. On the other hand, models of saccade programming that implement well-established neural mechanisms, notably in the Superior Colliculus (SC), not only lack benchmark behavioral data that would validate them, but are limited in that they account only for saccades in simple visual displays. In my talk, I will show that human saccade metrics in simple saccade-targeting tasks map onto and complement the basic predictions of SC models quite nicely, as well as previous neural data from monkeys. In particular, I will demonstrate that population averaging in the distorted map of the SC accounts for several well-known saccadic phenomena. I will then argue, based on reading data, that this fundamental mechanism may play a key role in determining where the eyes move in naturalistic tasks and settings.
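Population averaging on a distorted (logarithmically magnified) motor map can be illustrated with a toy computation. The sketch below assumes the standard logarithmic retina-to-SC mapping with constants A = 3 deg and Bu = 1.4 mm (values from Ottes, Van Gisbergen & Eggermont, 1986) and equal-weight averaging of two target-related activity peaks; both choices are illustrative simplifications, not the speaker's actual model.

```python
import numpy as np

# Illustrative constants for the logarithmic retina-to-SC mapping
# (A in deg, Bu in mm; values as in Ottes et al., 1986)
A, BU = 3.0, 1.4

def ecc_to_sc(ecc_deg):
    """Map retinal eccentricity (deg) to collicular distance (mm)."""
    return BU * np.log(1.0 + ecc_deg / A)

def sc_to_ecc(u_mm):
    """Inverse mapping: collicular distance (mm) back to eccentricity (deg)."""
    return A * (np.exp(u_mm / BU) - 1.0)

# Two equally salient targets on the same axis, at 2 and 8 deg
u_near, u_far = ecc_to_sc(2.0), ecc_to_sc(8.0)

# Population averaging: the saccade is driven by the mean locus of the
# two activity peaks on the (distorted) collicular map
landing_deg = sc_to_ecc((u_near + u_far) / 2.0)
# Because the map over-represents small eccentricities, the saccade lands
# at ~4.4 deg, closer to the near target than the linear midpoint (5 deg)
```

This is the kind of "global effect" prediction that distinguishes averaging on a distorted map from averaging in linear visual space.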
Invited by the Vision team
Mon 12 January 2015, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
When social influences build vocal communication and brain processing: the example of songbirds
Songbirds, like humans, require a social model in order to learn to communicate with conspecifics using song. In most cases, learning occurs at young ages and consists in imitating a social model whose characteristics differ according to the species (father, unrelated adult...). Few species learn from tape recordings alone, and direct contact and interaction are generally necessary to induce learning. Some species show sensitive periods for learning, while others learn throughout their lives, which often relates to social organization: mobile social species may show more flexibility. However, social inputs are so crucial that they may induce delayed or unusual learning in species with sensitive periods. This means that the brain processes involved, well characterized in songbirds as neuroethological models, show a plasticity influenced by social factors. The European starling, a highly social species, is our focal species for investigating these questions, and its study provides answers about the modalities of social learning of song and about how social influences modulate brain development. We will share our questions on the nature of social interactions, the modalities involved, the importance of social bonding and the role of social attention.
Invited by the Perception-Action team (Arlette)
Mon 15 December 2014 - LPP seminar
Alpha oscillations, alertness and attention
Oscillations near 10 Hz are the single most salient property in population activity of the human brain. Accordingly, they have been called the “alpha” rhythm. Traditionally these oscillations have been taken to indicate cortical idling but recent research has assigned them a more active role. What exactly is this role? Locally, alpha oscillations result in a rhythmic inhibition of neural activity but how this relates to active processing and behavioral benefit is still far from clear.
Some contributions to the understanding of alpha oscillations have emerged from multimodal approaches, and in particular from simultaneous recordings of ongoing brain activity by EEG and fMRI. This avenue has proven interesting because observing which neuroanatomical structures show activity changes that correlate with alpha oscillations can also inform hypotheses about the function of alpha oscillations. At least three so-called resting-state networks seem to correlate in their activity with fluctuations in different features of alpha oscillations. Based on such findings we have proposed the "windshield wiper" model, according to which the functional role of alpha oscillations is to cyclically clear accumulated cortical information. As a consequence, alpha activity can bias cortical processing in favor of strong and recent signals. We postulate that this is a suitable mechanism for a low-level attentional function, that of tonic alertness. Moreover, we have shown a direct functional consequence of rhythmic inhibition on cortical processing, namely that responses evoked by brief sensory stimuli are modulated by the phase of the alpha cycle during which stimulation occurs. Strong evoked responses despite high ongoing alpha activity hence presumably require sustained and/or salient sensory input. Conversely, whenever priors permit the deployment of selective attention, this leads to the disabling of alpha activity in specific channels that are likely to convey the attended information. Whether alpha activity facilitates or impedes behavioral performance will hence depend on the neural sites where it manifests and the cognitive context within which it occurs, thus reconciling apparent discrepancies in the literature.
Invited by the AVoC team (Florian)
Mon 24 November 2014, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Are linguistic representations needed to organize the input? Testing different predictions with non-human animals
A recurrent question in language acquisition is the extent to which domain general processes are involved in the extraction of regularities from the input. Here, I will present two cases in which opposite predictions are made regarding this issue. First, I will review recent research on the grouping principles described by the Iambic-Trochaic Law, and how experience might be needed to trigger some aspects of them. Second, I will discuss how linguistic representations constrain the extraction of different regularities from consonants and vowels. Together, these studies serve as test cases of how language-specific predictions can be tested using animal models.
Invited by the Speech team
Mon 3 November 2014, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Deeply social learning in infancy
In the first part of my talk I would like to introduce the recent development of the so-called head touch task and, relatedly, the theory of 'rational imitation'. I would like to highlight that research has confirmed that early imitative learning is a selective, non-automatic, and inference-guided process (Gergely et al., 2002).
My aim is to defend the natural pedagogy view of selective imitation against the recent challenge of low-level interpretations (Paulus et al., 2011, 2013; Beisert et al., 2012) on empirical grounds. Three novel, modified versions of the head touch task (close to two of Paulus et al.'s procedures) generate contrasting predictions from the perspective of the motor resonance account and the natural pedagogy account. The results of the modified paradigms provide support for the inferential, relevance- and rationality-sensitive account of selective imitation.
In the second part of the talk I plan to introduce a novel approach to how young infants understand others' preferential choices and perspectives in order to interpret their actions. The standard interpretation in the field is that infants understand preferential choice as a dispositional state of the agent. It is possible, however, that these social situations trigger the acquisition of more general, not person-specific, knowledge. In a series of studies we showed that (1) infants do not encode the perspectives of other agents as person-specific sources of knowledge and (2) they learn about the object, rather than the agent's disposition towards that object. We propose that early theory-of-mind processes lack the binding of belief content to the belief holder. However, this limitation may in fact serve an important function, allowing infants to acquire information through the perspectives of others in the form of universal access to general information.
Invited by the Perception-Action team (Jacqueline & Eszter)
Tue 21 October 2014, 16h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP / LSCP meeting
Mon 13 October 2014, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Surprise-induced learning enhancement in infants and children
Many theories propose that new knowledge is acquired through the accumulation of experiences with the environment. For example, knowledge of objects has been conceived of as the product of learning from actions on objects or observations of objects’ behavior. Here, I consider a reversal of this relationship: learning as a product of prior knowledge. Generally speaking, a learner with limited cognitive resources should allocate these resources “smartly,” directing learning toward objects, events, or relationships about which little is already known. One way to identify such opportunities is to focus learning on situations in which prior knowledge yielded the wrong prediction about the world. In this case, learning should be heightened (relative to situations in which the learner made a correct prediction). Here I review recent experiments with infants and preschool-aged children in support of this hypothesis. Our results show that early learning is enhanced when knowledge of basic principles of object behavior is violated, relative to nearly perceptually identical situations in which no such violations occur. For example, infants who saw a ball appear to pass through a solid barrier subsequently learned about the ball more effectively than infants who saw a nearly identical event in which the barrier stopped the ball. This learning enhancement is specific to the entities that participated in the surprising event, but supports learning about a range of properties. Taken together, this research suggests that children make predictions about entities in the world around them, evaluate these predictions against the evidence, and, when their predictions are wrong, direct their learning resources to revising their prior knowledge. In this sense, core knowledge guides learning.
Invited by the Perception-Action team (Véronique). Organisation: Véronique
Mon 6 October 2014, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP seminar
Crossmodal compensation and brain plasticity after deafness and cochlear implantation.
Research with humans and animals has shown that the loss of a given sensory modality may lead to compensatory mechanisms and increased reliance on the remaining modalities. In the case of deafness, the acquisition of visual skills is one of the sensory substitution strategies developed by patients to recover social communication. This crossmodal compensation is accompanied by important functional reorganizations, expressed in the colonization of the deprived cortical areas by the spared modalities. Here, we took the opportunity to study the crossmodal reorganization that can occur in profoundly deaf adult patients with a cochlear implant (CI). The CI is a neuroprosthesis that can efficiently allow postlingually deaf adults to recover auditory function, especially speech intelligibility, through long-term adaptive processes that build coherent percepts from the coarse information delivered by the implant. Because the success of rehabilitation relies on functional plasticity in the auditory system, it is of crucial importance to understand the reorganization of the cortical network involved in speech comprehension that occurs during deafness and during the progressive recovery.
Thus, access to CI patients offers a unique opportunity to analyze the cortical reorganization induced by a long period of deafness and its "mirror image" when auditory function is rescued by the implant.
Invited by the Perception-Action team (Sylvie C)
Mon 6 October 2014, 16h-19h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - LPP / Berkeley workshop
Opening Seminar: Balancing visual prediction and visual stability
The visual world is dynamic, but object identities do not randomly change from moment to moment; objects often change location smoothly, but they rarely pop into or out of existence. This presents two major challenges for the visual system. On one hand, because visual processing is sluggish, there is a need to predict changing object locations. On the other hand, because visual input is noisy and discontinuous, there is a need to represent object identities as continuous and stable. In three related lines of research in my lab, we have investigated how the visual system balances these competing goals of prediction and stability. First, we have used fMRI to isolate a mechanism that gates and filters information about distractor objects, which allows selective representation of attended objects. Second, using psychophysics, TMS, and fMRI, we have found that the visual system assigns predictive locations to dynamic objects, thus anticipating smoothly changing visual input. In a third line of research, we have found evidence for a mechanism that links the perception of an object's identity and properties from moment to moment, thus promoting perceptual continuity. These results show that an object's present appearance is captured by what was perceived over the last several seconds. The spatiotemporal tuning of this serial dependence reveals the continuity field (CF), within which perceptual judgments are dictated by previous percepts—making different objects appear the same. Together, our results reveal how the visual system delicately balances the need to optimize sensitivity to image changes (prediction) with the desire to represent the temporal continuity of objects—the likelihood that objects perceived at this moment tend to exist in subsequent moments.
Mon 22 September 2014, 11h - LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris - INC Seminar, LPP seminar
Attentional modulation of neuronal response gain and variability in the alert monkey. New optogenetic techniques.
For the past forty-five years, research on the neural mechanisms underlying attention has focused primarily on modulation of neurons' mean firing rates. Recently, attention has also been found to reduce neuronal response variability (Mitchell, Sundberg & Reynolds, 2007; Mitchell, Sundberg & Reynolds, 2009; Cohen & Maunsell, 2009). We estimate that 80% of the benefit of attention is attributable to this newly discovered form of attentional modulation, with the remaining 20% attributable to changes in mean firing rate. I will describe efforts underway in my laboratory to understand these two forms of attentional modulation and will propose a unified explanation for both phenomena, incorporating primate optogenetics, neurophysiology and computational modeling.
Mon 23 June 2014 - LPP seminar
Fine-grained lexical access: sub-phonemic acoustic-phonetic details guide segmentation
How do listeners accomplish the task of word segmentation given that, in spoken language, there are no clear and obvious cues associated with word beginnings and ends? A given stretch of speech can be consistent with multiple lexical hypotheses, and these hypotheses can begin at different points in the input. In the French sequence l'abricot [labʁiko] 'the apricot', segmental information could be compatible with several competing hypotheses, such as l'abri [labʁi] 'the shelter' or la brique [labʁik] 'the brick'. Listeners are routinely confronted with such transient segmentation ambiguities, and in some cases the ambiguity is total, as in Il m'a donné la fiche / l'affiche [ilmadonelafiʃ] 'He gave me the sheet / the poster'. Yet the word recognition system is efficient: listeners are rarely misled and generally segment correctly, retrieving the correct meaning.
In this talk, I will present a series of experiments examining the role of sub-phonemic acoustic-phonetic cues in speech segmentation and lexical access. We examined acoustic differences between phonemically identical sequences (e.g., l'affiche 'the poster' and la fiche 'the sheet', both [lafiʃ]) and listeners' discrimination and identification of these sequences. A series of off-line experiments (ABX and 2AFC paradigms) demonstrated that listeners can discriminate between and identify such utterances. Moreover, manipulating the acoustic cues had an impact on the perceived segmentation: e.g., increasing the F0 of the /a/ of la fiche increased the percentage of vowel-initial (affiche) responses. A series of on-line experiments (cross-modal identity and fragment priming) suggested that listeners retrieve the correct segmentation on-line and modulate the activation of targets and competitors in favour of the correct candidate, without ruling out alternative interpretations. These results provide further evidence for fine-grained lexical access and suggest that the recognition system exploits sub-phonemic information to guide segmentation towards word beginnings. The implications of these findings for models of word recognition will be discussed.
Invited by the Speech team
Mon 16 June 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The centroid paradigm: A new method for analyzing feature-based attention
Charlie Chubb, Department of Cognitive Sciences and Institute for Mathematical Behavioral Sciences, University of California, Irvine
When a viewer attends to “the reds” in a painting while ignoring other hues, he/she gives heightened priority to information from red regions. This ability to select visual information based on its content is called “feature-based attention.” We can conceptualize feature-based attention in terms of attention filters. An attention filter is a process, initiated by a participant in the context of a task requiring feature-based attention, that operates broadly across space to modulate the relative effectiveness with which different features in the retinal input influence performance. I will start by describing a new empirical method for measuring attention filters. This method uses a task in which the participant strives to mouse-click the centroid of a briefly flashed cloud composed of items of different types (e.g., bars of different colors and orientations), weighting some types of items more strongly than others. The target weights for the different item types in the centroid task are varied across different (separately blocked) attention conditions. We use simple linear regression to estimate the attention filter achieved by the participant (the weights exerted on the participant’s responses by different item types) in a given attention condition. This method is remarkably powerful; an attention filter can be derived in 200 trials. It has yielded several surprising results with important consequences for our understanding of low-level visual processing: (1) participants are very good at attention-filtering based on color but terrible at attention-filtering based on orientation; (2) for colors equated in threshold discriminability, attention-filtering based on differences in hue is much more effective than attention-filtering based on either saturation or luminance.
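The regression step can be illustrated with a minimal simulation (a sketch, not code from the talk; the weights, item counts, and noise level are invented for the demo): a weighted-centroid observer is simulated for 200 trials, and simple least squares recovers the attention filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attention filter: the weights an observer gives to three item
# types (e.g. three colours). Invented for the demo, not data from the talk.
true_w = np.array([0.6, 0.3, 0.1])
n_trials, n_types, items_per_type = 200, 3, 8

X = np.empty((n_trials, n_types))   # per-type centroids (one coordinate, for brevity)
y = np.empty(n_trials)              # simulated mouse-click position

for t in range(n_trials):
    # Random item positions on each trial; the per-type centroid is the regressor.
    centroids = rng.uniform(0.0, 1.0, (n_types, items_per_type)).mean(axis=1)
    X[t] = centroids
    # Simulated response: weighted centroid plus a little motor noise.
    y[t] = true_w @ centroids + rng.normal(0.0, 0.01)

# Simple linear regression recovers the attention filter from the 200 trials.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat /= w_hat.sum()                # normalise to unit sum for comparison
print(np.round(w_hat, 2))
```

With 200 trials, the estimated weights land close to the simulated filter, consistent with the claim that an attention filter can be derived in 200 trials.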
Invited by the Vision team (Andrei)
Fri 6 June 2014, 11:00. Salle des Conférences (R229), 2nd floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar
What does the "arrow of time" stand for?
One hundred and thirty years after the work of Ludwig Boltzmann on the interpretation of the irreversibility of physical phenomena, and one century after Einstein's formulation of Special Relativity, we are still not sure what we mean when we talk of “time” or the “arrow of time”. We shall show that one source of this difficulty is our tendency to confuse, at least verbally, time with becoming, that is, the course of time with the arrow of time, two concepts that the formalisms of modern physics are careful to distinguish.
Mon 5 May 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Towards a rational constructivist approach to cognitive development
The study of cognitive development has often been framed in terms of the nativist/empiricist debate. Here I present a new approach to cognitive development: rational constructivism. I will argue that learners take into account both prior knowledge and biases (learned or unlearned) and statistical information in the input; that prior knowledge and statistical information are combined in a rational manner (captured by Bayesian probabilistic models); and that there exists a set of domain-general learning mechanisms that give rise to domain-specific knowledge. Furthermore, learners actively engage in gathering data from their environment. I will present evidence supporting the idea that early learning is rational, statistical, and inferential, and that infants and young children are rational, constructivist learners.
Invited by the Perception-Action team (Véronique)
Tue 29 April 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Spatio-temporal mechanisms of pre-saccadic receptive field remapping in monkey visual area V4.
Sujaya Neupane, Chris Pack and Daniel Guitton
Institut Neurologique de Montréal, Université McGill, Montréal, Québec, Canada
Visual neurons have spatial receptive fields (RFs) that are sensitive to visual stimuli in a circumscribed area of visual space relative to the fovea. Because foveate animals execute frequent saccadic eye movements, this position information in retinal coordinates changes with each saccade, even when the visual world is stationary. Interestingly, visual RFs in many brain regions have been found to shift position even before the saccade occurs. Indeed, neurons respond transiently to stimuli flashed prior to the saccade at the location where their RF will be after the saccade (the future RF, FF); a shift parallel to the impending saccade vector. This has been called predictive RF remapping and has been studied in many brain areas. Here we report on this phenomenon in the higher-order visual area V4, known to be strongly modulated by attention and object salience. We recorded V4 neurons in alert monkeys using 10x10 electrode arrays, with which we could measure spikes and local field potentials (LFPs) at many locations simultaneously. RESULTS: 1) The RFs, defined on the basis of spike discharges, showed for most neurons a classical pre-saccadic shift parallel to the saccade vector. 2) In a minority of neurons, this shift was followed by a subsequent shift towards the saccade target, as in Tolias et al. (2001). 3) By comparison, the LFPs on all viable electrodes (~90) showed RFs that shifted parallel to the saccade and then towards the saccade target, as in the latter category of neurons. 4) For neurons with preferred features, the remapping signal did not encode that preference. 5) When the flashed probe was replaced with the static probe used by Tolias et al. (2001), both the spike and LFP RFs shrank and shifted towards the saccade target. The manifestation of predictive remapping is therefore paradigm-dependent.
6) Finally, we studied gamma activity at electrodes recording from the RF, FF, and saccade-target locations. We found a shifting, enhanced coherence time-locked to saccade end: first between electrodes encoding the RF and FF locations, then between electrodes encoding the RF and saccade-target locations. CONCLUSION: These remapping observations are compatible with shifts of attentional loci.
Invited by the Vision team (Patrick)
Mon 7 April 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Probabilistic models of sensorimotor control and decision making
The effortless ease with which we move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. While computers can now beat grandmasters at chess, no computer can yet control a robot to manipulate a chess piece with the dexterity of a six-year-old child. I will review our work on how humans learn to make skilled movements, covering probabilistic models of learning, including Bayesian and structural learning. I will also review our recent work showing the intimate interactions between decision making and sensorimotor control. This includes the relation between vacillation and changes of mind in decision making, and the bidirectional flow of information between elements of decision formation, such as accumulated evidence, and motor processes, such as reflex gains. Taken together, these studies show that probabilistic models play a fundamental role in human sensorimotor control.
Mon 24 March 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Perceptual reorganization of tone: Recent findings from the Utrecht baby lab
In this talk I will present some recent studies from our baby lab into the perception of lexical tones by non-tone-language-learning infants, based on ongoing work with Liquan Liu. The studies address the development of tone/pitch perception during the first year of life. We focus on three aspects: (a) the discrimination of tone, (b) sensitivity to tone in word learning, and (c) perceptual differences between monolingual and bilingual infants.
The first study (Liu & Kager, 2012, in preparation) addresses the development of tone discrimination in monolingual and bilingual infants. Dutch infants in five age groups (spanning 5-18 months) were tested on their ability to discriminate a tonal contrast of Mandarin Chinese and a contracted version of that contrast. Monolingual and bilingual infants were able to discriminate the tonal contrasts at 5-6 months, while their tonal sensitivity deteriorated around 9 months, in accordance with earlier studies. However, the sensitivity recovered between 14-18 months in monolingual infants and between 11-12 months in bilingual infants. Our findings reveal a U-shaped pattern in non-tone-learning infants' tone perception, with a bilingual sensitivity advantage. I will discuss some interpretations of these results relating to the acquisition of the native intonation system, and to a (delayed) development of phonological categories accompanied by maintained acoustic sensitivity in bilinguals.
In the second study (Liu & Kager, 2013, in preparation), Dutch 14-15-month-old and 17-18-month-old monolingual infants were tested, via an adjusted associative word-learning paradigm, on their ability to use pitch information linguistically. The stimuli contained the same natural Mandarin tonal contrast as the first study. Results showed that 14-15-month-old infants were able to set up a tone contrast for the purposes of word learning, whereas this ability was lost at 17-18 months.
Linking the two studies, we found that acoustic sensitivity to lexical tone contrasts remains in infants at 17-18 months even though the linguistic function is lost. This shows that the U-shaped developmental pattern leads to a recovery of acoustic, but not phonological, sensitivity. In line with previous literature (Stager & Werker, 1997), we confirmed that a discrimination task heightens acoustic sensitivity whereas a word-learning task does not. The novel finding is that this effect extends to a non-native contrast.
Invited by the Speech team
Fri 21 March 2014, 11:00. Salle Lavoisier B (3rd floor), Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar
Brain-wide and cell-type specific synchronization at the service of attention
I will show that natural viewing induces very pronounced gamma-band synchronization in visual cortex. This early visual gamma synchronizes to higher areas only if it conveys attended stimuli. Attentional top-down control is mediated via beta-band synchronization. Top-down beta enhances bottom-up gamma. Across 28 pairs of simultaneously recorded visual areas, gamma mediates bottom-up and beta top-down influences. Finally, I will show how pyramidal cells and interneurons are differentially synchronized and affected by attention and by stimulus repetition.
Mon 17 March 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The role of time in making perceptual decisions
In models of perceptual decision making within the classical signal-processing framework (e.g. integration-to-bound), time is used solely to accumulate evidence. In the recently proposed probabilistic, sampling-based frameworks, time is necessary to collect samples from subjective posterior distributions for the decision, regardless of whether sensory evidence is still entering the system. Which of these two roles does time assume, and how do those roles relate to each other during everyday perceptual decisions? In my talk, I will first give a brief overview of the evidence-integration and probabilistic-sampling frameworks, then present the results of an analytical derivation showing the theoretical progression of the error and the subjective uncertainty over time for these two models of decision making. I will demonstrate that the correlation between subjects' error and their subjective uncertainty evolves very differently under sampling and under evidence integration. Under sampling, after a brief initial period, the correlation always increases monotonically to a non-zero asymptote, with this increase continuing long after the error itself has reached its asymptote. In contrast, integration-to-bound with additive, non-negligible behavioral noise always shows a decreasing correlation. Next, I will present two sets of experiments exploiting these antagonistic predictions. In the first decision-making study, in which subjects had to perform time-limited orientation matching and report their uncertainty about their decisions, the results confirmed both predictions of the sampling-based model: the correlation converged to a non-zero asymptote long after no additional evidence was provided, and correlations increased with time. The second experiment used the classical decision-making task concerning the direction of random-dot motion displays at various coherence levels.
In each individual data set, we found a marked decrease in the error-uncertainty correlation in the first part of the trial, indicating evidence integration, and a significant increase in the second part, indicating probabilistic sampling. Moreover, the transition between these segments shifted in accordance with the change in signal coherence. These findings support a novel interpretation of the role of time in decision making: under typical conditions, time is mostly used for assessing probabilistically what we really know, not for gathering more information. In addition, during any decision making, probabilistic sampling works in parallel with evidence integration, with integration taking the lead early and probabilistic sampling determining the later part of the process.
Invited by the Vision team (Andrei)
Mon 3 March 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Economy of vision
Sergei Gepshtein, Salk Institute for Biological Studies, La Jolla, CA, USA
A basic tenet of sensory biology is that sensory systems improve their performance as they adapt to the environment. This expectation has been contradicted in studies of visual perception. Exposure to some visual stimuli may increase or decrease one's sensitivity to the same stimuli, in a manner that has defied explanation in terms of neural fatigue, lateral interactions, or optimal inference. I will describe how considerations of neuronal parsimony offer a strikingly simple solution to this problem, and how phenomena of visual adaptation support a new theoretical framework for the analysis of visual function.
From the economic standpoint, visual systems solve the problem of allocation of limited resources. They must encode a great number of stimuli using a limited pool of specialized cells. Physiological studies suggest that more useful stimuli receive a larger neuronal representation. We developed a theory of optimal neuronal allocation according to cell utility, starting from the basic idea that cells characterized by receptive fields of different spatial and temporal extents have different utilities for basic visual tasks; for example, they are differently useful for estimating the location and frequency content of stimuli. We find that the optimal allocation of cells has several remarkable properties. First, the characteristics of visual performance expected from the optimal allocation across all potential stimuli are similar to well-known measured characteristics, such as the spatiotemporal contrast sensitivity function (Kelly, 1979). Second, changes in stimulus statistics are predicted to shift the entire sensitivity function and to produce a global pattern of gains and losses of sensitivity. The previously puzzling effects of visual adaptation are then explained as samples from this larger pattern of sensitivity change.
We confirmed these predictions in psychophysical studies of motion adaptation, by subjecting humans to varying distributions of moving stimuli. Notably, we find that the shift of sensitivity predicted by the theory and observed behaviorally can arise by means of feedforward plasticity alone, requiring no coupling or other coordination between cells and no explicit representation of stimulus statistics.
Invitation and organization: Mark Wexler
Mon 24 February 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Bottlenecks and belief states: what the midbrain tells us about attention
The bottleneck for attention is believed to comprise a network of areas in the cerebral cortex, with frontal and parietal cortex regulating limited resources available in the sensory areas of cortex. However, subcortical structures like the superior colliculus also play a role in attention, and in this talk I will explain how our investigation of the superior colliculus has led us to a very different view of the attention bottleneck. I will present evidence that the superior colliculus plays a crucial role in the control of spatial attention, but surprisingly, the mechanisms used by the superior colliculus appear to be independent of the well-known signatures of spatial attention in visual cortex. These recent results demonstrate that processes beyond the well-known correlates in extrastriate cortex play a major role in visual spatial attention. Furthermore, based on clues from neuroanatomy and disorders of attention, I speculate that these processes involve circuits through the basal ganglia and that the attention bottleneck arises from the need to establish a belief state about the current behavioral context.
Invited by the Vision team (Patrick)
Organization: Véronique
Mon 17 February 2014, 11:00-12:30. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Motion: Low-level motion analysis and velocity integration
We will take a new look at some old problems. Since the mid-1980s, three basic strategies have been proposed for low-level motion perception in humans: spatio-temporal correlation, motion energy, and spatio-temporal gradient analysis. It has proved so difficult to separate them that a common view holds the three strategies to be essentially the same. We will reconsider this view through the Anstis brightening illusion. After adapting to a brightening uniform patch, a static spatial gradient placed in the adapted field appears to move. Our new observation is that if the spatial gradient is curved to introduce higher spatial derivatives, the apparent motion is slower. We argue that this can be explained by the gradient model but not by the energy model. Adelson and Movshon introduced the idea that estimates of velocity in separate direction channels could be integrated to recover pattern motion through the intersection-of-constraints algorithm. The alternatives against which this algorithm was compared, in investigating the perceived direction of motion, were the vector average and the vector sum. Neither of these strategies, however, provides a valid computation of pattern speed: the vector sum simply increases with the number of samples, and the vector average yields one half of the global speed. Recently we introduced the harmonic vector average (Johnston & Scarfe, Frontiers in Computational Neuroscience, 7, 146, 2013), which gives the true global motion for samples that are unbiased relative to the global motion direction. This new approach to velocity integration will be outlined for closed curves and Gabor arrays, and extended to the analysis of 2D pattern motion.
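The contrast between the vector average and the harmonic vector average can be sketched numerically (a toy simulation, not from the talk; the global velocity, the ±60° sampling range of component directions, and the sample size are arbitrary assumptions): each 1D component signals only the normal component of the global motion, the vector average of those components underestimates the global speed, and inverting each vector's length before averaging recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical global (pattern) velocity: speed 2, moving rightward.
V = np.array([2.0, 0.0])
speed = np.linalg.norm(V)
direction = np.arctan2(V[1], V[0])

# Each 1D component (e.g. a Gabor element) signals only the normal component
# of V: speed |V|*cos(theta) in its own direction theta, with directions
# sampled symmetrically (i.e. unbiased) about the global direction.
theta = rng.uniform(-np.pi / 3, np.pi / 3, 500)
u = np.stack([np.cos(direction + theta), np.sin(direction + theta)], axis=1)
s = speed * np.cos(theta)            # normal speeds of the components
components = s[:, None] * u          # component velocity vectors

# Vector average: right direction, but it underestimates the global speed
# (exactly half of it when directions span the full +/-90 deg range).
va = components.mean(axis=0)

# Harmonic vector average: invert each vector's length, average the inverted
# vectors, then invert the length of the mean.
inv = u / s[:, None]                 # same directions, lengths 1/s
m = inv.mean(axis=0)
hva = m / (m @ m)

print("vector average:", np.round(va, 2))
print("harmonic vector average:", np.round(hva, 2))
```

The harmonic vector average lands on the true global velocity, while the plain vector average keeps the direction but shrinks the speed, in line with the argument above.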
Invited by the Vision team (Andrei)
Mon 10 February 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Relations between perceptual interpretation and selective attention
Helmholtz Institute and Division of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
Visual perception relies on the interaction between, on the one hand, information that enters through the eyes and, on the other, knowledge and intentions that are represented in the brain. One aspect of this interaction is what Hermann von Helmholtz called 'unconscious inference'. The idea of unconscious inference is that the brain matches retinal signals with knowledge about the world, so that perception reflects the brain's most plausible hypothesis as to what real-world scene may have given rise to these retinal signals. A second aspect of the interaction between retinal input and stored knowledge is selective attention: the process by which the brain singles out a subset of the visual input, often causing other input to remain unperceived. Here I will discuss a series of experiments that investigate the process of 'unconscious inference' by presenting observers with ambiguous visual input. This is input that could plausibly have arisen from any of several real-world sources, providing an exceptional window onto the visual system as it tries to find an interpretation of the input and, indeed, switches between interpretations over time even though the input stays the same. I will focus on two lines of research that both suggest that, in the context of ambiguous-stimulus perception, perceptual interpretation and selective attention may bear a close association. One research line investigates the fMRI BOLD correlates of switches in the perception of ambiguous stimuli, and the extent to which these fMRI BOLD correlates resemble those associated with attention shifts. The second research line investigates parallels between the ways ambiguous-stimulus perception and attention deployment depend on trial history (i.e. priming). In sum, by making use of ambiguous stimuli, this work aims to uncover the relation between the interpretative aspect of perception on the one hand, and the selective aspect on the other.
Invited by the Vision team (Mark)
Mon 3 February 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
When social influences build vocal communication and brain processing: the example of songbirds
Songbirds, like humans, require a social model in order to learn to communicate with conspecifics using song. In most cases, learning occurs at younger stages and consists in imitating a social model whose characteristics differ according to the species (father, unrelated adult...). Few species learn from tape recordings alone; direct contact and interactions are generally necessary to induce learning. Some species show sensitive periods of learning, while others learn throughout their lives, which often relates to social organization: mobile social species may show more flexibility. However, social inputs are so crucial that they may induce delays of learning, and unusual learning, in species with sensitive periods. This means that the brain processes involved, well known in songbirds as neuroethological models, show a plasticity influenced by social factors. The European starling, a highly social species, is our focal species for investigating these questions, and its study provides some answers on the modalities of social learning of song and on how social influences modulate brain development. We will share our questions on the nature of social interactions, the modalities involved, the importance of social bonding, and the role of social attention.
Invited by the Perception-Action team (Arlette)
Mon 27 January 2014, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Professor Emerita of Psychology
Aix-Marseille Université and CNRS, UMR 7290 Cognitive Psychology, Marseille
Intellectual disability is a disorder that includes deficits in both cognition and adaptive behavior. A prevalence of 10-20 per 1000 has been reported, but lower and higher estimates can also be found depending on the populations surveyed and the methods used (nationality and age of the population, presence or absence of a national registry, cross-sectional data on children in mainstream public schools, data from special-education schools, etc.). Moreover, inconsistency in the data collected may be largely attributable to revisions of the classification systems. The main causes of intellectual disability, genetic and environmental, are presented. It is frequently assumed that in approximately half of intellectual disability cases there is no known cause, but screening for genetic defects is increasingly requested in cases of moderate to severe intellectual disability. Environmental factors are numerous (intrauterine and neonatal insults, severe malnutrition, acute and chronic psychological stress, physical abuse, exposure to family violence, institutional deprivation, etc.). The etiology of intellectual disability is complex, and gene-environment correlations and/or interactions have been illustrated. Some genetic disorders linked to intellectual disability (e.g., phenylketonuria, Fragile X, Trisomy 21) are selected to present both the research methodologies and the types of findings, before illustrating the contribution of cross-syndrome comparisons. The conclusion discusses the contribution of the pathological model to the understanding of the genetic mechanisms contributing to cognitive differences within the normal range of variation.
Invited by the Perception-Action team (Sylvie T)
Mon 13 January 2014, 11:00-12:30. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Invited by the Speech team
Fri 6 December 2013, 11:00. Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar
Mon 2 December 2013, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Invited by the Vision team (Thérèse)
Mon 25 November 2013, 11:00-12:30. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Cochlear Implants and Tonal Languages
Poor representation of pitch information in cochlear implant (CI) devices hinders pitch perception and affects perception of lexical tones in cochlear implant users who speak tonal languages. While accurate representation of lexical tones may not be essential for tonal-language sentence recognition in quiet, it is particularly important for sentence recognition in noise. In the present study, 110 Mandarin-speaking, prelingually deafened CI subjects (age: 2.5-16 years) and 125 typically developing, normal-hearing subjects (age: 3-10 years) were recruited from Beijing and Shanghai, China. Lexical tone perception in quiet and in noise (at +12, +6, 0, and -6 dB signal-to-noise ratios) was measured using a computerized tone-contrast test. Tone production was judged by native Mandarin-speaking adult listeners as well as analyzed acoustically and with an artificial neural network. A general linear model analysis was performed to determine the factors that accounted for performance variability. Prelingually deafened children with CIs scored from chance level to nearly perfect performance on the lexical tone perception task. Moderate amounts of noise had very small effects on tone perception in normal-hearing children but tremendous effects on tone perception in children with CIs. Brand of CI device (i.e., Advanced Bionics, Cochlear, or MedEl) did not show any significant effect on tone perception performance. The degree of differentiation of tones produced by the CI group was significantly lower than that of the control group, as revealed by acoustic analysis. Tone production performance assessed by the neural network was highly correlated with that evaluated by human listeners (r = 0.94). There was a moderate correlation between overall tone perception in quiet and tone production performance across CI subjects (r = 0.56). Duration of implant use and age at implantation jointly explained approximately 30% of the variance in tone perception performance.
Age at implantation was the only significant predictor for tone production performance in children with CIs. Thus, children with CIs demonstrate suboptimal tone perception in quiet but very poor tone perception in noise. Tone production performance in pediatric CI users is dependent on accurate perception. Early implantation predicts a better outcome in lexical tone perception and production.
Fri 22 November 2013, 11:00. Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar
Mon 4 November 2013, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Interhemispheric interactions and Blindsight
Outline of the Talk:
• What is Blindsight?
• Interhemispheric summation in Blindsight: Evidence from the Redundant Signal Effect
• Which hemisphere controls response in Blindsight? Evidence from the Poffenberger Paradigm
• Callosal transmission in hemianopic patients: Evidence from Event Related Potentials
• Clues to the neural substrates of Blindsight Types 1 and 2
Invited by the Perception-Action team (Sylvie C)
Mon 28 October 2013, 11:00. LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The adaptive brain: Learning to see in altered visual worlds
Experience with the environment dramatically influences how we act, think, and perceive; understanding the neural plasticity that supports such change is a long-standing goal of cognitive neuroscience. In the visual system, neural function alters dramatically as people adapt to changes in their visual world, such as increases or decreases in brightness or clarity. Most past work on visual adaptation, however, has altered visual input only over the short term, typically a few minutes. I will present a series of experiments that investigate adaptation over a much longer term. My laboratory recently developed “altered reality” technology that allows subjects to live in, and adapt to, experimentally manipulated visual worlds for hours and days at a time. Subjects viewed the world through virtual-reality goggles that display video acquired from a head-mounted camera, processed in real time on a laptop computer. In order to characterize long-term visual plasticity, we used image manipulations that targeted early visual cortex and measured adaptation with perceptual tests. Effects of adaptation grew stronger and longer-lasting as the adapting duration extended from minutes to hours to days. The long-term adaptation was behaviorally distinguishable from shorter-term adaptation, suggesting that it is controlled by novel neural mechanisms. These controllers may allow vision to perform near-optimally in an ever-changing world.
Invited by the Vision team (Pascal)
ven25Oct201311hCentre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC Seminar
lun21Oct201311hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Yuka Sasaki:
Enhanced Spontaneous Oscillations in the Supplementary Motor Area Are Associated with Sleep-Dependent Offline Learning of Finger-Tapping Motor-Sequence Task
Abstract: Sleep is beneficial for various types of learning and memory, including a finger-tapping motor-sequence task. However, methodological issues hinder clarification of the crucial cortical regions for sleep-dependent consolidation in motor-sequence learning. Here, to investigate the core cortical region for sleep-dependent consolidation of finger-tapping motor-sequence learning, we measured spontaneous cortical oscillations by magnetoencephalography together with polysomnography while human subjects were asleep, and source-localized the origins of the oscillations using individual anatomical brain information from MRI. First, we confirmed that performance of the task at a retest session after sleep significantly increased compared with performance at the training session before sleep. Second, spontaneous delta and fast-sigma oscillations significantly increased in the supplementary motor area (SMA) during post-training compared with pre-training sleep, showing a significant and high correlation with the performance increase. Third, the increased spontaneous oscillations in the SMA that correlated with performance improvement were specific to slow-wave sleep. We also found that correlations of oscillations between the SMA and prefrontal regions and between the SMA and parietal regions tended to decrease after training. These results suggest that a core brain region for sleep-dependent consolidation of finger-tapping motor-sequence learning resides in the SMA contralateral to the trained hand and is mediated by spontaneous delta and fast-sigma oscillations, especially during slow-wave sleep. The consolidation may arise along with possible reorganization of a larger-scale cortical network that involves the SMA and cortical regions outside the motor regions, including prefrontal and parietal regions.
Takeo Watanabe:
Roles of attention and reward in perceptual learning
Perceptual learning (PL) is defined as long-term performance improvement on a perceptual task as a result of perceptual experience. We first found that PL occurs for task-irrelevant and subthreshold features and that pairing task-irrelevant features with rewards is the key to forming task-irrelevant PL (TIPL) (Watanabe, Nanez & Sasaki, 2001, Nature; Watanabe et al, 2002, Nature Neuroscience; Seitz & Watanabe, 2003, Nature; Seitz, Kim & Watanabe, 2009, Neuron; Shibata et al, 2012, Science). These results suggest that PL occurs as a result of interactions between reinforcement and bottom-up stimulus signals (Seitz & Watanabe, 2005, TICS). On the other hand, fMRI results indicate that lateral prefrontal cortex fails to detect, and thus to suppress, subthreshold task-irrelevant signals. This leads to the paradoxical effect that a signal that is below, but close to, one’s discrimination threshold ends up being stronger than suprathreshold signals (Tsushima, Sasaki & Watanabe, 2006, Science). We confirmed this mechanism by showing that task-irrelevant learning occurs only when a presented feature is below and close to the threshold (Tsushima et al, 2009, Current Biol). From all of these results, we conclude that attention and reward play important but different roles in PL.
Invited by the Vision team (Patrick)
lun14Oct201311hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Simulating human fetus development
Recent developmental studies have shown the importance of interaction with the uterine environment for fetus development. We constructed a simulation model of a human fetus with musculoskeletal body, uterus and brain models. In my talk, I’ll show how body, environment and nervous system contribute to the motor, nervous and cognitive development.
Invited by Kevin
ven11Oct201311hSalle des Thèses, Bâtiment JACOB, 5ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC Seminar
ven27Sep201311h30Salle des Thèses, Bâtiment JACOB, 5ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Studying Large-Scale Brain Networks: Electrical Stimulation & Neural-Event-Triggered fMRI
The brain is "the" example of an adaptive, complex system. It is characterized by ultra-high structural complexity and massive connectivity, both of which change and evolve in response to experience. Information related to sensors and effectors is processed in both a parallel and a hierarchical fashion. The connectivity between different hierarchical levels is bidirectional, and its effectiveness is continuously controlled by specific associational and neuromodulatory centers. In the study of such systems, one major problem is the adequate definition of an elementary operational unit (often called an "agent"), because any such module can be a complex system in its own right and may be recursively decomposed into other sets of units. A second difficulty arises from the synergistic organization of complex systems and of the brain in particular. Synergy here refers to the fact that the behavior of an integral, aggregate, whole system cannot be trivially reduced to, or predicted from, the components themselves. Localizing and comprehending the neural mechanisms underlying our cognitive capacities demands the combination of multimodal methodologies, i.e. the concurrent study of components and networks; one way of doing this is to combine invasive methods, which afford direct access to the brain’s electrical activity at the microcircuit level, with global imaging technologies such as magnetic resonance imaging (MRI). In my talk, I'll discuss two such methodologies: Direct Electrical Stimulation and fMRI (DES-fMRI) and Neural-Event-Triggered fMRI (NET-fMRI).
ven20Sep201311h30Salle des Thèses, Bâtiment JACOB, 5ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Circuit mechanisms regulating adult neural stem cells and neurogenesis.
Adult neurogenesis arises from neural stem cells within specialized niches. Neuronal activity and experience, presumably acting on this local niche, regulate multiple stages of adult neurogenesis, from neural stem cell activation, neural progenitor proliferation to new neuron maturation, synaptic integration and survival. I will present our recent studies using slice electrophysiology, immunohistology, genetic lineage tracing and manipulation and optogenetics to identify a novel niche mechanism involving parvalbumin-expressing interneurons that couples local circuit activity to diametric regulation of two critical early sequential phases of adult hippocampal neurogenesis.
ven13Sep201311h30Salle des Thèses, Bâtiment JACOB, 5ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Synaptic and Circuit Mechanisms for Computation of Location
Neurons in the entorhinal cortex generate grid-like representations of location. These spatial representations co-exist with theta and gamma frequency network oscillations. The synaptic and circuit mechanisms for computing location and for simultaneous generation of network oscillations are unclear. I will talk about experimental and modelling work that, in addressing this issue, provides evidence that spatial representations and network oscillations originate from shared cellular mechanisms.
lun09Sep201311hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
How do neurotransmitters help decide what we see?
In neuroscience, one pervading mystery is how the brain is able to generate an “internal” perceptual experience from the available “external” sensory information. Ambiguous stimuli, like binocular rivalry and the Necker cube, offer a unique means to investigate this process experimentally because observers generally experience changes between multiple perceptual states without corresponding changes in the stimulus. I will present results obtained using a variety of methods including pharmacology (the serotonergic hallucinogen psilocybin), pupillometry and basic psychophysics. The first half of the talk will focus on perceptual rivalry in the visual, auditory and tactile domains. The second half of the talk will move on to some recent studies using pupil dilation to investigate the role of the noradrenergic system in simple motor and cognitive decision events. Finally I will briefly present very recent work using these characteristic pupil responses to successfully communicate with non-responsive patients with Locked-in Syndrome and one individual in a minimally conscious state. Together, this series of results suggests that the cycle of perceptual switching characteristic of rivalry may reflect a generalized mechanism, common to perception, cognition and action, that allows the brain to decide between multiple valid alternatives without becoming stuck on a non-optimal decision. Furthermore, it may be possible to gain access to another person’s decisions by observing the dilation of their pupil.
More info on her work: http://psych.unimelb.edu.au/people/olivia-carter
ven21Juin201311h30Salle R229, 2ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Molecular and Neural Architecture of Instinctive Social Behavior in the Mouse
Our group studies the molecular architecture of neuronal circuits underlying sex- and species-specific social behaviors in the mouse. We have taken advantage of the molecular and genetic accessibility of the mouse olfactory system to investigate the neuronal logic underlying odorant- and pheromone-mediated signals. Our goal is to understand the basic principles of neuronal circuit function that enable an animal to identify a predator, a potential mate, or a conspecific intruder and to initiate appropriate behavioral responses. We will present recent molecular and electrophysiological data that attempt to uncover the molecular and cellular mechanisms underlying instinctive neural and behavioral responses in the mouse.
lun17Juin201311h-12h30Salle de Réunion H432, 4e étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
The intelligence of swarms
Centre de Recherches sur la Cognition Animale, CNRS, UMR 5169, Université Paul Sabatier, 118, route de Narbonne, 31062 Toulouse, France
The amazing ability of social insects to solve their everyday-life problems, also known as “swarm intelligence”, has received considerable attention over the past twenty years. We will describe the mechanisms underlying these complex collective behaviors, in particular the concepts of stigmergy and self-organization. We will emphasize the role of interactions and the importance of bifurcations that appear in the collective output of the colony when some of the system’s parameters change. We will then focus on the ability of social insects to build impressive nest architectures. Not only is their characteristic scale typically much larger than the size of individual insects, but some of these architectures can also be highly complex. One fundamental question is: how do the simple actions of individual insects add up to create such sophisticated architectures? How do insects interact with each other to coordinate their building actions? To investigate these issues, we focused on the early stages of nest construction in the garden ant Lasius niger. This experimental paradigm was used to disentangle the coordinating mechanisms at work and characterize the individual behaviors involved (transport and assemblage of construction material). We will present a 3D model implementing the mechanisms detected at the individual level, and we will show that it correctly explains the construction dynamics and the spatial patterns observed under various conditions. This model reveals that complex helicoidal structures connecting nearby chambers emerge from a constant remodeling process of the nest architecture.
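The positive feedback at the heart of stigmergic building can be caricatured in a few lines of code: the more material a site already holds, the more likely the next deposit lands there, so early random fluctuations are amplified into a few discrete pillars. This is only an illustrative sketch, not the 3D model described above; the site count, reinforcement exponent and every other parameter are invented.

```python
import random

def deposit_simulation(sites=20, pellets=500, k=1.0, alpha=2.0, seed=0):
    """Toy stigmergy: each pellet is dropped at a site with probability
    proportional to (k + material already there) ** alpha, so fluctuations
    are reinforced and a few 'pillars' emerge from a uniform start."""
    rng = random.Random(seed)
    material = [0] * sites
    for _ in range(pellets):
        weights = [(k + m) ** alpha for m in material]
        site = rng.choices(range(sites), weights=weights)[0]
        material[site] += 1
    return material

material = deposit_simulation()
print(sorted(material, reverse=True)[:3])  # a few sites dominate the rest
```

With `alpha > 1` the reinforcement is superlinear, which is what turns noise into discrete structure; with `alpha = 0` the same code produces a flat, structureless deposit.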
Research Director at the CNRS, with doctorates in neuroscience and in ethology, Guy Theraulaz leads the team “Dynamiques Complexes et Réseaux d’Interactions dans les Sociétés Animales” at the Centre de Recherches sur la Cognition Animale, Université Paul Sabatier, Toulouse. His research focuses on the behavioural and cognitive mechanisms governing the collective behaviours of animal societies and the phenomena of collective intelligence. For the past 20 years he has also worked on designing new techniques inspired by the behaviour of social insects for computer science and collective robotics. In 1996 he received the CNRS Bronze Medal for his work on swarm intelligence. He is the author of around a hundred scientific articles, and co-editor and co-author of five books.
Invited by the Perception-Action team (Marianne)
lun10Juin201311h-12h30Salle de conférences R229, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Restoring spatial listening abilities in unilateral and bilateral deafness
Binaural hearing underpins the ability to localise sounds and is beneficial when listening to speech in the presence of other spatially separate sounds such as competing talkers or noise. These spatial listening abilities are severely compromised or absent in individuals with unilateral or bilateral deafness. Consequently, these individuals report difficulties with listening and communicating in many everyday listening situations, and an associated decrease in quality of life. The current standard of care for adults in the UK with unilateral severe-to-profound deafness is a Contra-lateral Routing of Signals (CROS) hearing aid. A CROS aid improves the audibility of signals on the impaired side of the head by diverting acoustic information arriving at the impaired ear to the non-impaired ear. The current standard of care for adults with bilateral severe-to-profound deafness is unilateral cochlear implantation, which restores input to one ear only. Thus, while the current standard of care for these individuals improves access to sound, it does not convey the binaural cues which underpin spatial listening abilities. This talk describes studies evaluating alternative interventions for unilateral and bilateral severe-to-profound deafness which aim to restore access to binaural cues.
Invited by Hearing group
ven07Juin201311h30Salle R229, 2ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
lun27Mai201311h30Salle de conférences R229, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Critical Periods Re-Examined: Tuning the Brain to See Detail, Motion, and Faces
We have been taking advantage of a natural experiment: children treated for dense cataracts that blocked all patterned vision to the retina until the cataracts were removed surgically and the eyes fit with compensatory contact lenses. I will describe the general principles that have emerged from comparing the effects of bilateral and unilateral cataracts and from studying the consequences of deprivation that began at different ages. Together, the results suggest different critical periods for damaging different aspects of vision and different principles for low level (e.g., acuity) and higher level vision (e.g., global motion). Nevertheless, some potential for rehabilitation remains even in adulthood.
lun13Mai201311h-12h30Salle de conférences R229, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Perceptual organisation of speech: Effects of lexicality on auditory streaming
Biologically salient sounds are usually heard in the presence of competing stimuli, such as when several people talk at once. We describe a novel approach to determining the mechanisms by which the auditory system parses this mixture, so that fragments of sound that arise from the same source are grouped into the same “auditory stream”. In two experiments, listeners heard sequences of repeated spoken syllables (“stem”, “sten”, “stome” or “stone”). They reported that, after several presentations, the initial “s” sounds formed a separate stream; the percept then fluctuated between the streamed and veridical state in a bistable manner. In addition to collating these verbal transformations, we obtained an objective measure of streaming by requiring listeners to detect a silent gap, occasionally inserted between the initial “s” and the rest of the syllable. Performance was better for syllables that were transformed from a word to a non-word when the initial “s” was streamed off, compared to acoustically matched syllables that were transformed by streaming from a non-word to a word. Our results show that streaming is driven not only by acoustic regularities in the stimulus, but also by higher-level cognitive processes involved in the processing of language.
ven19Avr201311h30Salle R229, 2ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
The neural basis of high-level cognitive processes: Focus on hemispheric asymmetries
I will present a model according to which complementary executive processes are dissociable functionally, temporally and anatomically, along the left-right axis of prefrontal cortex and related networks. Multi-modal evidence in favor of this model will be provided.
Invité par Judit Gervain
lun08Avr201311hSalle des thèses (5ème étage du bâtiment Jacob), Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Tracking objects, illusions and guesses with the eyes: human eye movements as a dynamical probe of visual processing and predictive mechanisms
Primates’ voluntary eye movements (saccades and smooth pursuit) have been studied for decades as an ideal model of the dynamic coupling between perception and action for the cognitive neurosciences. In particular, smooth pursuit eye movements have been massively used to define a detailed model of the numerous (though relatively simple) sensorimotor processes that transform the target visual motion into a well matched eye velocity. However, until recently, only very simplified visual stimuli have been taken into account in most experimental investigations.
Here, I will present the results of a set of experiments focusing on human smooth pursuit eye movements recorded under somewhat more interesting conditions, namely with ambiguous motion information or in the presence of extra-retinal predictive information. First, I will show how, in the framework of a well-known visual illusion (the aperture problem) and of Bayesian inference, smooth pursuit recordings unveil the dynamics of the internal processing of ambiguous motion information and the choice of a solution to such ambiguity. Second, under particular conditions, eye movements can convey important information about the internal representation of uncertainty and prior knowledge. I will present recent data and modelling on anticipatory smooth eye movements in experiments where uncertainty (hence internal expectancy) is manipulated parametrically. These data suggest that anticipatory smooth pursuit is highly sensitive to the statistics of past sensorimotor events.
In summary, I will propose that voluntary eye movements (smooth pursuit, but also visual saccades and in some respect fixational saccades as well) represent an explicit, fast readout of internal visual information processing, selection and decision processes.
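A Bayesian treatment of ambiguous motion of the kind evoked above is often illustrated with a “slow-speed prior”: the broader the sensory likelihood (ambiguous or low-contrast stimuli), the more the estimate is pulled toward slow speeds. The following is a minimal Gaussian sketch with invented numbers, not the speaker's actual model.

```python
def map_speed(v_obs, sigma_like, sigma_prior):
    """MAP speed estimate when a Gaussian likelihood centred on the
    measured speed (width sigma_like) is combined with a zero-mean
    Gaussian prior favouring slow speeds (width sigma_prior)."""
    gain = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return gain * v_obs

# The noisier the sensory evidence, the stronger the pull toward slow speeds:
print(map_speed(10.0, sigma_like=1.0, sigma_prior=2.0))  # 8.0
print(map_speed(10.0, sigma_like=4.0, sigma_prior=2.0))  # 2.0
```

The gain term follows from the product of two Gaussians: the posterior mean is the likelihood mean shrunk toward the prior mean (zero here) in proportion to their relative precisions.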
Invited by the Vision team (Thérèse)
Organization: Véronique
lun08Avr201314h-15h30Salle des thèses (5ème étage du bâtiment Jacob), Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Effects of age at cochlear implantation on speech perception and word-learning skills: Evidence for multiple sensitive periods of language development
Although accumulating evidence suggests that cochlear implantation before 12 months of age leads to better language outcomes, very little is known about how very early auditory experience affects language outcomes. I will present research from my lab that has investigated the effects of very early implantation on speech perception and word-learning skills. I will also discuss the implications of our findings for our understanding of sensitive periods of language development.
ven05Avr201311h30Salle des Thèses, 5ème étage Bâtiment JACOB, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Sensorimotor Processing of Language
Despite the fact that Alvin Liberman's famous ‘motor theory of speech perception’ dates back almost fifty years, a strong debate still survives on the possibility that speech understanding does not rely on sensory processing alone. In my presentation I will provide evidence that Liberman was substantially right and that a motor framework for language processing does exist, not only for speech but also for language syntax. To this purpose I will present very recent TMS data, patient studies, and computational models, all converging in the same direction.
Invited by Jacqueline Fagard
lun25Mar201311h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Invited by the Perception-Action team
ven22Mar201311h30Salle R229, 2ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Understanding how the brain learns and generates complex sequential behaviors, with a focus on the songbird as a model system
The songbird is an excellent model system for understanding how the brain generates and learns complex sequential behaviors. Song acquisition is thought to proceed by reinforcement learning, and involves a basal ganglia (BG)-thalamocortical loop in which the cortical component drives exploratory vocalizations. I will present a model of song learning in which the BG evaluates an efference copy of variability commands to detect and reinforce variations that lead to better song outcomes. This model has broad potential implications for mammalian BG function.
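The idea of a basal ganglia circuit evaluating an efference copy of variability commands can be sketched as reward-modulated perturbation learning: compare each perturbed trial against an unperturbed baseline and nudge the motor parameter along perturbations that improved the outcome. Below is a toy one-parameter caricature, not the presented model; the target, learning rate and perturbation size are all invented.

```python
import random

def learn_pitch(target=5.0, w=0.0, eta=0.1, trials=300, seed=1):
    """Reward-modulated perturbation learning: exploratory variability eps
    is added to the motor command, and an efference copy of eps is
    credited or blamed according to whether the perturbed trial beat the
    unperturbed baseline."""
    rng = random.Random(seed)
    for _ in range(trials):
        eps = rng.choice([-0.5, 0.5])           # exploratory variability
        baseline = -(w - target) ** 2           # reward without exploration
        reward = -(w + eps - target) ** 2       # reward with exploration
        w += eta * eps * (reward - baseline)    # reinforce helpful variations
    return w

print(round(learn_pitch(), 2))  # converges close to the target of 5.0
```

The key property is that the update direction is recovered purely from the correlation between the variability signal and the change in reward; no explicit gradient of the motor system is needed.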
lun18Mar201311h-12h30Salle des thèses, Bâtiment Jacob, 5e etage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Flexibility and plasticity of motor behaviours
When a motor system is subjected to lasting (intrinsic or extrinsic) perturbations, the CNS deploys long-term regulation mechanisms, defining the adaptive control of movement, which maintain motor performance. The work carried out in our team along this line of research has relied mainly on a behavioural approach to motor control, grounded in the theory of internal models. Its originality is that we have addressed the question of sensorimotor adaptation through several motor systems and in tasks as varied as manual pointing, ocular saccades and locomotion, the aim being to demonstrate that the processes uncovered are independent of the effector systems involved. The specific question we address is the nature and mode of action of the information that drives the plastic changes observed when the normal visuo-motor relationship of the eye and/or hand is altered, as well as the consequences of plastic changes in the oculomotor system for the motor system of the hand, and vice versa. A priori, we consider that this information is conveyed by vision and proprioception, which is classical, but also by vestibular signals, which is less so. This approach has allowed us to demonstrate the existence of distinct levels of control for sensorimotor adaptation. These plastic mechanisms are illustrated in the control of the amplitude of ocular saccades and in the control of the accuracy of pointing movements.
Invited by the Vision team (Pascal)
lun11Mar201311h-12h30Salle de conférences R229 (2ème étage), Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Sound symbolism: Neural representation and relation to language development
Sound symbolism refers to a non-arbitrary relationship between linguistic sound and meaning. Recently, sound symbolism has attracted researchers' attention as it seems to be connected to various important issues central to human cognition and language, including cross-modal mappings, synaesthesia, the origin of language, as well as language development and evolution. In this talk, I will present a series of studies conducted in my laboratory that explore (1) when and how sensitivity to sound-meaning correspondences arise in infants; (2) how sound symbolism is used in infant-directed speech; (3) how sound symbolism facilitates initial word-referent association in infants and later verb learning in preschool-age children; and (4) how sound symbolic words (Japanese mimetic words) are represented in the brain.
lun04Mar201311h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Quantitative modeling of early phonological development.
The past 40 years of psycholinguistic research have shown that infants learn their first language at an impressive speed. During the first year of life, even before they start to talk, infants converge on the basic building blocks of the phonological structure of their language. Yet, the way in which they achieve this early phonological acquisition is counterintuitive from the viewpoint of (psycho)linguistic theories. For instance, they learn phonotactics at an age when they know very few words and have not yet converged on the phoneme set of their language. We show that a quantitative modeling approach based on machine learning algorithms and speech technology applied to large speech databases can help to make sense of this developmental pattern. First, we argue that because of acoustic variability, phonemes cannot be acquired directly from the acoustic signal; only highly context-dependent and talker-dependent phones or phone fragments can be extracted in a bottom-up way. Second, words cannot be acquired directly from the acoustic signal either, but a small number of proto-words or sentence fragments can be extracted on the basis of repetition frequency. Third, these two kinds of proto-linguistic units can interact with one another to converge on more abstract units. The proposal is therefore that the different levels of the phonological system are acquired in parallel, through increasingly precise approximations. This accounts for the largely overlapping development of lexical and phonological knowledge during the first year of life. Further consequences of this quantitative approach to development are discussed.
lun25Fév201311hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
The Evolution of Brain Asymmetry
Asymmetry of the brain and behaviour (lateralization) has traditionally been considered unique to humans. However, research has shown that this phenomenon is widespread throughout the vertebrate kingdom and found even in some invertebrate species. A similar basic plan of organization exists across vertebrates. Lateralization from four perspectives – function, evolution, development and causation – will be considered, covering a wide range of animals. The benefits of having a divided brain will be discussed, as well as the influence of experience on its development.
Invited by the Perception-Action team (J. Fagard)
Organization: Véronique
ven08Fév201311h30Salle R229, 2ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Representation of Hand Grasping Movements in Macaque Parietal, Premotor, and Motor Cortex
Specialized brain areas in the primate parietal (AIP), premotor (area F5), and motor cortex (hand area of M1) form a functional network that integrates sensory and cognitive signals for generating hand actions. I will highlight recent experimental results on how AIP, F5, and M1 generate grasping movements and how this activity can be used for decoding. Such characterizations could be useful to evaluate the suitability of these motor-planning areas for the development of neural interfaces in paralyzed patients.
ven01Fév201311h30Salle des Thèses, 5ème étage Bâtiment JACOB, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisINC SeminarShow details
Behavioural Correlates of Body Ownership in Immersive Virtual Reality
It has been shown that virtual reality can be successfully used to induce the illusion of body ownership and agency with respect to a virtual body that substitutes for one's own body. However, less attention has been paid to the consequences of this for attitudes and behaviour. Here we describe some experiments that have started to explore the attitudinal and behavioural correlates of the illusion of ownership over a body that differs in important ways from one's own.
*Mel Slater is ICREA Research Professor at the University of Barcelona, and also holds a part-time position as Professor of Virtual Environments, University College London. In Barcelona he co-leads the Event Lab (www.event-lab.org). He holds an Advanced ERC grant, TRAVERSE, and leads the FP7 project VERE (www.vereproject.org).
lun28Jan201311h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Ready to experience: Binocular function is turned on earlier in preterm infants
While there is a great deal of knowledge regarding the phylo- and ontogenetic plasticity of the neocortex, the precise nature of environmental impact on the newborn human brain is still one of the most controversial issues in neuroscience. The leading model system of experience-dependent brain development is binocular vision, also called stereopsis. Stereopsis provides accurate depth perception by aligning the two eyes’ views in some rodents and in most carnivores, primates and humans. The binocular system is unique among cognitive capacities because it is alike across a large number of species; therefore, a remarkable collection of molecular, cellular, network, and functional data is available to advance the understanding of human development. This system is also unique in terms of the well-defined timeline of developmental events, which persistently brings it into the limelight of studies on cortical plasticity.
To address the origin of early plasticity of the binocular system in humans, we studied preterm human neonates as compared to full-term infants. We asked whether early additional postnatal experience, during which preterm infants receive approximately two extra months of environmental stimulation and self-generated movement, leads to a change in the developmental timing of binocular function. Remarkably, the extra stimulation time leads to a clear advantage in the cortical detection of binocular correlation. In spite of the immaturity of the visual pathways, the visual cortex is ready to accept environmental stimulation right after birth. The results suggest that the developmental processes preceding the onset of binocular function are not pre-programmed, and that the mechanisms turning on stereopsis are highly experience-dependent in humans. This finding opens up a number of further questions with respect to human-specific cortical plasticity, and calls for comparative developmental studies across mammalian species.
Mon 14 Jan 2013, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
On the consequences of bilingualism on language processing...and beyond
In this talk I will review various current issues that are being explored in the domain of bilingual language processing. I will pay special attention to issues related to bilingual language control during speech production and the consequences of bilingualism for language processing in general. I will also comment on the consequences of bilingualism on the functioning of certain executive control abilities. Finally, I will describe some recent studies conducted in my lab regarding sentence comprehension in bilinguals.
Mon 17 Dec 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Linguistic computation & the illusion of conceptual change in number word learning
In this talk I address the problem of conceptual change in the domain of number. My suggestion, by analogy with the birth of formal semantics in the 1960s, is that characterizing the semantics of early numerical concepts seems impossible if pragmatics is ignored, since children's early numerical representations appear to be incommensurable with their later knowledge of number. I argue that this incommensurability is likely an illusion, however, and that when pragmatic inference is accounted for, number word meanings can be explained by existing representational resources that children use to acquire non-exact quantifiers. To make this case, I examine two candidate conceptual changes. First, I ask whether exactness is unique to number words, and conclude that it is not, but falls out from non-exact lexical meanings shared with quantifiers, plus Gricean quantity implicature. To make this case, I will review evidence that young children can compute sophisticated conversational implicatures, and that behaviors which appear to support true conceptual change are in fact explained by pragmatic inference. Second, I ask whether learning to use counting to label sets involves a conceptual change, and argue that it does not. Instead, learning to count is purely procedural in nature, though it lays the groundwork for learning the inferential roles of number words, and thus for acquiring mathematical knowledge.
Invited by Véronique
Organized by Véronique
Fri 14 Dec 2012, 11h. R229, 2ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Nature and Nurture in language acquisition: Anatomical and functional brain-imaging studies in infants
The first months of life are the "terra incognita" of our knowledge of child development. Although research in psychology has shown that children are the prime actors of their own learning from the first days of life, we have only limited access to what children of this age think, feel, and learn. Thanks to the development of non-invasive brain-imaging techniques, we can now study the cerebral bases of infant cognition. I will present studies combining structural and functional brain-imaging techniques to understand which main features of the human infant brain can explain language learning.
Mon 3 Dec 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Visual attention and visual remapping in patients with damage to posterior parietal cortex
I will review evidence showing that the posterior parietal cortex (PPC) is a crucial cortical region for visual attention and visual remapping in humans, as suggested by the Balint-Holmes syndrome and also by monkey electrophysiology. We also review evidence showing a right-hemispheric dominance for visuo-spatial processing and representation in humans. Accordingly, visual disorganization symptoms (intuitively related to remapping impairments) are observed in both neglect and constructional apraxia. More specifically, we review findings from the intervening-saccade paradigm in humans and present data suggesting a specific role of the asymmetrical network at the temporo-parietal junction (TPJ) in the right hemisphere in visual remapping: following damage to the right dorsal PPC as well as to part of the corpus callosum connecting the PPC to the frontal lobes, patient OK exhibited an impairment in a double-step saccadic task when the second saccade had to be directed rightward. This singular and lateralized deficit cannot result solely from the patient's cortical lesion; we therefore propose that it is due to his callosal lesion, which may specifically interrupt the interhemispheric transfer of information necessary to execute accurate rightward saccades toward a remapped target location. This suggests a specialized right-hemispheric network for visuo-spatial remapping which subsequently transfers target-location information to downstream saccade-planning regions that are symmetrically organized.
Invited by the Vision team
Mon 19 Nov 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Metacognitive approach to phenomenal and access consciousness
In many circumstances, conscious perception fails despite activation of relevant brain regions by subliminal visual presentation. Conceptually, failure to register a stimulus in awareness could be attributed to suppression of early sensory signals (perceptual blindness) and/or to a failure of attention to register suprathreshold signals (attentional blindness). However, these two types of failure of awareness are difficult to distinguish behaviourally because, in both cases, observers would report the absence of conscious percepts. To distinguish these two types of subjective blindness, we previously developed a metacognitive framework called subjective discriminability of invisibility, derived from the so-called Type 2 signal detection framework (Kanai, Walsh & Tseng, 2010, Consciousness & Cognition). This analysis method classifies blindness due to signal reduction, such as lowering of contrast, backward masking and interocular suppression, as perceptual blindness, whereas it classifies reduction of visibility due to attentional distraction, the attentional blink and enhanced spatial uncertainty as attentional blindness. Moreover, when we explicitly manipulated the decision criterion by changing the likelihood of target-present trials, the percentage of target misses increased: when blindness was induced by a conservative criterion shift, the same experimental paradigm moved from perceptual blindness to attentional blindness. The relevance of these findings for the philosophical concepts of phenomenal and access consciousness will be discussed.
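The Type 2 framework mentioned above builds on standard (Type 1) signal detection theory. As a rough, self-contained illustration of the criterion manipulation described in the abstract (the function names and numbers below are my own, not taken from the study), one can compute sensitivity (d') and criterion (c) from hit and false-alarm rates and verify that a more conservative criterion raises the miss rate even when sensitivity is unchanged:

```python
# Illustrative sketch only (not the authors' analysis): Type 1 signal
# detection measures under the standard equal-variance Gaussian model.
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit (inverse cumulative normal)

def dprime_and_criterion(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c for one condition."""
    d = Z(hit_rate) - Z(fa_rate)
    c = -0.5 * (Z(hit_rate) + Z(fa_rate))
    return d, c

def miss_rate(d, c):
    """Predicted miss rate: hit rate is Phi(d/2 - c), misses are the rest."""
    return 1 - NormalDist().cdf(d / 2 - c)

# Same sensitivity, more conservative criterion -> more target misses,
# mirroring the effect of lowering the likelihood of target-present trials.
print(miss_rate(1.5, 0.0))  # neutral criterion
print(miss_rate(1.5, 0.8))  # conservative criterion: higher miss rate
```

A Type 2 analysis then asks how well confidence (or visibility) reports discriminate correct from incorrect responses; the subjective-discriminability-of-invisibility measure applies that idea to reports of invisibility.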
Mon 5 Nov 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Medio-frontal cortex and response monitoring: From response detection to general purpose evaluation of actions
Errors play an essential role in maintaining adaptive behavior. The "Error Negativity" ("Ne", or "Error-Related Negativity", ERN), a medio-frontal brain potential occurring just after an erroneous action, was discovered by Michael Falkenstein and colleagues and has been thought to be the neural correlate of error detection. Since its discovery, it has been at the core of several influential models of cognitive control. However, the presence of a similar activity on correct trials (after Current Source Density estimation) is problematic for all those models. Although it has been argued that the activity observed on correct trials does not reflect the same phenomenon, we will present Independent Component Analysis, source localization and intracerebral evidence showing that the negative waves reported after correct responses, partial errors and overtly erroneous responses reflect the same activity, whose amplitude is modulated by the degree of correctness. We will also report on the functional link between this early medio-frontal activity and later, more anterior ones that seem more specific to errors. The consequences of these results in functional terms will be discussed.
Mon 22 Oct 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The attentive brain and its failures
The relationship between spatial attention and conscious perception is currently the object of intense debate. Evidence of double dissociations between attention and consciousness casts doubt on the time-honored concept of attention as a gateway to consciousness. However, recent results from experimental psychology, neuropsychology, neurophysiology and neuroimaging indicate that distinct sorts of spatial attention can have different effects on conscious visual perception. While endogenous (top-down) attention has a weak influence on subsequent conscious perception of near-threshold targets, exogenous (bottom-up) forms of spatial attention appear instead to be a necessary, although not sufficient, step in the building of reportable visual experiences. Fronto-parietal networks important for spatial attention, with peculiar inter-hemispheric differences, constitute plausible neural substrates for the interactions between exogenous spatial attention and conscious perception.
Chica AB and Bartolomeo P (2012) Attentional routes to conscious perception. Front. Psychology 3:1. doi: 10.3389/fpsyg.2012.00001
Bartolomeo P, Thiebaut de Schotten M and Chica AB (2012) Brain networks of visuospatial attention and their disruption in visual neglect. Front. Hum. Neurosci. 6:110. doi: 10.3389/fnhum.2012.00110
Invited by the Vision team (Patrick)
Mon 15 Oct 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Multisensory perception of the body in early life
The methods used for investigating perceptual competency in early life have revealed much about infants' visual perceptions of their extrapersonal world. However, we know much less about the development of infants' perceptions of their own bodies and their nearby environments. In this talk I will describe a number of recent findings from the Goldsmiths InfantLab which provide both behavioural and physiological evidence concerning the early development of multisensory representations of the body and the environment impinging upon the body. We have investigated a number of questions including: i) how infants and children develop the ability to appropriately integrate multisensory cues to the position of their own limbs, ii) how infants and children come to be able to represent the layout of their body and limbs across changes in limb posture, and iii) how children come to perceive their own body in terms of a set of categorical parts. I will argue that developmental researchers will need to investigate multisensory perceptual abilities from an embodied perspective in order to gain a full picture of early perceptual and cognitive development.
Invited by Kevin O'Regan and Jacqueline Fagard
Mon 8 Oct 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Human brain activity during selective and divided auditory and visual attention
Studies applying positron emission tomography or functional magnetic resonance imaging have shown that during auditory and visual selective attention, activity is enhanced in prefrontal and parietal cortical areas involved in attentional tuning of modality-specific areas. However, there is little or no such activity during highly trained tasks such as reading or selective listening to a particular speaker (Alho et al., Cogn. Brain Res. 2003; Brain Res. 2006). Moreover, our results in the auditory modality (Salmi et al., Brain Res. 2009) show that, unlike in the visual modality (cf. Corbetta & Shulman, Nat. Neurosci. 2002), there is marked overlap between the prefrontal, parietal, and modality-specific areas activated by voluntary orienting of attention and by involuntary attention to task-irrelevant sounds. Our results also indicate enhanced prefrontal and parietal activity during division of attention between auditory and visual phonological and spatial tasks (Salo et al., in preparation) and during selective attention to one of two simultaneous dichotic speech sounds (Westerhausen et al., Neuropsychologia 2010). A related magnetoencephalographic study suggests that the so-called right-ear advantage typically observed during divided attention to dichotic speech is caused by a rightward bias of attention (Alho et al., Brain Res. 2012).
Tue 2 Oct 2012. Salle Leduc, rez-de-chaussée, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. Lateralization, praxis and communicative gestures.
9h: Welcome coffee
9h30-10h: Natalie Uomini (U. of Liverpool, UK): "Traces of handedness in prehistoric humans"
10h-10h30: Peter MacNeilage (U. Texas, USA): "Rightward Action asymmetries in vertebrates: From the whole body to the hand"
10h30 – 11h: Break
11h-11h30: Ghislaine Dehaene (INSERM, Neurospin/CEA, Saclay): "Structural and functional lateralization in the human infant brain"
11h30-12h: Peter Hepper (The Queens U. Belfast, Ireland): "The emergence and disappearance of fetal handedness"
12h-12h30: George Michel (U. North Carolina, Greensboro, USA): "Infant handedness as a scaffold for developing language"
12h30-14h Lunch-buffet / poster session
14h-14h30: Jacqueline Fagard (U. Paris Descartes, France): "What is the link (if any) between the development of hand preference for object manipulation and for communicative gestures in infancy?"
14h30-15h: William D. Hopkins (Yerkes Center & Agnes Scott College, Atlanta, USA): "Handedness and neuroanatomical asymmetries in primates"
15h30-16h: Amandine Chapelain, A. Maille, A. Laurence, & C. Blois-Heulin (U. Rennes): "What can the Bishop QHP task tell us about manual laterality of guenons and mangabeys?"
16h-16h30: Helène Meunier (U. Strasbourg, France): "Hemispheric specialization for a communicative gesture in different primate species"
16h30-17h: Adrien Meguerditchian (U. Aix-Marseille, France): "Handedness in wild chimpanzees for gestural communication, baobab fruit cracking and bimanual processing"
17h: Jacques Vauclair (U. Aix-Marseille, France): Concluding remarks and discussion
Thu 27 Sep - Fri 28 Sep 2012. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. Amsterdam-Paris Perception-Action Fest.
Thursday, 27 Sept.
14:30-14:40 Patrick Cavanagh: Introductory remarks
14:40-15:00 Jeroen Smeets: Inconsistencies in perception
15:00-15:20 Claire Sergent: Retro-perception: post-cued attention can trigger conscious perception
15:20-15:40 Maria Matziridi: Perisaccadic mislocalisation
15:40-16:00 Thérèse Collins: Attributing non-predicted retinal errors to self or to world
16:00-16:20 John Greenwood: Crowding and saccades
16:20-16:40 Katinka van der Kooij: Spatial adaptation
16:40-17:00 Coffee break
17:00-17:20 Pascal Mamassian: Stereoacuity + audiovisual integration
17:20-17:40 Irene Kuling: Haptics
17:40-18:00 Mark Wexler: Spatial constancy in vision and touch
18:00-18:20 Nienke Debats: Haptic cue combination
18:20-18:40 Andrei Gorea: Mean computation in time
18:40-19:00 Patrick Cavanagh: You only mistake what you attend to
Friday, 28 Sept.
9:00-9:20 Eli Brenner: Interception
9:20-9:40 Vincent de Gardelle: Confidence: Second-order sensitivity and biases
9:40-10:00 Devika Narain: Learning hidden dependencies
10:00-10:20 Dov Sagi : Perceptual learning
10:20-10:40 Coffee break
10:40-11:00 Emmanuel Guigon: A model of reward- and effort-based optimal decision making and motor control
11:00-11:20 Leonie Oostwoud Wijdenes: Fast responses
11:20-11:40 Joe MacIntyre: Thoughts on multisensory integration
11:40-12:00 Rob van Beers: Motor learning
Tue 25 Sep 2012, 16h-17h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The acquisition of phonological constancy: Early development of ability to recognize words spoken in an unfamiliar accent
Numerous prior findings indicate that 19-month-olds detect minimal-pair phonetic distinctions in newly learned and already-known words more quickly and reliably than 14- to 15-month-olds. Those studies all employed single-phoneme manipulations ("mispronunciations") of words the child knew or had been taught, which essentially tests children's sensitivity to phonetic differences that also convey phonological distinctions. I will present findings from our lab that indicate a similar trajectory for the development of the complementary ability to recognize the "phonological constancy" of familiar words when they are pronounced in ways the child has not experienced, i.e., produced in an unfamiliar regional accent of the child's native language. We recently reported that 19- but not 15-month-olds can recognize words spoken in an unfamiliar accent (Best, Tyler, Gooding, Orlando & Quann, 2009). I will also present several follow-up studies showing that the younger age group has difficulty recognizing even native-accented words when stimulus variability is increased (more speakers, words, and tokens), that vocabulary size rather than age per se is the correlate of stable versus unstable phonological constancy, and that these patterns hold up in a direct measure of word identification as well as in listening preferences for familiar toddler words over unfamiliar low-frequency adult words. Additionally, I will describe two recently completed studies suggesting that perceptual assimilation of accent differences within the native language may help account for these developmental effects on recognition of familiar words spoken in other accents. Implications for understanding the relationship between vocabulary development and the growth of phonological skills will be discussed.
Mon 24 Sep 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Action video games as exemplary learning tools
Technology, from chatting on the internet to playing video games, has invaded all aspects of our lives and, for better or for worse, is changing who we are. Can we harness technology to effect more changes for the better? Yes we can, and not always in the way one might have expected. In a surprising twist, a mind-numbing activity such as playing action video games appears to lead to a variety of behavioral enhancements in young adults.
We will see that playing action-packed entertainment video games induces improvements in perceptual, attentional and cognitive abilities that extend well beyond the specific tasks in the game. A training regimen whose benefits are so broad is quite unprecedented. Evidence for the range of skills modified will be reviewed, and the factors in action-game play that promote generalization of learning and brain plasticity will be discussed.
Mon 9 Jul 2012, 14h-15h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Constraints on Visual Statistical Learning in Infancy
Statistical learning is the process of identifying patterns of probabilistic co-occurrence among stimulus features, essential to our ability to perceive the world as predictable and stable. Research on auditory statistical learning has revealed that infants use statistical properties of linguistic input to discover structure, including sound patterns, words, and the beginnings of grammar, that may facilitate language acquisition. Previous research on visual statistical learning revealed abilities to discriminate probabilities in visual sequences, leading to claims of a domain-general learning device that is available early in life, perhaps at birth. More recent research, however, challenges this view. Visual statistical learning appears to be constrained by limits in infants' attention and memory, raising the possibility that statistical learning, like rule learning, may be best characterized as domain-specific. Implications for theories of cognitive development will be discussed.
Mon 25 Jun 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Attention, integration, and the multisensory perception of synchrony
The last few years have seen a rapid growth of interest in issues related to the temporal aspects of multisensory perception in humans. In this talk, I will discuss some of the key factors that have been shown to modulate people's sensitivity to temporal asynchrony for both simple and complex stimuli using both simultaneity and temporal order judgment tasks. I will then review the evidence concerning how the brain responds (i.e., adapts) to various kinds of on-going asynchronous stimulation. Psychophysical research demonstrating the role of various kinds of attentional manipulation on temporal perception will also be highlighted. This will lead on to a discussion of "the unity effect" and the role of attention in the temporal segregation versus integration of multisensory signals. Finally, I will highlight some of the latest neuroimaging evidence concerning multisensory temporal perception including some intriguing findings concerning the existence of a spatial representation of temporal order in superior temporal sulcus.
Fri 15 Jun 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Studying Time Perception: Weber's law and a few other critical issues
One fundamental aspect of adaptation to the environment is the capacity to process temporal information. One of the most influential contemporary theories accounting for this capacity is scalar expectancy theory. This theory is based on the assumption that there is an internal, central clock described as a pacemaker-accumulator device. According to the theory, the ratio of variability to time, or Weber fraction, should be constant over a wide range of durations; in other words, Weber's law should hold for time. However, there is considerable evidence in the timing and time-perception literature that this fraction is not constant. This presentation takes a close look at the Weber fraction for very brief intervals (< 2 s), showing that, in many different experimental conditions involving human participants, the fraction is not constant. For instance, the Weber fraction is higher at 1.9 s than at 1 s. This violation of scalar timing seems to hold regardless of the method used, and whether single or multiple intervals are presented. The talk will also emphasise a multi-modal approach to time perception, including a close look at the auditory mode (music and speech). Finally, it will offer an overview of the other research activities in my lab.
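As a back-of-the-envelope illustration of the quantity at issue (the data and function name below are hypothetical, not from the talk): the Weber fraction for time is simply the variability of duration judgments divided by the duration itself, and scalar timing predicts it should stay constant across durations.

```python
# Minimal sketch with made-up reproduction data: compute the Weber
# fraction (SD of judgments / target duration) for two target durations.
from statistics import stdev

def weber_fraction(judgments_s, target_s):
    """Ratio of judgment variability to the duration being timed."""
    return stdev(judgments_s) / target_s

# Hypothetical duration-reproduction data (seconds)
short_reps = [0.95, 1.02, 1.08, 0.91, 1.04, 0.98]   # 1.0 s target
long_reps = [1.70, 2.15, 1.95, 1.60, 2.20, 1.85]    # 1.9 s target

wf_short = weber_fraction(short_reps, 1.0)
wf_long = weber_fraction(long_reps, 1.9)
# Under Weber's law these would be equal; a larger fraction at 1.9 s
# would be the kind of violation the talk reports.
print(wf_short, wf_long)
```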
Mon 11 Jun 2012, 11h. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Impact of neural variability on human motor behavior
Neural systems have to deal with spatial and temporal imprecision, which is due in large part to sensory and motor signal variability. Some insights on how this 'noise' is tamed by the human CNS have come from studies of behavioral variability in perception and action, that have suggested that the brain optimally limits the impact of noise. I will review recent studies from my lab on eye saccades and arm pointing movements in healthy subjects and Parkinson's Disease patients. Results give further credit to the concept of optimal estimation and control, while indicating some of its limits for our understanding of action planning and control.
Invited by the Vision team (Thérèse)
Mon 4 Jun 2012. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Awareness of visual targets within the field defect of hemianopic patients.
In hemianopic patients, type I blindsight refers to detection within the field defect in the absence of any awareness, whereas type II blindsight refers to above-chance detection with reported awareness, but without seeing per se. Systematic sensory stimulation is the principal approach to many sensory and motor impairments in brain-damaged patients, and the parameters of visual stimulation are crucial in mediating any change. In detailed case studies, evidence will be presented for the dependency of awareness responses on stimulus properties. In addition, in a number of cases it appears that detection ability at early stages of training varies as a function of the distance of the stimulated area from the sighted-field border, with detection ability lacking at retinal locations deep within the field defect. Nevertheless, following repeated stimulation, after 5,000 to 10,000 trials detection performance improves. There therefore appears to be a continuum of performance from no detection, to type I blindsight, and eventually type II detection.
Invited by the Vision team (Pascal)
Mon 14 May 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Auditory Deficit as a Consequence Rather Than Endophenotype of Specific Language Impairment: Electrophysiological Evidence
It has long been debated whether specific language impairment (SLI) is caused by a low-level auditory deficit. A recent paper by Shafer et al. (2011) reported striking abnormalities of an auditory ERP component known as the T-complex in children with SLI. We replicated this finding in a study of 32 children and teenagers with SLI who were compared with matched controls. We considered four models that might explain the results: (1) the Endophenotype model, in which auditory impairment is seen as the underlying cause of literacy problems; (2) the Additive Risks model, in which auditory deficit is not part of the phenotype of dyslexia but nevertheless exacerbates literacy problems; (3) the Pleiotropy model, in which auditory deficit and literacy problems are separate consequences of the same genetic risk factor; and (4) the Neuroplasticity model, in which auditory processing is adversely influenced when a child has literacy problems. Inclusion of data from parents can help distinguish between causal models, and in this case supported the idea that auditory ERP abnormalities are more a consequence than an underlying cause of language impairment.
Invited by the Speech team
Mon 23 Apr 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Humans do not possess a visual sense of approximate number
There is current interest in how we are able to make an estimate of the approximate number of objects in a scene: existing evidence suggests that we may have a dedicated "visual number" sense that may even be linked to mathematical ability. In this talk I will present evidence from my lab that, contrary to this position, human sense of number is not independent of our sense of space/distance and that our ability to estimate one depends on the other. This implicates a common perceptual metric, and we describe one candidate based on the relative output of high and low spatial frequency-tuned filters (emulating the operation of visual neurons in the geniculate and primary visual areas). This simple model can explain a range of perceptual phenomena associated with approximate number estimation.
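The candidate metric sketched in the abstract can be caricatured in a toy one-dimensional example (entirely my own construction; it uses a discrete Fourier transform in place of the geniculate/V1-style filters the talk refers to, and all names are hypothetical): as the number of items in a scene grows, relatively more of the scene's energy falls into high spatial-frequency bands, so a high/low energy ratio can track approximate number without a dedicated number sense.

```python
# Toy sketch, not the authors' model: energy of a 1-D "dot scene" in
# low vs high spatial-frequency bands, computed with a direct DFT.
import cmath

def band_energy(signal, freqs):
    """Sum of squared DFT magnitudes over the given frequency indices."""
    n = len(signal)
    total = 0.0
    for k in freqs:
        coef = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))
        total += abs(coef) ** 2
    return total

def scene(num_items, size=64):
    """1-D scene: num_items evenly spaced unit impulses ("dots")."""
    s = [0.0] * size
    for i in range(num_items):
        s[(i * size) // num_items] = 1.0
    return s

def high_low_ratio(signal):
    """Relative energy in high vs low spatial-frequency bands."""
    low = band_energy(signal, range(1, 5))
    high = band_energy(signal, range(5, 32))
    return high / (low + 1e-12)

# More dots -> finer spatial structure -> larger high/low ratio
print(high_low_ratio(scene(4)), high_low_ratio(scene(16)))
```

Because the same ratio also changes with item spacing and size, a metric like this confounds number with spatial density, which is exactly the dependence between number and space the talk argues for.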
Invited by the Vision team
Tue 3 Apr 2012, 13h55. Salle Dussane, Ecole normale supérieure, 45 rue d'Ulm, 75005 Paris. LPP seminar.
High levels of speech recognition with electric stimulation of the brainstem in adults and children
Invited by the Audition team
Mon 5 Mar 2012, 11h. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Is Time Identical to Becoming?
One century after Einstein's formulation of Special Relativity, we are still not sure what we mean when we talk of "time" or "arrow of time". We shall try to show that one source of this difficulty is our tendency to confuse, at least verbally, time and becoming, i.e. the course of time and the arrow of time, two concepts that the formalisms of modern physics are careful to distinguish.
The course of time is represented by a time line that leads us to define time as the producer of duration. It is customary to place on this time line a small arrow that, ironically, must not be confused with the "arrow of time". This small arrow is only there to indicate that the course of time is oriented, has a well-defined direction, even if this direction is arbitrary.
The arrow of time, on the other hand, indicates the possibility for physical systems to undergo, over the course of time, changes or transformations that forever prevent them from returning to their initial state. Contrary to what the expression "arrow of time" suggests, it is therefore not a property of time itself but a property of certain physical phenomena whose dynamics are irreversible. By its very definition, the arrow of time presupposes the existence of a well-established course of time within which, in addition, certain phenomena have their own temporal orientation.
We think that it is worthwhile to emphasize the difference between several issues traditionally subsumed under the label "the problem of the direction of time". If the expressions "course of time", "direction of time" and "arrow of time" were better defined, systematically distinguished from one another and always used in their strictest sense, the debate about time, irreversibility and becoming in physics would become clearer.
Keywords: time, time's arrow, temporal asymmetry, principle of causality, irreversibility.
Invited by the Perception-Action team
Tue 21 Feb 2012, 11h-12h30. Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Human neuroscience: first- and third-person perspectives
Throughout my career I have been involved in empirical science and clinical medicine, and from there moved to studies of first-person accounts of chronic neurological conditions: sensory loss, facial paralysis, spinal cord injury, chronic pain and cerebral palsy. I view the two approaches to understanding, empirical and narrative/biographical, as providing complementary and occasionally synergistic insights. In the lecture I will explore examples of this. Merleau-Ponty wrote in one of his last works that 'science manipulates things and gives up living in them.' I want to explore both.
Invited by the Vision team
Mon 13 Feb 2012, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Bayesian judgement of sameness and difference in human observers.
Cognitive scientists have long debated the mechanisms by which observers judge similarity and difference in the visual environment. One classic finding is that human observers are faster at judging two visual stimuli to be the same than different. This 'fast-same' effect is counterintuitive, because visual similarity can only be verified by an exhaustive search over all relevant features or dimensions. A further puzzle is that the effect is sensitive to the criterial number of features on which two items must match in order to be judged similar – the criterion effect. For more than 50 years, psychologists have sought to provide a unified account of perceptual comparison that can accommodate these two phenomena. Here, we show that a Bayesian observer model in which stimulus features are processed simultaneously can account for both effects. The model predicts decision latencies for humans making perceptual comparison judgments about visual stimuli with both discrete and continuously varying feature information. The model incorporates two assumptions: that perceptual inference occurs across an internal space whose geometry reflects the true physical differences among stimuli in the external world, and that participants have a bias to expect the world to remain stable. These findings contribute to a growing literature arguing that the human visual system performs perceptual inference in a statistically optimal fashion.
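The decision rule described in this abstract can be sketched as a toy Bayesian same/different observer. This is an illustrative reconstruction, not the authors' actual model: the Gaussian noise assumption, the parameter values, and the function names are all hypothetical.

```python
import math

def log_normal_pdf(x, var):
    """Log density of a zero-mean Gaussian with variance var, evaluated at x."""
    return -0.5 * math.log(2 * math.pi * var) - x * x / (2 * var)

def same_different_log_odds(a, b, noise_var=0.05, feature_var=1.0, prior_same=0.6):
    """Posterior log-odds of 'same' for two noisy feature vectors a and b.

    All features contribute simultaneously (parallel processing), and
    prior_same > 0.5 encodes a bias to expect the world to remain stable.
    Hypothetical parameter values, for illustration only.
    """
    log_odds = math.log(prior_same / (1 - prior_same))  # stability bias
    for x, y in zip(a, b):
        d = x - y
        # Under 'same', the difference d is pure sensory noise;
        # under 'different', d also reflects the spread of feature values.
        log_odds += log_normal_pdf(d, 2 * noise_var)
        log_odds -= log_normal_pdf(d, 2 * noise_var + 2 * feature_var)
    return log_odds

print(same_different_log_odds([0.2, 0.8, 0.5], [0.2, 0.8, 0.5]))   # positive: 'same'
print(same_different_log_odds([0.2, 0.8, 0.5], [1.9, -1.2, 2.4]))  # negative: 'different'
```

A latency account would read the magnitude of the log-odds as a rate of evidence accumulation: identical displays generate strong, coherent evidence for 'same' across all features at once, which is one way to rationalize the fast-same effect within this sketch.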
Invited by A. Gorea
Sat 11 Feb 2012, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Invited by the Vision team (Pascal)
Mon 6 Feb 2012, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
On letter identities and letter positions
Starting from a model of orthographic processing that distinguishes between retinotopic letter detectors and word-centered sublexical orthographic representations, I will describe recent empirical research that systematically compared encoding of identity and position information for random letter strings (e.g., PGFMR) and sequences of familiar symbols (e.g., £ ? % < &). This research reveals important differences in the way these two types of stimuli are processed in a variety of paradigms designed to investigate different types of perceptual and attentional phenomena: crowding, perceptual matching, change detection, and exogenous cueing. The results provide support for the two distinct mechanisms for coding letter position information that are postulated in our model of orthographic processing.
Invited by the Vision and Speech teams
Mon 23 Jan 2012. LPP seminar.
Hearing through our AERS – auditory scene analysis and deviance detection
In everyday situations, multiple sound sources are active in the environment. Typically, there is no unique solution to recovering the sound sources from the mixture of sounds arriving at the ears. To constrain the solution, the brain utilizes known properties of the acoustic environment. However, even using these "rules of perception" (Gestalt principles), alternative descriptions can be formed for any non-trivial sequence of sounds. Indeed, for some stimulus configurations, auditory perception switches back and forth between alternative sound organizations, revealing a system in which two or more possible explanations of the auditory input co-exist and continuously vie for dominance. I propose that the representation of a sound organization in the brain is a coalition of auditory regularity representations producing compatible predictions for the continuation of the sound input. Competition between alternative sound organizations relies on comparing the regularity representations on how reliably they predict incoming sounds and how much of the total variance of the acoustic input they jointly explain. Results obtained in perceptual studies using the auditory streaming paradigm will be interpreted in support of the hypothesis that regularity representations underlie auditory stream segregation. We shall then argue that the same regularity representations are also involved in the deviance-detection process reflected by the mismatch negativity (MMN) event-related potential (ERP). Finally, based on the hypothesized link between auditory scene analysis and deviance detection, we shall propose a functional model of sound organization and discuss how it can be implemented in a computational model.
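The competition scheme described here, in which regularity representations vie for dominance according to how reliably they predict incoming sounds, can be illustrated with a toy simulation. The predictors and the scoring rule below are my own simplifications, not the speaker's computational model.

```python
def dominant_organization(sequence, predictors):
    """Score each competing regularity representation by how many upcoming
    sounds it predicts correctly from the history; the most reliable one
    dominates perception (a deliberately simplified reliability measure)."""
    scores = {name: 0 for name in predictors}
    for i in range(1, len(sequence)):
        history, actual = sequence[:i], sequence[i]
        for name, predict in predictors.items():
            if predict(history) == actual:
                scores[name] += 1
    return max(scores, key=lambda name: scores[name])

# Toy ABA-triplet tone pattern from the auditory streaming paradigm.
sequence = list("ABA" * 4)
predictors = {
    "integrated (one stream, ABA cycle)": lambda h: "ABA"[len(h) % 3],
    "segregated (A stream only)": lambda h: "A",
    "repeat last sound": lambda h: h[-1],
}
print(dominant_organization(sequence, predictors))
# The cycle-tracking representation predicts every tone, so it wins.
```

With richer, ambiguous sequences, two representations can achieve similar reliability, and perception would be expected to switch between them, as in the bistable streaming percepts mentioned above.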
Invited by the Speech team
Mon 9 Jan 2012. LPP seminar.
Different Bodies, Different Minds: The body-specificity of language and thought
Do people with different kinds of bodies think differently? According to the body-specificity hypothesis (Casasanto, 2009), they should. When people interact with the physical environment, their bodies constrain their percepts and actions. In this talk, I will review evidence that beyond influencing perception and action, the particulars of people’s bodies also shape their words, thoughts, feelings, and choices. Moreover, patterns of body-world interaction partly determine how word meanings, mental images, and emotions are implemented in the brain, according to converging evidence from fMRI, EEG, rTMS, and visual hemifield studies. Finally, these studies show that influences of the body on the brain and mind are not static. To the extent that habits of body-world interaction are stable, the habits of neurocognitive activity they encourage are stable over time; to the extent that they change, neurocognitive representations may change accordingly. The body is an ever-present part of the context in which we use our minds, and therefore exerts pervasive influences on our thoughts by mobilizing perception, action, attention, and learning in body-specific ways. Bodily Relativity effects emerge from the body-specific deployment of ordinary and possibly universal neurocognitive mechanisms.
For more information:
Casasanto, D. (2011). Different Bodies, Different Minds: The body-specificity of language and thought. Current Directions in Psychological Science, 20(6), 378–383.
Willems, R.M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21(1), 67-74.
Casasanto, D. (2009). Embodiment of Abstract Concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138(3), 351-367.
Invited by the Perception-Action team
Mon 12 Dec 2011, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The links between attention and visual awareness
Consciousness, as described in the experimental literature, is a multi-faceted phenomenon that impinges on other well-studied concepts such as attention and control. Do consciousness and attention refer to different aspects of the same core phenomenon, or do they correspond to distinct functions? One way to address this question is to examine the neural mechanisms underlying consciousness and attention. If consciousness and attention pertain to the same concept, they should rely on shared neural mechanisms. Conversely, if their underlying mechanisms are distinct, then consciousness and attention should be considered distinct entities. I will present a series of experiments in which both attention and consciousness were probed at the neural level, and which point toward a neural dissociation between the two concepts. I will then present a new hypothesis on the links between attention and consciousness, the cumulative influence model, in which attention and consciousness correspond to distinct neural mechanisms feeding a single decisional process leading to behavior, and show how this model accounts for available neural and behavioral data. In this view, consciousness should not be considered a top-level executive function but should rather be defined by its experiential properties.
Fri 2 Dec 2011, 11h30, Salle de conférence, R229, 2ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. Neuroscience Seminar Series.
Face perception: from vision to social neuroscience
Face perception involves highly developed visual skills, allowing us to extract a range of information about other people, including age, gender, expression, attractiveness and identity, among others. Accordingly, data from neuropsychology and neuroimaging in humans, as well as research in non-human primates, have demonstrated that a widely distributed brain network is recruited during face processing. Some parts of this network also overlap with systems engaged by affective and social signals conveyed by non-facial stimuli. However, much remains unresolved concerning the exact role of the different brain areas responding to faces and emotion expressions. This presentation will review recent work from our group and others investigating (by means of fMRI, DTI and EEG) the function and structural interconnection of visual and limbic areas involved in processing faces, facial expressions, and other facial features. It will also illustrate new approaches based on multivoxel pattern analysis of brain activations, allowing us to decode distinct information contents from cortical areas activated in fMRI. The latter approach suggests that some areas in the temporal and frontal lobes, usually thought to mediate facial expression processing, may have a more general role in the supramodal representation of expressed emotions and mental states.
Invited by Patrick Cavanagh.
Mon 28 Nov 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
From subliminal perception to conscious access: cognitive and neuronal mechanisms
I will present several studies using behavioral and neuroimaging methods in human adults and infants. My talk will focus mainly on four topics. First, the extent and limits of subliminal perception, in terms of both behavioral and neural influences; second, new approaches to perception without awareness through emotional information induced by crowded videos; third, methods for measuring perceptual awareness in infants; finally, the existence of consciousness without attention/access, and the alternative partial-awareness hypothesis.
Invited by the Perception-Action team
Mon 14 Nov 2011, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Colour Categories in Language and Thought
Although the colour spectrum is physically continuous, colour categories are present in both language (i.e., colour terms) and thought (e.g., categorical perception of colour). In this talk, I will outline a series of developmental studies that investigate the origin of colour categories. I will present converging behavioural and electrophysiological evidence that infants respond categorically to colour. I will also present evidence that colour categories are lateralized to the right hemisphere of the infant brain, and appear to switch to the left hemisphere when colour terms are learnt. The findings will be related to fundamental issues in the cognitive sciences such as: i) how and when categories form; ii) the relationship between categories in language and thought; and iii) how categories are expressed in the brain.
Invited by the Perception-Action team
Fri 14 Oct 2011, 11h30-12h30, Salle des thèses, Bât JACOB, 5ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar.
Are individuals with autism or schizophrenia insensitive to pain? A reconsideration of the question
Case reports and clinical studies have described reduced or absent pain reactivity in autism and schizophrenia. We conducted an experimental study using a neurophysiological measure of pain reactivity (the nociceptive RIII reflex) in order to examine the empirical basis for the reported pain insensitivity in autism and schizophrenia. The RIII threshold and neurovegetative responses were measured in individuals with autism (N=20), schizophrenia (N=10) and typical development (N=20), matched on age, sex and pubertal stage. The results will be presented and discussed.
Mon 3 Oct 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The Phenomenon of Task-Irrelevant Perceptual Learning
Task-irrelevant perceptual learning (TIPL) has captured growing interest in the field of perceptual learning. The basic phenomenon is that stimulus features that are irrelevant to a subject's task (i.e. convey no useful information for that task) can be learned due to their consistent presentation during task performance. Here I give an overview of existing research on TIPL, with an emphasis on recent studies demonstrating that TIPL can result in learning equal to or greater than that produced by direct training on the same stimuli, that TIPL can be inhibited by attention, and that TIPL can produce enhanced memorization of visual scenes on the time-scale of a single experimental trial.
Invited by the Vision team
Fri 30 Sep 2011, 11h30-12h30, Salle du conseil, R229, 2ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar.
Perinatal inflammation: impact on the developing brain and long-term outcome
Clinical and experimental data strongly support the hypothesis that exposure to infection/inflammation during pregnancy or the perinatal period is deleterious for the brain. It can produce perinatal brain damage, which can in turn lead to long-term neurological and cognitive disabilities. In addition, some data suggest that perinatal exposure to inflammatory factors can subtly alter the programs of brain development, nevertheless resulting in neurological deficits in adulthood. The relationship between this latter observation and human diseases remains to be demonstrated.
Mon 26 Sep 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Hearing voices: social neuroscience of human voice perception
The human voice is the most important sound category of our auditory environment. The voice carries speech, but it is also an "auditory face" rich in affective and identity information. Little is known about how the processing of these different types of vocal information is organized in the human auditory cortex. We study the cerebral processing of vocal information using behavioural (voice morphing), EEG, MEG and fMRI methods. In a series of experiments in normal subjects, we examined the cortical processing of sounds of human voices. The results obtained suggest that: 1) Perceiving sounds of voice involves activation of "voice selective" areas of auditory cortex, mostly located in the superior temporal sulcus (STS) bilaterally, which are much more activated by sounds of voice than by non-vocal sounds; 2) Voice-selective areas in the right anterior STS are particularly involved in the paralinguistic aspects of voice perception, including speaker recognition; 3) This selectivity to voice appears to be largely species-specific, i.e., sounds of animal voices induce a much more restricted activation of STS. These results, as well as those from other neuroimaging studies, suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and suggest a neurocognitive model of voice perception largely similar to those proposed for face perception. We present recent evidence related to the time-course of cerebral voice discrimination, and to the network of cerebral regions involved in processing a socially relevant percept: vocal attractiveness.
Invited by the Audition team
Mon 26 Sep 2011, 15h, LNP conference room H335, 3rd floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Motion planning, perception and compositionality: Time arising from a mixture of geometries.
Behavioral and theoretical studies have led to the identification of kinematic and temporal features characterizing a variety of movements, ranging from reaching to drawing and curved trajectories. These features have been quite instrumental in investigating the organizing principles underlying trajectory formation. Similar constraints also play a significant role in visual perception of abstract and biological motion stimuli and in action observation.
Tamar Flash will report on several brain mapping and psychophysical studies aiming at identifying the neural correlates of these behavioral findings. She will also present a new theory of trajectory formation, inspired by geometrical invariance. The theory proposes that movement duration, kinematics, and compositionality arise from cooperation among several geometries: Euclidean, affine and equi-affine. Different geometries possess different measures of distance.
Hence, depending on the selected geometry, movement duration is proportional to the corresponding distance parameter. Expressing these ideas mathematically, the theory led to several predictions concerning drawing and locomotion trajectories, which were confirmed by examining experimental data. Tamar Flash will also discuss several of the theory's implications regarding brain representations of motion.
Finally, if time permits she will describe recent studies of compositionality and multi-joint coordination in locomotion and upper limb movements.
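The central quantitative idea, movement duration proportional to distance in the selected geometry, can be checked numerically. For a planar curve (x(t), y(t)), Euclidean arc length integrates sqrt(x'^2 + y'^2), while equi-affine arc length integrates |x'y'' - y'x''|^(1/3). The finite-difference sketch below is my own illustration of these standard definitions, not code from the talk:

```python
import math

def arc_lengths(xs, ys, dt):
    """Euclidean and equi-affine arc lengths of a sampled planar curve,
    via central finite differences (endpoints are skipped for simplicity)."""
    eucl = ea = 0.0
    for i in range(1, len(xs) - 1):
        dx = (xs[i + 1] - xs[i - 1]) / (2 * dt)
        dy = (ys[i + 1] - ys[i - 1]) / (2 * dt)
        ddx = (xs[i + 1] - 2 * xs[i] + xs[i - 1]) / dt ** 2
        ddy = (ys[i + 1] - 2 * ys[i] + ys[i - 1]) / dt ** 2
        eucl += math.hypot(dx, dy) * dt                 # Euclidean measure
        ea += abs(dx * ddy - dy * ddx) ** (1 / 3) * dt  # equi-affine measure
    return eucl, ea

# One period of an ellipse (a=2, b=1), traversed at constant parameter rate.
dt = 0.001
ts = [i * dt for i in range(int(2 * math.pi / dt) + 2)]
xs = [2 * math.cos(t) for t in ts]
ys = [math.sin(t) for t in ts]
eucl, ea = arc_lengths(xs, ys, dt)
# The exact equi-affine perimeter of an ellipse is 2*pi*(a*b)**(1/3),
# so ea should come out close to 2*pi*2**(1/3).
```

Under a mixture-of-geometries account, predicted movement duration along a path would then be proportional to a weighted combination of such distance measures rather than to the Euclidean length alone.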
For more information on Tamar Flash, go to: http://www.wisdom.weizmann.ac.il/~tamar/
Mon 5 Sep 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The condition of the rhythm classes in England
Both adult listeners and young infants can discriminate certain languages, such as English and Spanish, when segmental information is impoverished or eliminated. Such findings have been interpreted as support for categorical distinctions between language groups, the so-called “rhythm classes”. Parallel speech production studies have attempted to quantify language rhythm, specifically the durational marking of stress. By contrast with the categorical rhythm class distinctions suggested by perception experiments, metrics of temporal stress contrast have indicated gradient language distinctions.
We attempted to reconcile these divergent findings in a series of experiments probing the prosodic timing factors that predict language categorisation. We used an ABX categorisation task, with flat sasasa-type utterances used to focus specifically on the temporal cues to linguistic distinctions. Results showed that English adult listeners can distinguish languages within as well as between rhythm classes, using a variety of timing cues for the task. Speech rate differences are consistently exploited, where available, in preference to other cues. We interpret this in the light of findings indicating the importance of rate for listeners’ understanding of segmental and prosodic structure. We further suggest that infants may exploit sensitivity to speech rate and other timing cues, rather than categorical rhythmic distinctions, in early language acquisition.
Invited by the Speech team
Mon 4 Jul 2011, 11h30-12h30, Salle Sabatier A, 2nd floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar, LPP seminar.
The brain has evolved to generate action in the real world, and sensory modalities are optimized to support this objective. For action and perception to support survival, behavior must satisfy specific needs and their derived goals. This raises the fundamental question of how the integration of goals and perception occurs in terms of bottom-up and top-down processing. In my presentation I will investigate this question from the perspective of a neuromimetic, robot-based cognitive architecture called Distributed Adaptive Control (DAC) (Verschure, Voegtlin et al. 2003). DAC assumes that the integration of perception, action and motivation is structured at three distinct levels: reactive, adaptive and contextual. I will look in particular at the adaptive layer, where the state space of the environment is constructed. Specifically, I will show how key anatomical features of the visual cortex can give rise to a visual processing system that provides rapid, non-hierarchical classification and processing of complex stimuli, like faces, while allowing for state-dependent modulation of the processing stream in order to support segmentation using a so-called Temporal Population Code (Wyss, Konig et al. 2003). In support of these claims I will discuss experiments with a rodent-based mobile platform and the humanoid robot iCub. Subsequently, I will present an integrated framework that shows how acquired cognitive structures can be integrated with feed-forward visual processing (Mathews and Verschure In Press), giving rise to a self-contained, real-world intentional vision (and action) system. I will present psychophysical experiments that validate key predictions of this framework.
Mathews, Z. and P. F. M. J. Verschure (In Press). "PASAR-DAC7: An Integrated Model of Prediction, Anticipation, Sensation, Attention and Response for Artificial Sensorimotor Systems." Information Sciences.
Verschure, P. F., T. Voegtlin, et al. (2003). "Environmentally mediated synergy between perception and behaviour in mobile robots." Nature 425: 620--624.
Wyss, R., P. Konig, et al. (2003). "Invariant representations of visual patterns in a temporal population code." Proc Natl Acad Sci U S A 100: 324--329.
Fri 17 Jun 2011, 11h30-12h30, Salle Lavoisier A (3rd floor), Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. INC Seminar.
Investigating brain architecture through active touch sensing in animals and robots
Tony Prescott is Professor of Cognitive Neuroscience and leader of a major European research consortium on active touch. His research combines systems neuroscience, behavioural experiments and robotics, and aims at understanding how sensorimotor loops enable vibrissal sensing in animals and at designing new biomimetic robots.
The seminar will be followed by coffee and light refreshments, providing an opportunity for participants to meet and engage with Professor Prescott.
Fri 17 Jun 2011, 11h30-13h, Salle des thèses, 5e étage du Bâtiment Jacob, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. Neuroscience Seminar Series.
A core brain system in intelligent behaviour
In human fMRI studies, a common or multiple-demand (MD) pattern of frontal and parietal activity is associated with diverse cognitive demands, and with standard tests of fluid intelligence. In complex behaviour, goals are achieved by assembling a series of sub-tasks, creating structured mental programs. Behavioural, neuropsychological and fMRI data suggest a key role for MD cortex in defining and controlling the parts of such programs, providing a neurophysiological basis for intelligent thought and action. I shall discuss how fluid intelligence contributes to deficits in “executive function” after frontal lobe damage; the role of fluid intelligence in “goal neglect” when new behaviour is constructed; and the activity of MD cortex as a complex sequence of goal-directed behaviour unfolds.
Fri 27 May 2011. LPP seminar.
Investigating Language Acquisition through the Prosodic Development of Japanese: Nature of Infant-Directed Speech in Japanese
Infants learn much about the phonology of their own language during the first year of their lives. Since Japanese differs from English and European languages in important phonological ways, investigation of its acquisition has the potential to illuminate our general understanding of phonological acquisition. In this talk, we present data from Japanese infant-directed speech (IDS) to exemplify this point. When adults speak to infants and young children, they modify their speech, using higher pitch, exaggerated intonation, and shorter, slower phrasing. Although many IDS characteristics are assumed to be universal, not much is known about how the language-specific properties of a given language manifest themselves in IDS. In this talk, we will present the results of an ongoing project at the Laboratory for Language Development at RIKEN Brain Science Institute, Japan, to study the nature and the role of infant-directed speech for language acquisition. To this end, we are constructing a corpus of Japanese infant-directed speech (RIKEN Japanese Mother-Infant Conversation Corpus; R-JMICC), which contains both segmental (phonetic/phonological) and intonational annotation (X-JTobii). Using this corpus, we have begun detailed analyses of the segmental, lexical, as well as prosodic characteristics of Japanese IDS. Results of some of these analyses will be introduced. In addition, the results of an fMRI study of IDS to examine the effect of IDS on mothers will also be discussed. Preliminary results from these studies show that although some overall properties of IDS may be universal -- e.g., "exaggerated prosody" -- the specific ways to achieve it may differ from language to language.
Fri 13 May 2011, 11h, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
From subliminal perception to conscious access: cognitive and neuronal determinants
I will present empirical data using behavioral and neuroimaging methods in human adults and infants. My talk will focus on four main topics. First, I will discuss the extent and limits of subliminal perception, in terms of both behavioral and neural influences; second, I will present the gaze-contingent crowding method, a new approach for probing unconscious perception with long-lasting and dynamic stimulation; third, I will present several methods using psychophysical estimates and high-density EEG for probing perceptual awareness in infants; finally, I will focus on the existence of consciousness without attention/access, and discuss the alternative partial-awareness hypothesis.
Fri 1 Apr 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
Peripersonal Space: a multisensory interface for voluntary actions toward objects
A. Farnè, C. Brozzoli, L. Cardinali, F. Pavani
INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Centre, ImpAct Team, F-69000 Lyon, France; University Claude Bernard Lyon I, F-69000 Lyon, France; Department of Neuroscience, Brain, Body & Self lab, Karolinska Institut, SE-17177 Stockholm, Sweden; Center for Mind/Brain Sciences (CIMeC) & Dipartimento di Scienze della Cognizione e della Formazione, Università di Trento, Italy.
Neurophysiological studies in monkeys have described visuo-tactile neurons with both tactile and visual receptive fields, the latter limited to the space surrounding the former. Evidence from both neurotypical and brain-damaged populations supports the existence of a similar peripersonal space (PpS) representation in humans.
We combined perceptual and kinematic recordings to probe the link between the PpS and the planning/execution of voluntary actions (i.e., grasping or pointing). In a series of experiments, participants grasped (or pointed to) an object while performing a tactile discrimination task on the acting right hand (index finger = "top" or thumb = "bottom"). We measured the visuo-tactile interference evoked by a visual distractor appearing on the to-be-grasped object at an elevation congruent or incongruent with the tactile target.
Visuo-tactile interference was modulated as a function of the action phase: it was stronger during planning and at action onset than at baseline (before the action started). This increase was most pronounced in the early (200 ms after onset) and late (grip closing) execution phases. The modulation of right-hand performance was effector-specific, being absent when the left hand performed the grasp. The kinematic differences between grasping and pointing were mirrored by different PpS modulations. Finally, similar modulations were also present in experienced participants who were merely observing (no action) someone else's grasping.
These findings converge in showing that the PpS is a multisensory interface involved not only in defensive reactions but also in the production of voluntary actions, as a function of the sensory-motor transformations and kinematic demands specifically required.
Invited by F. Waszak
Fri 25 Mar 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
The typical and atypical development of the human “social brain”
A central issue in human development is how regions of the cerebral cortex become specialized for specific perceptual, motor, and cognitive functions. I will compare and contrast three general viewpoints on human functional brain development: a maturational view (in which cognitive and behavioural change is attributed to the maturation of underlying brain regions), a skill-learning view (in which the brain changes with cognitive development are viewed as similar to those seen when adults acquire complex new skills), and “Interactive Specialisation”. The latter view hypothesises that the functional specialisation of some regions of the cortex becomes increasingly finely tuned during postnatal development through interactions between different cortical regions, between cortical and sub-cortical structures in the brain, and interactions between the baby and its social and physical environment.
As an example of the Interactive Specialization approach I will review studies from our laboratory and others on the emergence of the “social brain”, a cortical network that enables us adults to recognise the identity, actions and intentions of other humans. My review of studies of face processing, eye gaze perception and human voice perception in infants and children support the Interactive Specialisation perspective. In the final part of my talk I turn to the atypical development of the social brain, and discuss recent studies of babies at-risk for a later diagnosis of autism.
Johnson, M.H. (2001) Functional brain development in humans. Nature Reviews Neuroscience, 2, 475-483.
Johnson, M.H. & de Haan, M. (2011) Developmental Cognitive Neuroscience, 3rd Edition. Wiley -Blackwell
Invited by the Language group
Fri 18 Mar 2011, 11h-12h30, Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar.
English duration patterns mirror perceptual asymmetries
This paper reports the results of two English experiments on timing and perception. The finding is that temporal patterns in production correspond to asymmetries in perception. We argue that these phenomena are best described in terms of auditory rather than articulatory representations. When more segments are present in a syllable, each segment is shorter; e.g. /æ/ and /d/ shorten from add to sad. These effects, referred to as compression, hold for English onset and coda consonants (Fowler 1983). We further distinguish simple compression, between items with one consonant vs. none in a given position (e.g. add-sad), from incremental compression, between items with one consonant vs. several (e.g. lad-clad). No study has fully examined how consonant manner and syllable position affect compression. We present an English nonce-word study of obstruents, nasals, liquids, and clusters in onset and coda position. All consonants are associated with simple vowel compression, but only some drive incremental compression. Liquids induce incremental compression in both onset and coda position, nasals only in onset position, and obstruents in neither position; e.g., /rod/-/brod/ and /dɔr/-/dɔrb/ display shortening, but not /don/-/donz/.
The results have broad consequences for the theory of timing. One analysis of compression treats it as emergent from general principles of articulatory organization (Fowler 1983, Nam et al. 2009). When articulatory gestures overlap, acoustic manifestations of those gestures are shorter. Thus, patterns of acoustic compression should correspond to independent facts about gestural timing. The asymmetries reported here, however, are not explained by any known facts about English gestural organization. While articulatory studies find that consonant clusters impinge more on a following vowel than singletons do (part of a phenomenon known as the C-center effect), the same is not generally found for coda consonants (Honorof & Browman 1995). Even if this effect did hold for codas, differences between consonant manners would be impossible to explain in articulatory terms.
We argue instead that compression effects are due to conflicting temporal pressures on segments and syllables (Fujimura 1987, Flemming 2001). Consonants differ with respect to compression because constraints on duration are perceptual and consonants convey different amounts of perceptual information about adjacent vowels. Timing patterns are thus explained by independent facts about perception. For instance, vowels shorten more adjacent to liquids than to obstruents because liquids help satisfy the duration requirements of an adjacent vowel more than obstruents do; liquids contain more information about adjacent vowels.
A series of perceptual experiments tested hypotheses about the relative vowel information in various parts of the speech stream outside the ‘vowel proper’. Subjects identified forward- and reverse-gated stimuli with excised vowels. Results mirror the production asymmetries discussed above. Subjects are significantly better at identifying adjacent vowels from liquids alone than from obstruents or nasals alone. In onset position, where nasals but not obstruents induce incremental compression, sensitivity to vowel contrasts increases significantly more as CV transitions are added back into stimuli for /nV/ sequences than for obstruent-vowel sequences. In coda position, where neither manner drives incremental compression, no such perceptual asymmetry exists.
Flemming, E. (2001). Scalar and categorical phenomena in a unified model of phonetics and phonology. Phonology 18, 7-44.
Fowler, C. (1983). Converging Sources of Evidence on Spoken and Perceived Rhythms of Speech: Cyclic Production of Vowels in Monosyllabic Stress Feet. Journal of Experimental Psychology 112(3), 386-412.
Fujimura, O. (1987). A Linear Model of Speech Timing. In Channon & Shockey (Eds.), In Honor of Ilse Lehiste (pp. 109-124). Dordrecht: Foris Publications.
Honorof, D. & C. Browman. (1995). The center or edge: how are consonant clusters organised with respect to the vowel? In Elenius and Branderud (eds.), Proceedings of the XIIIth ICPhS (pp. 552-555). Stockholm, Sweden: KTH and Stockholm University.
Nam, H., L. Goldstein & E. Saltzman. (2009). Self-organization of syllable structure. In Pellegrino et al. (Eds.), Approaches to phonological complexity (pp. 297-328). Berlin: Walter de Gruyter.
Invited by D. Pressnitzer
Fri 11 Mar 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Natural object colours
We readily call to mind the colour of a banana or strawberry. This phenomenon of “memory colour” for familiar objects, as Hering observed in the 19th century, depends to a large extent on colour constancy. If colours of objects did not remain stable under changing illumination, memory colours could not be reliably formed. In turn, “memory colour” itself may contribute to colour constancy, by providing a reference against which the incoming image may be compared to recover information about the illumination. In this talk, I will take a look at the surface properties of natural objects and how these lead to chromatic signatures that aid colour constancy as well as a modified interpretation of memory colour, the “memory gamut”, which encompasses the mottled yellow of the banana or the speckled red of the strawberry. I will also discuss the role that natural surface colours play in object recognition, illustrating with results from speeded object classification and texture discrimination tasks.
Invited by P. Cavanagh
Fri 4 Mar 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
The place of color in a world of moving (and still) shadows
I shall present an evolutionary argument for the structure of the chromatic system. Colors are there to solve the problem of telling illumination from reflectance boundaries in constrained situations (i.e., those in which other heuristics are not available). If shadows are evolutionarily earlier perceptual items than objects, it can be argued that color constancy for object surfaces could have exploited a pre-existing mechanism for discounting changes in illumination. Object color constancy would thus be a by-product of an earlier mechanism. Color is not there to help us classify objects, but to tell shadows from non-shadows in a mutable environment. In the presentation I'll address questions such as whether monochromats can make sense of shadows, and will contrast the present account with Shepard's hypothesis on the evolutionary origin of the architecture of the color system.
Invited by P. Cavanagh
Mon 14 Feb 2011, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (CAVLab Fests)
Fri 11 Feb 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Behavioral Dynamics of Visually-Guided Locomotion
How do humans generate paths of locomotion through a complex, changing environment? Behavioral dynamics studies how stable patterns of behavior emerge from the interaction between an agent and its environment. In this talk I will describe our studies of the on-line visual control of walking in a virtual environment, including steering, obstacle avoidance, interception, following, and pursuit-evasion. By modeling these elementary behaviors, we can predict paths of locomotion in more complex environments, explain why the rabbit escapes the fox on a zig-zag path, and ultimately aim to understand the collective behavior of crowds. The results demonstrate that locomotor behavior can emerge on-line as a stable solution of the system’s dynamics, making explicit path planning unnecessary.
Fri 4 Feb 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Target selection for saccadic eye movements
The question of why we choose certain locations in a visual scene as fixation targets for our eyes has intrigued scientists ever since they were first able to measure eye movements at all. Here I will show how several different factors, such as salience, object recognition, action plans, and reward, affect saccadic target selection.
Invited by P. Mamassian
Fri 28 Jan 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Hearing lips in L2: Multisensory contributions to speech perception in bilinguals
Spoken communication provides a unique manifestation of human multisensory processing because the correlates of speech are available to both the auditory and the visual systems. I am interested in addressing how these auditory and visual speech signals are integrated during perception, and in particular, how these multisensory processes take place when dealing with speech input in a second language. I will present evidence from several studies with adults and infants revealing the potential benefits and limitations of multisensory integration when parsing speech input in non-native languages. I will also present some findings about the potential neural correlates underlying these benefits. You can find more information about current projects and links to publications here: http://www.mrg.upf.edu
Invited by the language group
Fri 21 Jan 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Perceptual category learning at two interacting levels: speech interpretation in early language development
It is usually assumed that the development of speech perception works like this: first, infants use distributional clustering to learn the consonants and vowels of their language; then, young children use these sounds to recognize and distinguish words. On this account, distributional learning in infancy solves the problem of phonological interpretation in learning words. I will argue on the contrary that infants may use words to learn speech sounds, and that this learning does not itself cause mature phonological interpretation.
Invited by the language group
Mon 17 Jan 2011, 15h, LNP meeting room, UMR8119, 3rd floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
A "sensorimotor" view of seeing and sensation.
Fri 14 Jan 2011, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Structure-Function Relations Revealed by High-Resolution in vivo Retinal Imaging
This seminar will describe the development of laboratory instrumentation for three-dimensional imaging of the human retina at a cellular scale. The purpose is to study structure-function relations in normal aging and in retinal and optic nerve disease. These instruments use adaptive optics to correct for higher-order, temporally varying ocular aberrations. When combined with a fundus camera that illuminates the retina with a light flash, it is possible to image the cone photoreceptor mosaic thanks to the improvement in lateral resolution (~2 µm). Adaptive optics, however, does not improve axial resolution, so we combine this approach with an interferometric technique, optical coherence tomography. We will describe how these techniques work and how we have achieved resolution of ~3.0 µm in three dimensions. Application of these methods has demonstrated correlations between visual performance measures (multifocal ERG, contrast sensitivity, visual field sensitivity) and cone densities in a variety of patients with retinal and optic nerve diseases. Irregularities in the arrangement of patients' cone photoreceptors, changes in photoreceptor outer segment length, as well as changes in inner retinal layers will also be described.
Fri 17 Dec - Sat 18 Dec 2010, 9h30-13h15, Amphithéâtre Claude BERNARD, 3rd floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (AVA Christmas Meeting)
Tue 14 Dec 2010, 15h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Specific language impairment (SLI) 20 years on: What have we learned?
In 1990 the view that subgroups of specific language impairment (SLI) with distinct behavioural and genetic underpinnings existed was met with scepticism. Furthermore, the existence of a relatively pure form of this disorder, Grammatical(G)-SLI, was considered controversial. I will present data spanning some 20 years from individuals with G-SLI and more general SLI that have revealed the following: (1) G-SLI provides evidence for the existence of a relatively "pure" form of grammatical impairment alongside normally functioning auditory and non-linguistic abilities. (2) In 1996, behavioural data indicated that a strong genetic-biological factor, consistent with autosomal dominant inheritance, underlay G-SLI. (3) Investigations of SLI illustrate that language is not one system but multiple systems or "components" (including syntax, morphology, phonology) that can be differentially impaired at the behavioural and neural level. (4) Behavioural and brain-imaging investigations reveal, more specifically, that hierarchical structures in grammatical components affecting syntax, morphology, and phonology can be impaired alongside normal auditory and non-linguistic processing (the Computational Grammatical Complexity (CGC) hypothesis). These component deficits have independent yet cumulative impacts on linguistic acquisition and performance. In contrast, I will argue that pragmatic, semantic and lexical development can be relatively spared in G-SLI, but that these components show secondary impairments resulting from impaired grammatical cues for learning.
The specificity of grammatical deficits in SLI has enabled us to explore the role of grammar in the development and functioning of other cognitive abilities, such as scalar implicatures, number knowledge, theory of mind, and the effect of higher cognitive functions on attention and memory, and to challenge some existing theories. Thus, “turning up the microscope” on SLI and on grammatical abilities in other genetic language impairments (e.g., dyslexia, autism, Huntington’s disease) is starting to provide valuable insight into the biological and neural basis of cognitive development and functioning.
Finally, research findings and theoretical implications from SLI have been translated into a cross-linguistic clinical and research tool. They have enabled us to develop a highly focussed screening test for grammar and phonology (the GAPS test), which allows children with grammatical deficits or at risk of SLI and/or dyslexia to be identified early in development, so that they can receive the help they need.
Tue 14 Dec 2010, 17h, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Lateralization of Auditory Perception and Learning in Human Newborns: How it Happens and So What
The talk will offer evidence that nature has forced lateralized, asymmetrical auditory processing on us. The initial force is physical (non-biological) but its effects unfold developmentally (biologically). Some early consequences for learning are clear and dramatic. My hope for the presentation is that there will be an open discussion of the causes and the implications of asymmetric processing for the development of both speech perception and production.
Fri 3 Dec 2010, 14h-16h, Salle du Conseil, headquarters of Université Paris Descartes, 12 rue de l'Ecole de médecine, 75006 Paris (INC Seminar)
President A. Kahn will open the afternoon. Following the President’s remarks, several INC colleagues will give brief presentations on projects that reflect the multidisciplinary aspect of our research. We are also delighted that Idan Segev (Hebrew University, Jérusalem, Israël), member of our new Advisory Board, will be present to make the closing remarks. Representatives of the specialized press will be invited for this afternoon event, which will end with a cocktail.
The INC was created less than a year ago, resulting from the shared ambition of our nine member laboratories* to bring together researchers from different areas of neurosciences and cognition, in our common desire to understand how the brain functions, as well as to train the future generation of scientists.
Please join us the afternoon of Friday, December 3rd for the inauguration of the INC. We’re counting on your presence at this important event…what better way to show our solidarity and diversity!
INC member laboratories
- Laboratory of Neurophysics and Physiology (LNP), Université Paris Descartes, CNRS
- The Laboratory of Psychology and Perception (LPP), Université Paris Descartes, CNRS
- The Centre for Research in Sensorimotor Control (CESEM), Université Paris Descartes, CNRS
- Development and Neuronal Migration Team, Department of Genetics and Development, Institut Cochin, Université Paris Descartes, CNRS, Inserm
- Department of Genetics and Development of Skeletal Muscles, Institut Cochin, Université Paris Descartes, CNRS, Inserm
- Pathophysiology of Psychiatric Disorders Team, Hôpital Sainte-Anne, Université Paris Descartes, Inserm
- Relais d'Information sur les Sciences de la Cognition (RISC), Université Paris Descartes, CNRS, Ecole Normale Supérieure (ENS)
- Applied Mathematics Paris Descartes (MAP), Université Paris Descartes, CNRS, Fondation Sciences mathématiques de Paris
- Laboratory of Computer Science, Paris Descartes (LIPADE), Université Paris Descartes
Thu 2 Dec 2010, 14h-15h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (INC Seminar)
Mechanisms of visual categorization inferred from brain activity
If the brain is a machine that processes information, then its cognitive activity can be interpreted as a set of information processing states linking stimulus to response (i.e. as a mechanism or an algorithm). The cornerstone of this research agenda is the existence of a method to translate the measurable states of brain activity into the information processing states of a cognitive theory. Here, we contend that reverse correlation methods and concepts of Information Theory can provide this translation and we frame the transitions between information processing states in the context of Automata Theory. We illustrate, using examples from visual cognition, how this novel framework can be applied to understand the information processing algorithms of the brain in cognitive neuroscience.
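The reverse correlation methods mentioned in the abstract can be illustrated with a minimal classification-image sketch. Everything below is a hypothetical simulation for illustration only (the simulated observer, its template, and all noise levels are assumptions, not material from the talk): a simulated observer responds "seen" when noise stimuli happen to match an internal template, and averaging the noise conditioned on the response recovers that template.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pix = 5000, 64

# Hypothetical internal template the simulated observer matches against
template = np.zeros(n_pix)
template[24:40] = 1.0

noise = rng.normal(size=(n_trials, n_pix))                    # noise-only stimuli
seen = (noise @ template + rng.normal(0, 4, n_trials)) > 0    # noisy decision

# Classification image: mean noise on "seen" trials minus "not seen" trials
cimg = noise[seen].mean(axis=0) - noise[~seen].mean(axis=0)

# The recovered image peaks where the observer's template has weight
print(int(np.argmax(cimg)))
```

The conditional-mean difference is the simplest estimator; in practice, regularized regression variants are often used for the same purpose.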
Tue 23 Nov 2010, 16h45, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
I see what you said! Infant sensitivity to articulator congruency between audio and video with native and nonnative consonant contrasts
C. T. Best (MARCS Labs/UWS Australia & Haskins Labs, USA)
C. H. Kroos (MARCS Labs)
J. Irwin (Haskins Labs, USA & Southern Connecticut State University, USA)
We examined infants’ sensitivity to articulatory organ congruency between audio-only and silent-video consonants (lip vs. tongue tip closure) to evaluate three theoretical accounts of audio-visual perceptual development for speech: 1) learned audio-visual associations; 2) intersensory perceptual narrowing; 3) amodal perception of articulatory gestures. Effects of language experience were investigated in 4- vs. 11-month-olds’ cross-modal perception of native (English stops) and nonnative (Tigrinya ejectives) consonant contrasts. The 4-month-olds showed an articulator-congruency preference for both native and nonnative consonants, but it was constrained by trial order. The 11-month-olds’ more complex cross-modal responses differed for native vs. nonnative speech, suggesting an effect of increased native language experience. Results are at odds with associative learning and perceptual narrowing, but consistent with experiential tuning of amodal perception for two distinct articulators.
Fri 19 Nov 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Behavioural and neural evidence for separate visual memory systems in embryos and juveniles of cuttlefish (Sepia officinalis).
Among invertebrates, cephalopod mollusks (cuttlefishes, squids, and octopuses) exhibit very high behavioural flexibility, from predatory to defensive behaviour, as well as interindividual communication. They also display impressive memory abilities across a wide range of learning tasks. These behavioural skills are controlled by the most developed and centralized nervous system of all invertebrates. Amongst cephalopods, cuttlefish are particularly valuable models for studying the ontogenesis of memory systems: in juveniles, memory abilities seem to mature gradually during development. Very-short-term memory processes involved in prey-pursuit behaviour develop early in life (within the first few days after hatching), while long-term retention of an associative learning task (learned inhibition of the predatory behaviour) increases throughout the first three months of life. These phenomena are correlated with the post-embryonic maturation of the vertical lobe complex, a highly associative brain structure in cephalopods. Paradoxically, cuttlefish will prefer, for days, to feed on prey to which they were familiarized at early stages of development (in ovo or just after hatching). These recent data suggest the existence of food imprinting in early juveniles of Sepia. Putative neural bases and adaptive advantages of such early visual memory abilities will be discussed.
Invited by J. Fagard
Fri 29 Oct 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Role of extraretinal monitoring signals in the perceptual integration of space across eye movements
In everyday life, we continuously sample our visual environment by rapid sequences of saccadic eye movements and intervening brief fixations. For the successful integration of visual information into a coherent scene representation the visuo-motor system needs to deal with these constant self-induced displacements of the visual input on our retinae and distinguish them from motion in the outside world. Internal forward models may help to solve this problem: The brain may use an internal monitoring signal associated with the oculomotor command to predict the visual consequences of the corresponding saccadic eye movement and compare this prediction with the actual postsaccadic visual input. Recent neurophysiological studies in primates identified one candidate pathway for an internal monitoring signal that ascends from the superior colliculus to the frontal cortex, relayed by medial parts of the thalamus. I will present psychophysical work on perisaccadic and transsaccadic space perception in normal control subjects and from a recent case study in a patient with a lesion affecting trans-thalamic monitoring pathways. Our findings point towards an important role of internal monitoring signals for perceptuo-motor integration and conscious visual perception across eye movements. Internal monitoring signals may be important for the correct attribution of self-induced versus externally imposed changes in the continuous flow of our sensory experiences.
Invited by T. Collins
Fri 24 Sep 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
The dynamics of spoken-word recognition in context
Theories of speech comprehension have assumed that the recognition of spoken words consists of mapping the speech signal onto context-free representations. These representations, it is argued, abstract from most of the phonetic consequences of utterance context, such as prosodic structure or talker identity. However, my work using listeners’ saccadic eye movements to visual objects during speech comprehension shows that listeners utilize (rather than neutralize) the acoustic correlates of the context in which a spoken word occurs. Word forms are not abstract phonological prototypes, nor are they collections of past exemplars. Instead, I argue, the forms that listeners expect words to take are dynamically updated to their current context.
Thu 23 Sep 2010, 17h, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris
Molecular psychophysics: a hands-on introduction
In psychophysical experiments, irrespective of their topic, we usually summarize the behavior of the subjects by computing for each block or experimental condition an index for accuracy/sensitivity and/or response bias. Thus, the behavioral data are summarized by computing an average of many individual subjective judgments. David Green (1964) termed this way of looking at the data "molar psychophysics" and suggested a different and complementary approach, which he termed "molecular psychophysics". In this approach, trial-by-trial analyses are used to relate the responses on individual trials to stimulus variability or sometimes also to internal variability (i.e., internal noise). In this talk, I will introduce a molecular psychophysics approach that is a powerful tool for understanding the decision process when subjects face multiple sources of information, as for example when judging the loudness of a multitone complex, or when trying to detect a face in a random noise background. This "perceptual weight analysis" approach uses a specific combination of stimuli and data analysis techniques, and finds increasing use in audition, vision, and physiology. Perceptual weight analysis estimates the importance of individual stimulus components (e.g., the lowest frequency component in a multitone complex) for the decision of the listener. I illustrate perceptual weight analysis by data from experiments on the temporal weighting of loudness conducted in my lab. I also discuss other potential applications in audition, vision, and inter- or multisensory integration.
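The trial-by-trial logic of perceptual weight analysis can be sketched as follows. This is a toy simulation under stated assumptions (the number of components, trial counts, true weights, and the gradient-ascent fitting procedure are all illustrative choices, not details from the talk): each trial perturbs the level of every stimulus component, and regressing the binary judgement on those perturbations estimates the weight each component carries in the decision.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 2000, 6

# Per-trial level perturbations of each component (e.g. tone levels in dB)
levels = rng.normal(0.0, 1.0, size=(n_trials, n_components))

# Simulated listener: component 0 dominates the decision, plus internal noise
true_weights = np.array([3.0, 1.5, 1.0, 0.5, 0.25, 0.1])
responses = (levels @ true_weights + rng.normal(0, 1, n_trials)) > 0

# Estimate weights by logistic regression (plain gradient ascent)
w = np.zeros(n_components)
for _ in range(500):
    p = 1 / (1 + np.exp(-(levels @ w)))
    w += 0.01 * levels.T @ (responses - p) / n_trials

# Normalized weights estimate the relative importance of each component
w_norm = w / w.sum()
print(np.round(w_norm, 2))
```

The normalized coefficients recover the relative pattern of the simulated listener's weights; with real data the same regression is run on the recorded stimulus levels and responses.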
Fri 2 Jul 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Active Perception: Sensorimotor Circuits As A Cortical Basis For Language
The brain’s action and perception mechanisms are functionally linked, but a hotly debated question is whether the perception and comprehension of stimuli depend on motor circuits. Brain language mechanisms are ideal for addressing this question. Neuroimaging studies demonstrate activity of articulatory motor areas during both speech perception and production, and magnetic stimulation of motor areas influences the recognition of speech sounds. The meaning of action and object words is manifest in activity in motor, sensory and multimodal cortices, and the necessary role these areas play in semantic processing is confirmed by semantic deficits in patients with lesions in these areas. Finally, activity in inferior frontal cortex (Broca's area) occurs during syntactic processing and lesions to this area cause agrammatic speech along with comprehension deficits for complex sentences and action sequences. These data demonstrate that comprehension depends on frontocentral action systems, thus supporting interdependence of action–perception circuits.
Mon 28 Jun 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
High frequency flicker captures attention – and so does low frequency flicker!
Does visual flicker capture attention, and if so, are faster flicker rates more effective? This study used speeded visual search involving horizontal and vertical target singletons amongst oblique distractors, all located equidistantly around fixation. Oriented elements were surrounded by a luminance-modulating annulus. In Experiment 1, distractor annuli all flickered at 1.3 or 12.1 Hz while the target temporal rate was one of 1.3, 2.7, 5.4, 8.1, 10.8 or 12.1 Hz. Set size was 4, 7 or 10. Search improved monotonically with increasing temporal frequency separation between target and distractor annuli, with parallel search performance at separations of >= 5 Hz. Results were symmetrical with respect to temporal frequency (low frequencies pop out from high, and vice versa). These results imply that temporal frequency is a salient and efficient segregation cue, in agreement with profiles of human temporal frequency filters. In Experiment 2, all annuli except one (at either a target or a distractor location) modulated at either 1.2 or 12.1 Hz. In addition to symmetric temporal frequency pop-out effects, we found a performance cost when the unique temporal frequency corresponded with a distractor location. The combination of pop-out and attentional costs indicates that both low and high flicker frequencies can capture attention. These visual search data show that low temporal frequencies are attentionally salient when embedded among high temporal frequencies. The data also map very neatly onto the known shapes of human visual temporal frequency filters, revealing a high-frequency bandpass channel and a broad low-pass channel at lower frequencies.
Fri 25 Jun 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Monitoring our actions and their outcomes
People are able to detect their own errors and enact appropriate behavioural adjustments, an ability of obvious adaptive significance. The neural mechanisms of error monitoring have been widely studied, but fundamental questions remain regarding the information we use to detect our errors—i.e., how we could know when we’ve made a mistake—and how this information processing is reflected in neuroimaging measures—i.e., whether these measures capture precursors to error detection such as conflict monitoring, the error detection process itself, or subsequent reactions to a detected error. In my talk, I will discuss lines of research addressing each of these questions. The first line of research attempts to build computational models that are capable of explaining and predicting the time-course and amplitude of error-related neural activity. The second uses multivariate analysis techniques to estimate EEG activity on single-trials in relation to objective measures and subjective ratings of performance accuracy.
Mon 21 Jun 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
How nimble is saccade adaptation?
Saccades are often thought of as machine-like behaviors that move the eye from place to place as rapidly as possible and have their accuracy maintained by parametric feedback after the saccade is done (saccade adaptation). I will present results showing that saccades are not so machine-like: saccade latency is controlled by attention, and saccade adaptation is driven by complex predictive error signals, is sensitive to the visual characteristics of the target and may even take place in the dark. Thus, saccade adaptation is more like a general learning mechanism than a specialized servo system.
Fri 18 Jun 2010, 11h-12h30, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Different cognitive systems struggling for word order
I will present arguments indicating that the grammatical diversity observed among the world’s languages emerges from the struggle between individual cognitive systems trying to impose their preferred structure on human language. Evidence from language change, grammaticalization, stability of order, parsing advantages, and theoretical arguments indicates a syntactic preference for SVO. The reason for the prominence of SOV languages is less clear. I will present experiments aimed at establishing the cognitive bases of the two most common word orders in the world’s languages: SOV (Subject–Object–Verb) and SVO. In two gesture-production experiments and one gesture-comprehension experiment, I will show that SOV emerges as the preferred constituent configuration in participants whose native languages (Italian and Turkish) have different word orders. I will propose that improvised communication does not rely on the computational system of grammar. The results of a fourth experiment, in which participants comprehended strings of prosodically flat words in their native language, show that the computational system of grammar prefers the orthogonal Verb–Object orders.
Mon 7 Jun 2010, LPP meeting room, H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris (LPP seminar)
Mon 31 May 2010 (LPP seminar)
Functional neuroanatomy of perceptual timing and temporal expectations
We all have a sense of time. Yet there are no sensory receptors specifically dedicated for perceiving time. It is an almost uniquely intangible sensation: we cannot see time in the way that we see colour, shape or even location. So how is time represented in the brain? I will present a series of fMRI studies showing that (1) perceptual timing invariably activates structures traditionally implicated in motor function (SMA, basal ganglia) despite non-motor task goals and (2) the ability to use temporal information in order to make predictions about when an event is likely to occur in the future activates left intraparietal sulcus, an area often linked to motor intention. The functional significance of the neuroanatomical overlap between timing and motor function is, however, a question that remains to be answered.
Mon 17 May 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Sensorimotor Intelligence from Developmental Synchrony to Developmental Asynchrony, a Dynamical Systems Approach
We will present a developmental scenario covering the period before and after the sixth month of life, when a transition occurs in cognitive skills. During the early developmental period, before six months, infants show great sensitivity to the temporal structure of their sensorimotor activity (e.g. synchrony, contingency, rhythm), which they use to sense their own body (body image), to perceive their own actions (agency) and to structure their interactions with the environment (cross-modal integration). As has been suggested in developmental psychology, contingency detection could help to construct robust congruent sensorimotor patterns, such that temporal discrepancies could disrupt the perceptual experiences. We propose that the biological mechanism of spike timing-dependent synaptic plasticity, discovered to regulate the synchronization of neural dynamics at the millisecond scale in many parts of the central nervous system, could underlie some of the computational mechanisms for modeling such functional integration in sensorimotor networks: coherence or dissonance in the sensorimotor information flow then shapes the neural representations. Although not yet mature, these representations could correspond to a first stage of how the self relates to others. At six months, however, infants show impressive changes in almost all domains, marked by memory categorization and novelty. We propose that this developmental shift reflects the functional reorganization of the medial temporal lobe to operate progressively as a working memory, allowing the infant to start dealing with the unexpected.
Mon 12 April 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Verbal interaction, mutual attention and phonetic convergence
Endowing a conversational agent with the ability to communicate beyond words – that is, to manage the full set of perception-action loops (Thórisson 2002; Bailly, Elisei et al. 2008) involved in regulating the interaction – is a challenge that calls for renewing the current paradigms of speech technology. Here we focus on two modalities essential to face-to-face interaction: gaze and speech. We will describe and comment on the results of two original experiments carried out in our team: (a) an analysis of the spatio-temporal distribution of eye fixations and blinks of subjects placed in a mediated face-to-face interaction (Bailly, Elisei et al. 2007; Bailly, Elisei et al. 2008; Raidt 2008; Bailly, Raidt et al. to appear), showing the impact of the subjects' cognitive state and conversational role on their gaze-scanning strategies; and (b) an analysis of the vowel targets produced in a game of verbal dominoes (Arléo 1997; Bailly and Lelong submitted), showing the influence of the experimental conditions – face-to-face vs. telephone conversation – and of the interlocutors' shared experience on accommodation strategies (Giles, Coupland et al. 1991).
Arléo, A. (1997). Un jeu de dominos verbal: Trois p'tits chats, chapeau d'paille. Chants enfantins d'Europe. A. Arléo, A.-M. Despringre, J. Fribourg, E. Olivier and P. Panayi. Paris, L'Harmattan: 33-68.
Bailly, G., F. Elisei and S. Raidt (2007). Controlling the gaze of animated conversational agents during face-to-face interaction. Computer Game Conference, Lyon - France
Bailly, G., F. Elisei and S. Raidt (2008). "Boucles de perception-action et interaction face-à-face." Revue Française de Linguistique Appliquée XIII(2): 121-131.
Bailly, G. and A. Lelong (submitted). Speech dominoes and phonetic convergence. Interspeech, Tokyo
Bailly, G., S. Raidt and F. Elisei (to appear). "Gaze, conversational agents and face-to-face communication." Speech Communication - special issue on Speech and Face-to-Face Communication.
Giles, H., J. Coupland and N. Coupland (1991). Contexts of Accommodation: Developments. Cambridge, Cambridge University Press.
Raidt, S. (2008). Gaze and face-to-face communication between a human speaker and an embodied conversational agent. Mutual attention and multimodal deixis. PhD Thesis. GIPSA-Lab. Speech & Cognition dpt. Institut National Polytechnique Grenoble - France: 175 pages.
Thórisson, K. (2002). Natural turn-taking needs no manual: computational theory and model from perception to action. Multimodality in language and speech systems. B. Granström, D. House and I. Karlsson. Dordrecht, The Netherlands, Kluwer Academic: 173–207.
Mon 29 March 2010. LPP seminar
Enhanced reactivity to visual events in profoundly deaf individuals
A central question in the study of compensatory changes occurring as a consequence of long-term sensory deprivation concerns the neural basis underlying such crossmodal plasticity. In the case of profound deafness this is particularly relevant, as the extent and nature of brain reorganisation is a predictor of the time course of recovery following sensory re-afferentation with neuroprosthetic devices (e.g., cochlear implants). In this talk, I will discuss behavioural evidence showing how reactivity to visual events is enhanced in profoundly deaf individuals. In addition, I will review the existing literature on the neural basis of crossmodal plasticity in the deaf and present recent findings from my group concerning the dynamics of visual responses in this sensory-deprived population as measured by EEG. Our findings point to a functional model of crossmodal plasticity in profound deafness in which compensatory changes occur from very early stages of visual processing, possibly as a result of subcortical changes and recycling of neural circuits in the de-afferented auditory areas.
Mon 22 March 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Diabetes and its complications for postural stability: Review and hypotheses
Impaired postural control is among the complications associated with diabetes. In the literature (some 30 articles), the general observation is that people with diabetic neuropathy are systematically unstable compared to healthy controls. Neuropathy is usually greater distally than proximally, and more sensory than motor or autonomic. For these reasons, the question posed is whether distal sensory neuropathy is the direct cause of instability. Other hypotheses about the instability still have to be discussed, because the available evidence does not rule out diabetes per se, other neuropathies (central, autonomic, and motor), an inability to fully exploit optical and inertial information about posture, or body disorders caused by the neuropathy (e.g., reduction of the base of support). At a practical level, the instability of people with diabetic neuropathy is critical because their interactions with the environment are clearly diminished.
Mon 15 March 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The Perception for Action Control Theory (PACT), a perceptuo-motor theory of speech perception
A long-standing debate in the field of speech communication is whether speech perception involves auditory or multisensory representations and processing, independently of any procedural knowledge about the production of speech units, or whether, on the contrary, it is based on a recoding of the sensory input in terms of articulatory gestures, as posited in the Motor Theory of Speech Perception. The discovery of mirror neurons over the last 15 years has strongly renewed interest in motor theories. However, while these neurophysiological data clearly reinforce the plausibility of a role for motor properties in perception, they could, in our view, lead to incorrectly de-emphasising the role of perceptual shaping, which is crucial in speech communication. The Perception-for-Action-Control Theory (PACT) aims at defining a theoretical framework connecting, in a principled way, perceptual shaping and motor procedural knowledge in multisensory speech processing in the human brain. In this talk, the theory is presented in detail. I will describe how it fits with behavioural and linguistic data, concerning first vowel systems in human languages and second the perceptual organisation of the speech scene. Finally, a neuro-computational framework is presented in connection with recent data on the possible functional role of the motor system in speech perception.
Thu 11 March 2010, 19:30, Théâtre Traversière, 15 rue Traversière, 75012 Paris. JNA
Lectures – Concert (Journée Nationale de l'Audition 2010)
The Journée Nationale de l'Audition (national hearing awareness day) takes place on Thursday 11 March throughout France. On this occasion, the Audition team of the Laboratoire Psychologie de la Perception (LPP, CNRS / Université Paris Descartes / Ecole Normale Supérieure) is organising an evening of information and discussion about hearing, in collaboration with the ENT department of Beaujon hospital, the Laboratoire des Neurosciences Cognitives (LNC) and the Centre de Recherche de Neurobiologie-Neurophysiologie de Marseille (CRN2M).
On the programme: three public lectures, given by physicians and researchers who are all hearing specialists, followed by a concert combining classical music, jazz, film music and contemporary music.
Lecture programme
• "The ear and deafness uncovered", by Dr Diane LAZARD, member of the LNC and physician in the ENT department of Beaujon hospital.
• "Computational modelling of hearing", by Romain BRETTE, lecturer at the ENS and member of the LPP.
• "The dangers of personal music players (deafness and tinnitus)", by Yves CAZALS, research director at the CRN2M.
Concert given by the wind ensemble « La Renaissance », conducted by Denis LANCELIN, CNRS research engineer at the LPP.
Thursday 11 March 2010 at 19:30
Venue: Théâtre Traversière, 15 rue Traversière, 75012 Paris
To find out more
Visit the JNA association website: http://www.audition-infos.org/jna/association.php
Mon 1 March 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
On groups, patterns, shapes and objects: Towards a more integrative approach to understand the interrelationships between different perceptual processes
Perceptual grouping, figure-ground organization, shape detection and object recognition are all tasks the visual system can do. In real life outside the lab, these processes are supposed to operate as components of our visual system’s normal way of processing the incoming information in support of our visually guided everyday activities. In the lab, however, these tasks are frequently studied with different types of stimuli because specific experimental paradigms have been developed in different research traditions. As a result, progress in understanding their interactions has been limited so far. We have been developing a set of stimuli and tasks that is more suitable to address issues regarding the interplay between different component processes. Specifically, we have used Gabor arrays derived from outlines of everyday objects and similar shapes in a more systematic, long-term research program. I will present a brief overview of several lines of on-going research to illustrate the variety of experimental paradigms as well as the potential benefits of a more integrative approach. The ultimate goal of this line of work is to better understand the interplay between the different component processes in the visual system, linking low-, mid- and high-levels of processing.
Mon 15 February 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Learning to combine sense and experience for optimal perceptual judgments
Sensory information is uncertain. Two means of reducing uncertainty are integrating multiple estimates (e.g. vision and touch), and interpreting present information in light of prior knowledge (e.g. assuming a light from above to interpret shape-from-shading). Human adults can use such strategies to obtain the greatest possible (“optimal”) reduction in uncertainty. I will describe some recent studies of how these abilities develop in childhood. Overall, children’s performance differs markedly from adults’. Children do not integrate two cues to reduce uncertainty until after 8 years, whether across modalities or within a single modality. However, keeping cues separate enables younger children to make speed gains and discriminations about conflicting cues that are not available to adults. In interpreting shape-from-shading, children use similar “convexity” and “light-from-above” prior assumptions to adults, but give different weightings to these. The differences in integration behaviour suggest that developing perceptual systems are optimized for goals other than uncertainty reduction, while the differences in use of prior knowledge may reflect the differing time courses for acquiring statistics about different aspects of the visual world.
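The "optimal" reduction of uncertainty described above is standardly modeled as reliability-weighted (inverse-variance) cue combination: each cue is weighted by the inverse of its variance, and the combined estimate is less uncertain than either cue alone. A minimal sketch with made-up numbers (illustrative only, not data from the studies described):

```python
import numpy as np

def combine_cues(mu_a, sigma_a, mu_b, sigma_b):
    """Reliability-weighted (inverse-variance) combination of two cues.

    Each cue is a Gaussian estimate (mean, standard deviation); the
    combined estimate has the lowest variance of any weighted average."""
    w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)  # weight on cue A
    mu = w_a * mu_a + (1 - w_a) * mu_b
    sigma = np.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))
    return mu, sigma

# Example: vision estimates 10 cm (sd 1 cm), touch estimates 12 cm (sd 2 cm).
mu, sigma = combine_cues(10.0, 1.0, 12.0, 2.0)
# The combined estimate lies closer to the more reliable cue (vision),
# and its sd is lower than that of either single cue.
```

On this account, children who "do not integrate two cues" would fail to show the predicted reduction of sigma below their best single cue, even when both cues are available.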
Fri 5 February 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Early Identification and Intervention to Prevent Reading Failure
Early identification and intervention programs can prevent reading failure and reduce the incidence and severity of dyslexia. The results of an 8-year longitudinal study with approximately 950 children have shown that children at risk for reading difficulties can be detected at school entry and, if appropriate intervention is provided, most reading failure can be prevented. Children in Canada enter school at age 5. In one school district, North Vancouver, Canada, all of the children were screened during the first few months after school entry. A simple screening system, lasting 15-20 minutes and individually administered by teachers or other school personnel, was used. The results showed that 25% of the children with English as a first language (L1) and 51% of children with English as a second language (ESL) were detected as being at risk for reading difficulties. The screening in kindergarten consisted of tasks assessing phonological awareness, letter naming, syntactic awareness, and memory for language. The intervention in kindergarten and grade 1 consisted of a classroom-based program called Firm Foundations that stressed vocabulary, phonological awareness, and phonics. A reading comprehension training program, called Reading 44, was used in grade 2 and the later grades. In grade 7, at age 13, 1.5% of the L1 children and 2.1% of the ESL children were dyslexic. These rates are significantly lower than what is found in most jurisdictions. The children with English as a second language performed as well as, and in some cases better than, children who had English as a first language. In sum, children at risk for reading difficulties can be detected at school entry and, if appropriate remediation is provided, most reading failure can be prevented; the program was equally successful with L1 and ESL children.
Mon 1 February 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Mon 25 January 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
The effects of experience on early brain and language development
The first three years of life yield dramatic advances in language development. Yet the changes in language-relevant brain systems that precede, accompany or follow these achievements are not well understood. I will present a series of event-related potential studies in typically developing infants from 6 months to 2 years of age showing how the experience of learning language shapes the organization of brain activity linked to vocabulary development. Additionally, I will examine how language-specific, social, and domain-general cognitive processes and their development influence changes in the organization of brain activity for communicative functions.
Mon 11 January 2010, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Studies on speech and language processing: psycholinguistics and neuroimaging approaches
The talk will give an overview of my research on word recognition, second language acquisition and syntax. I will first present data suggesting that language-specific representations are computed during word recognition; then I will address the issue of second language acquisition, concentrating on the origin of age effects and on the cerebral correlates of second language learning. Lastly, I will present recent work on the neural network underlying sentence processing.
Fri 18 December 2009, 9:00-19:00, Amphithéâtre Dussane, Ecole Normale Supérieure, 45 rue d'Ulm, 75005 Paris, France. LPP seminar
Auditory temporal processing in normal and impaired ears
An auditory workshop at the Ecole Normale Supérieure, Dépt d'Etudes Cognitives, Paris, France.
(Organizers: C. Lorenzi & M. Ardoint; UMR CNRS-Paris Descartes-ENS Laboratoire Psychologie de la Perception)
The auditory sensory epithelium, the cochlea, converts acoustic sound waves into neural action potentials by breaking sounds up into frequency bands. This has led many theories of hearing and hearing impairment to focus on spatial (tonotopic) representations of sounds that emphasize the relative energy across frequency, i.e. spectral representations. There is, however, a large amount of temporal structure within the frequency bands that is encoded via neural phase-locking at the peripheral level, and preserved or recoded in the subcortical and cortical auditory pathways. Recent work suggests that the representation of this temporal information may be severely degraded following cochlear damage, offering novel explanations for the speech and music perception deficits experienced by most hearing-impaired listeners and cochlear implantees. The present workshop brings together a small number of auditory scientists investigating the nature of the temporal structure of sound, its use for perception, and the effects of cochlear damage on auditory temporal processing.
One way to analyze speech and non-speech sounds is to decompose the output of cochlear filters into two temporal features on different time scales: the temporal envelope (a slow amplitude modulation) and the temporal fine structure (a frequency-modulated carrier signal). This decomposition offers a theoretical framework that has triggered a large number of psychoacoustical, electrophysiological, audiological and brain-imaging investigations over the last decade. However, the independence of temporal-envelope and temporal-fine-structure processing is still a matter of discussion, and the exact role of each temporal feature in pitch and speech perception remains strongly debated. The goal of the present workshop is to review and discuss old and new data arguing for and against this theoretical framework and a role of temporal cues in pitch and speech perception for normal-hearing and hearing-impaired listeners.
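The envelope/fine-structure split described above is commonly computed from the analytic signal given by the Hilbert transform. A minimal sketch of this standard decomposition (not code from the workshop), assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(band_signal):
    """Split a band-limited signal into temporal envelope (slow amplitude
    modulation) and temporal fine structure (unit-amplitude carrier)."""
    analytic = hilbert(band_signal)   # analytic (complex-valued) signal
    envelope = np.abs(analytic)       # instantaneous amplitude
    tfs = np.cos(np.angle(analytic))  # instantaneous-phase carrier
    return envelope, tfs

# Example: a 1 kHz carrier amplitude-modulated at 4 Hz, 1 s at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 1000 * t)
env, tfs = envelope_and_tfs(x)
# env recovers the 4 Hz modulator, and env * tfs reconstructs x.
```

Note that env * tfs equals the real part of the analytic signal, i.e. the original band signal: the two features jointly carry all the information in the band, which is why their perceptual independence is an empirical rather than a mathematical question.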
Roy Patterson and Ian Winter (Cambridge Univ, CNBH, UK) will open the workshop by providing an historical overview of the present debate, initiated more than half a century ago. Michael Heinz (Purdue Univ, USA), Yves Cazals (INSERM, CNRS, Paul Cezanne Univ, Marseille, France), Samira Anderson (Northwestern University, Chicago, USA) and Ian Winter (Cambridge Univ, UK) will then present recent neurophysiological and computational-modelling results uncovering the peripheral and central auditory processes involved in the perception of simple and complex temporal envelope and temporal fine structure patterns, and the potential effects of cochlear lesions on such processes. Robert Shannon (HEI, Los Angeles, USA), Kathryn Hopkins (Cambridge Univ, UK), Deniz Baskent (Groningen Univ, Netherlands) and Stanley Sheft (Rush Univ Medical Center, Chicago, USA) will present psychophysical studies investigating the role of each temporal feature (temporal envelope and fine structure) in speech identification in quiet and in noise for normal-hearing and hearing-impaired listeners, cochlear implantees and listeners equipped with electro-acoustic rehabilitation devices. Fine structure may also be a major cue to pitch. Brian Moore (Cambridge Univ, UK), Sébastien Santurette (CAHR, Copenhagen, DK), Christophe Micheyl (Minnesota Univ, USA), Hedwig Gockel (CBU, MRC, UK) and Isabelle Peretz (Montreal & McGill Univ, Canada) will finally present recent psychoacoustical and neuroscientific studies investigating pitch perception in normal-hearing listeners and listeners with specific pitch-perception disorders.
Christine Petit (Institut Pasteur, Collège de France, INSERM, France), Romain Brette (ENS, Paris, France), Alain de Cheveigné, Daniel Pressnitzer, Trevor Agus and Christian Lorenzi (Paris Descartes Univ, ENS, CNRS, France) will chair the different sessions of this workshop.
Pierre Divenyi (VAMC, EBIRE, California, USA) and Ian Winter (Cambridge Univ, CNBH, UK) will conclude the workshop.
Location: Amphithéâtre Dussane, Ecole Normale Supérieure, 45 rue d'Ulm, 75005 Paris, France. How to reach the Ecole Normale Supérieure: by RER (stop at Luxembourg), metro (stop at Censier Daubenton) or bus (lines 21 or 27, stop at Feuillantines).
Date: Friday, the 18th of December 2009, 9a.m.-7p.m. All welcome (no registration fee)
Acknowledgments: This workshop is supported by the DUALPRO FP7 EC project and Neurelec-France.
Auditory temporal processing in normal and impaired ears
9 a.m.: General introduction
Ian Winter (Cambridge Univ, CNBH, UK) & Roy Patterson (Cambridge Univ, CNBH, UK), main moderators.
9:30 a.m.: First session: “Auditory neuroscience and computer modelling”
Moderators: Christine Petit (Institut Pasteur, Collège de France, INSERM, France) & Romain Brette (ENS, Paris, France)
9:30-10 a.m.: Michael Heinz (Purdue Univ, USA): Effects of sensorineural hearing loss on temporal fine-structure and envelope coding in the auditory nerve
10-10:30 a.m.: Ian Winter (Cambridge Univ, CNBH, UK): Where does all the fine-structure go? Tales from the cochlear nucleus
10:30-11 a.m.: Coffee break
11-11:30 a.m.: Yves Cazals (INSERM, CNRS, Paul Cezanne Univ, Marseille, France): Auditory CNS plasticity after unilateral deafness and electrical stimulation of the cochlea: a study in guinea pigs.
11:30 a.m.-12 p.m.: Samira Anderson (Northwestern University, Chicago, USA): Brainstem Correlates of Speech-in-Noise Perception
12-1:30 p.m.: Lunch
1:30 p.m.: Second session: Psychoacoustics: AM-FM perception & speech intelligibility
Moderators: Christian Lorenzi (Paris Descartes Univ, ENS, CNRS, France) & Trevor Agus (Paris Descartes Univ, ENS, CNRS, France)
1:30-2 p.m.: Stanley Sheft (Rush Univ Medical Center, Chicago, USA): Relationship Between Stochastic FM Discrimination and Speech Perception in the Elderly.
2-2:30 p.m.: Deniz Baskent (Groningen Univ, Netherlands): Speech perception with reduced frequency resolution as simulated with envelope processing
2:30-2:45 p.m.: Coffee break
2:45-3:15 p.m.: Robert Shannon (HEI, Los Angeles, USA): What implant research tells us about the relative importance of envelope and fine structure cues
3:15-3:45 p.m.: Kathryn Hopkins (Cambridge Univ, UK): The importance of temporal fine structure information in speech at different spectral regions for normal-hearing and hearing-impaired subjects
3:45- 4:15 p.m.: Coffee Break
4:15 p.m.: Third session: Psychoacoustics & cognitive neuroscience : Pitch perception
Moderators: Alain de Cheveigné (Paris Descartes Univ, ENS, CNRS, France) & Daniel Pressnitzer (Paris Descartes Univ, ENS, CNRS, France)
4:15-4:45 p.m.: Brian Moore (Cambridge Univ, UK): The role of TFS in pitch perception for tones with intermediate harmonic numbers
4:45-5:15 p.m.: Sébastien Santurette (CAHR, Copenhagen, DK): Importance of temporal fine structure information for the low pitch of high-frequency complex tones
5:15-5:30 p.m.: Coffee Break
5:30-6 p.m.: Christophe Micheyl (Minnesota Univ, USA): A critical review of recent evidence for a role of temporal fine structure in pitch perception
6-6:30 p.m.: Hedwig Gockel (CBU, MRC, UK): The combination of F0 information across spectral regions
6:30-6:45 p.m.: Coffee Break
6:45-7:15 p.m.: Isabelle Peretz (Montreal & McGill Univ, Canada): Abnormal connectivity in the auditory-frontal neural pathway in congenital pitch disorder
7:15-7:45 p.m.: Concluding comments
Pierre Divenyi (VAMC, EBIRE, California, USA): Final speculations: the kind of TFS needed for speech perception
Ian Winter (Cambridge Univ, CNBH, UK)
Mon 14 December 2009, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Multisensory Perceptual Narrowing in Early Development
Although conventional wisdom says that sensory, perceptual, and cognitive abilities generally improve with development, evidence also indicates that some perceptual abilities narrow during infancy. Until recently, this evidence has only indicated that narrowing is a unisensory phenomenon. For example, it has been found that whereas young infants can discriminate nonnative (i.e., foreign) speech contrasts, nonnative faces (i.e., monkey faces; faces of people from other races), and nonnative musical patterns (i.e., rhythms from other cultures), older infants no longer do. In this talk, I will review the results of our recent work with human infants and young monkeys showing that perceptual narrowing is actually a pan-sensory and, thus, general feature of perceptual development.
Mon 7 December 2009. LPP seminar
Studying brain-wide network properties: recent advances with MEG
In my talk I will present the communication-through-coherence (CTC) hypothesis, a theoretical model of local and brain-wide dynamic adjustments in network communication. Based on the CTC model, explicit hypotheses can be expressed and tested for cognitive processing in general. I will present our current work on high-density ECoG recordings during a visual attention task. Furthermore, I will present recent results from MEG recordings of long-range connectivity in a memory consolidation task.
Mon 30 November 2009. LPP seminar
Segmenting Salient Shapes
I will discuss the problem of detecting and segmenting salient objects in natural images. Humans are very good at this, and very fast. The most thoroughly studied case is rapid animal detection; however, the mechanisms underlying rapid detection remain poorly understood. Here I will report recent psychophysical results suggesting that the fastest mechanisms underlying animal detection in natural scenes use contour shape as a principal discriminative cue, while somewhat slower mechanisms integrate these rapidly computed shape cues with image texture cues. Detection continues to improve with increased stimulus exposure, suggesting progressive refinement of neural representations.
These results pose a challenge for computational vision algorithms, as the performance of current contour grouping algorithms falls short of human perception. Here I will present results of a Bayesian coarse-to-fine contour grouping algorithm in which approximate representations are computed rapidly and then refined over time. The algorithm outperforms single-scale algorithms and suggests one possible role for massive feedback projections from higher cortical areas in the object pathway to earlier visual areas.
Mon 19 October 2009, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Depth percepts from 'monocular' stimuli
Wed 14 October 2009, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
9h30 Welcome, Salle R229
9h40 Kevin's welcome to the new members of the LPP; presentation of the administrative and scientific staff, and of the 1st-year PhD students
9h55 Thérèse Collins (new member)
10h05 Judit Gervain (new member)
10h15 Pia Rama (new member)
10h25 Romain Brette (new member)
10h35 Coffee break, Salle R229
11h10-12h40 Marathon presentations 5 to 18 (Pascal M, Hughes Gethin, Christian L, Cyrille Rossant, Marianne B-R, Rana Eseilly, Ranka B, Karima Mersad, Andrei G, Martin Rolfs, Daniel P, Marion Cousineau, Arlette S, Bahia Guellaï)
12h40 Buffet lunch, Salle H432
14h00-15h50 Marathon presentations 19 to 36 (Thierry N, Rida You, Mark W, Remy Allard, Thomas Otto, Trevor Agus, Patrick C, Simon Barthelmé, Josiane B, Dan Goodman, Christel S, Victoria Medina, Kevin O'R, I-Fan Lin, Marion Coulon, Willy S, Louise Goyet, Florian W)
15h50 Coffee break, Salle R229
16h30 Video connection with Véronique Izard, Harvard (new member)
16h45-17h30 Marathon presentations 38 to 44 (Arielle Veenemans, Sylvie T, Liliane S-C, Adrien Chopin, Marie de Montalembert, Marine Ardoint, Jacqueline F)
17h30 Discussion session: Simone Bateman Novaes (CERSES, 7th floor), Ethics, research and society (in English)
18h00 Evening buffet reception, Salle H432
Mon 5 October 2009. LPP seminar
Retinotopic and non-retinotopic integration: When features go around the corner
Information processing in the human brain is highly distributed, i.e. the various features of an object are processed in different parts of the brain. One prediction of distributed processing is that features of one object can be bound to a different object. Using backward masking, transcranial magnetic stimulation (TMS) and EEG, we show how unconscious features can be rendered conscious at locations where they were not presented. The invisible features can be non-retinotopically integrated with other features across space and time. These misbindings of features are not errors of the visual system but part of a computational strategy to group features into objects. We will present a fascinating litmus test to determine whether a given visual paradigm is processed retinotopically or non-retinotopically.
Mon 28 September 2009, 11:00-12:30, LPP meeting room H432, 4th floor, Centre Biomédical des Saints Pères, 45 rue des Sts Pères, 75006 Paris. LPP seminar
Action and Desire: Modulations of Adaptive Decision-Making
Massive research efforts in cognitive neuroscience are now beginning to advance our understanding of the neural circuits that support cognitive control processes as involved in adaptive decision-making. Yet, the neural mechanisms underlying processes such as conflict detection, performance monitoring, learning from feedback, and reward-based decision-making, are still poorly understood. Moreover, we're only beginning to scratch the surface when it comes to studying changes in these neurocognitive processes as related to normal aging or Parkinson's disease. Here we will review the contributions of recent multi-method efforts, discussing recent studies and work in progress, and evaluating the potential advance yielded by such approaches.
jeu17Sep200917hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
View-based approaches to spatial representation in human vision
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
lun14Sep200911h-12h30Salle de Conférence 2e étage porte R229, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
On the relationship between motor and perceptual behaviour – a SDT framework
Starting with Goodale & Milner's (1992) neuropsychological observations, a large number of neuropsychological and psychophysical studies have documented a putative dissociation between perception and action. However, a closer inspection of this literature reveals a number of methodological and conceptual shortcomings. I shall present a series of experiments making use of a variety of psychophysical techniques designed to gauge the relationship between response times as well as saccade perturbations and observers' perceptual states, as assessed for unmasked and masked (metacontrast) stimuli via yes/no, temporal order judgment and anticipation response time paradigms. All these studies reveal a strong action-perceptual state correlation, indicating that motor and perceptual responses are based on a unique internal response. A one-path, two-decisions stochastic race model drawing on standard Signal Detection Theory provides a fair account of some of these data, hence overruling the necessity of a two-path model of visual processing.
mar14Juil200911h-12h30Salle de Conférence 2e étage porte R229, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
On the relationship between motor and perceptual behaviour – a SDT framework
lun06Juil200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Monaural and binaural temporal processing towards spatial hearing.
Among sensory systems, audition excels in its speed of processing. Psychophysical and neurophysiological correlates of this feature have been particularly well studied for sensitivity to interaural time differences. I will discuss the circuit in the mammalian brainstem that is thought to underlie this sensitivity.
The circuit first transduces the acoustic waveform into a temporal code in the cochlea and auditory nerve (AN), and enhances the temporal code in the bushy cells of the cochlear nucleus (CN), which converge from the left and right sides onto coincidence detectors in the medial superior olive (MSO). Here, the temporal code is transformed into a rate code, which is then relayed to higher structures such as the inferior colliculus (IC). Temporal delays are a critical feature of this circuit: external acoustic delays (ITDs) are compensated by “internal” delays in the central nervous system, so that the signals from the two ears arrive at MSO neurons in coincidence.
Our approach to this circuit is to devise broadband stimuli that reveal “pure” binaural temporal interactions, and to then compare the response of binaural neurons with responses of monaural neurons, via a coincidence analysis. I will point out difficulties with existing physiological binaural models and propose a new model.
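The delay-compensation logic described above can be illustrated with a toy simulation (a sketch only, not the speaker's model; all names, numbers, and the spike-generation scheme are illustrative assumptions): two identical spike trains offset by an external ITD are fed through a bank of candidate internal delays, and the internal delay that maximizes coincidence counts recovers the ITD, turning a temporal code into a rate code.

```python
import random

def coincidence_count(left, right, internal_delay, window=2e-5):
    """Count near-coincident spike pairs after an internal delay is
    applied to the right-ear train (MSO-style coincidence detection).
    Toy model: parameter values are illustrative, not from the talk."""
    shifted = [t - internal_delay for t in right]
    return sum(1 for t in left if any(abs(s - t) < window for s in shifted))

random.seed(0)
itd = 0.0005  # 500 microsecond external interaural time difference
base = sorted(random.uniform(0.0, 1.0) for _ in range(200))  # spike times (s)
left = base                       # left-ear spike train
right = [t + itd for t in base]   # right ear lags by the ITD (no jitter here)

# Sweep candidate internal delays in 50 microsecond steps; the delay that
# maximizes coincidences compensates the external ITD (a rate code for ITD).
delays = [d * 5e-5 for d in range(-20, 21)]
counts = [coincidence_count(left, right, d) for d in delays]
best = delays[counts.index(max(counts))]
print(f"best internal delay: {best * 1e6:.0f} microseconds")
```

With jitter-free trains the peak is exact; real AN spike trains are noisy, so the coincidence-rate tuning curve is correspondingly broader.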
lun29Juin200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Dynamics of population responses in visual cortex
The perception of visual stimuli is widely held to be supported through the activity of populations of neurons in visual cortex. Work in our laboratory seeks to record this population activity and to characterize its evolution in time. Our methods rely on optical imaging of voltage-sensitive dyes and on electrical imaging via multielectrode arrays. The results indicate that the visual cortex operates in a regime that depends on the strength of the visual stimulus. For large, high contrast stimuli, the cortex operates in a manner that emphasizes local computations, whereas for smaller or lower contrast stimuli the effect of lateral connections becomes predominant. In this interconnected regime, the population responses exhibit rich dynamics, with waves of activity that travel over 2-6 millimetres of cortex to influence distal locations. In the complete absence of a stimulus, these waves dominate, and are sufficient to explain the apparently erratic activity of local populations. These results indicate that two apparently contradictory views of visual cortex, one postulating computations that are entirely local and the other postulating strong lateral connectivity, are both correct. The cortex can operate in both regimes, and makes its choice of regime adaptively, based on the stimulus conditions.
ven26Juin20099h30-18hSalle de conférence R229 (2ème étage), Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisGDR Neurosciences Cognitives du Développement
Workshop organized jointly with the LSCP and Ghislaine Dehaene.
Friday 26 June 2009
Institut Pluridisciplinaire des Saints-Pères
45 rue des Saints-Pères
Salle de conférence R229 (2ème étage)
9h30-10h00 Hervé Glasel Anatomical asymmetries in the infant brain
10h00-10h30 Emmanuel Dupoux Speech perception by newborns: a NIRS study
10h30-11h00 Marianne Barbu-Roth What makes newborns walk?
11h30-12h15 Olivier Pascalis Development of Face Processing: what is really developing?
12h15-12h35 Marion Coulon and Bahia Guellai Face perception in newborns
12h35-13h05 Benoist Schaal Predisposed olfactory responsiveness in the newborn
14h30-15h00 Willy Serniclaes Some results on speech perception in adults and their implications for development
15h00-15h20 Andy Martin Learning phonemes with a pseudo-lexicon
15h20-15h40 Mélanie Havy Word learning and consonant/vowel asymmetries
15h40-16h10 Franck Ramus Developmental Dyslexia and Specific Language Impairment: Same or Different?
16h40-17h00 Karla Monzalvo Dyslexia
17h00-17h20 Belonia Gabalda Children's sense of ownership
17h20-17h40 Rana Esseily Role of the agent and of the context (video vs. live) in an observational learning task in 10-month-old infants
lun22Juin200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
The necessary plasticity of the adult listener
The adult listener is able to recognise tens of thousands of spoken words with ease, and can do so in spite of considerable acoustic-phonetic variability in the speech signal. I will argue that listeners can achieve this in part because their speech-recognition systems are flexible; listeners can tune in to the specific qualities of the current signal, and thus can adapt to different listening conditions. Evidence from two lines of research will be presented. The first explores plasticity in speech-sound categories. Perceptual learning experiments will be discussed which show that listeners use lexical knowledge to retune their sound categories, that this retuning generalizes to new words, that it is thorough and stable over time, and that it can be talker-specific. Lexically-guided retuning even seems to take place during second-language listening, when a bilingual watches a foreign film while reading subtitles in the language of the film. The second line of research explores plasticity in the mapping of the speech signal onto the mental lexicon. Evidence will be presented showing how native and non-native speakers of English learn to improve recognition of Italian-accented English words. Finally, data from an eye-tracking experiment will show how listeners adapt to distorted speech (i.e., speech masked by radio noise) in order to optimize spoken-word understanding. Both lines of research reveal properties of the speech-recognition system, most importantly the ways in which this system is flexible.
ven29Mai20099h-16h30Institut des Neurosciences de Montpellier / Inserm-U583 Hôpital Saint Eloi - Bâtiment INM, 80, rue Augustin Fliche, 34091 MontpellierGRAEC
This meeting will be chaired by Pr Jean-Luc Puel (Institut des Neurosciences de Montpellier / Inserm-U583).
The speakers will be:
- Pr Quentin Summerfield (University of York, UK), specialist in psychoacoustics and audiology: Clinical effectiveness and cost-effectiveness of bilateral cochlear implantation for children: spatial listening and quality of life.
- Pr Pascal Belin (University of Glasgow, UK), specialist in cognitive neuroscience of auditory and voice processing: "I hear voices": cerebral bases of vocal cognition.
- Anne-Lise Giraud (Ecole Normale Supérieure, Paris-Ulm), specialist in neuroimaging and brain modelling: Audio-visual speech processing: MEG and fMRI studies.
lun25Mai200910h45-12h15Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
The origins of joint action and cultural cognition
Human beings build skyscrapers, play in symphony orchestras, organize conferences, and show each other their vacation photos. We have thousands of different languages, we participate in countless cultural rituals and practices, and we attach great importance to fads and fashions. Our nearest non-human relatives, chimpanzees, do none of these things, although their social-cognitive skills are surprisingly complex. Why not? There is something special about human social cognition – what is it? We propose that this ‘something special’ is shared intentionality: the skills and motivation to share goals, intentions, and other psychological states with others. In humans, these skills and motivation are already present in infancy. I review a series of studies from our lab on communication, imitation, joint attention, and joint action, comparing the more individualistic versions of these skills in chimpanzees with their more collective counterparts in 1-year-old human infants.
lun18Mai200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Overdetermination of olfactory cognition in a context of strong selective constraint (!!! The talk will be held in ENGLISH !!!)
The role of olfaction is invariably salient at the earliest stages of mammalian development, when organisms must accomplish tasks whose speed of execution guarantees their viability. I will first describe the co-evolution of olfactory signalling systems in females and of the means by which newborns detect and analyse these signals. These maternal chemosignals carry nested meanings, some invariant (pheromones), others variable at the local or individual scale (reflecting the mother's diet, stress, or physiological state). Newborns are able to segment the information carried in these complex secretions, in particular in the milk of females of their own species.
Three cognitive mechanisms can operate to optimize neonatal responsiveness to the odorant compounds carried in these secretions: (i) anticipatory learning in utero, (ii) immediate postnatal learning, and (iii) automatic responding independent of any prior induction by experience. The operation of these three processes will be analysed in two mammalian species, the rabbit and the human, which express extreme modes of the mother-infant relationship. We will see that these different cognitive routes operate in the newborns of both species, where they act redundantly to optimize the first milk intakes, which guarantee the transfer of energy and immunity necessary for immediate survival, and the first stable learning episodes, which guarantee engagement on a normal developmental trajectory.
lun11Mai200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
The body (schema) in space: multisensory interactions of different body parts
The brain contains neurons which respond to a touch to the skin, but also to a visual stimulus near the same part of the skin. They are thus specialized to process stimuli in the peripersonal space, i.e. the space immediately surrounding the body. Such neurons have been demonstrated almost exclusively for the hands and face. Consequently, their functional role has been suggested to be related to self-feeding (hand-mouth actions), but also to self-defense (responding to possible impact on the body). Importantly, for a neuron to be able to match a tactile location with the location of a visual stimulus (to maintain an aligned tactile-visual receptive field), it must take into account that body posture is variable. As we use our hands for almost all interactions with our environment, and the hand must be heavily coordinated with vision, it is by no means clear that such multisensory principles as found for the hands also hold for other body parts. Nevertheless, the neuroscientific literature has often generalized these findings to indicate the existence of a body schema. I present a series of experiments that use several experimental paradigms to investigate interactions between the hands and the feet. I show that the posture of one type of limb influences the processing of stimuli to the other type of limb. Results suggest that the brain automatically (that is, independent of the task requirements) transforms tactile stimuli into external coordinates. It takes the posture of the whole body into account for this processing, leading to interactions between different body parts in tactile perception. However, interactions also occur in the multisensory domain (i.e. between the peripersonal spaces of different types of limbs), suggesting that the peripersonal space is specially processed for all body parts.
lun27Avr200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Controlling navigation via optic flow: insect autopilots
With only one million neurons in their brains and 3000 pixels in their eyes, houseflies achieve autonomous navigation at an impressive 700 body-lengths per second. These highly objectionable creatures humble us all the time by achieving just what current robots cannot: autonomous take-off, dynamic stabilization, 3D navigation, ground avoidance, collision avoidance, tracking, mating on the wing, and autonomous landing on the ceiling after mating. The last seven decades have provided evidence that flying insects achieve these feats by processing the optic flow (OF). Each pixel of the fly compound eye consists of two photoreceptor cells making up the colour channel and six cells participating in the motion detection channel. Six cells from nearby facets have coaxial visual fields and add their signals onto a common pair of second-order neurons, thereby improving the signal-to-noise ratio without impairing resolution. Each cell is equipped with a private pupillary system that extends its dynamic range. We modelled the visuomotor control system that provides flying insects with a means of autonomous guidance at close range. Our explicit control schemes are based on the concept of the OF regulator, that is, a feedback system that controls either the lift, the forward thrust or the lateral thrust. The value of this control scheme is that it explains how insects may navigate on the sole basis of optic flow cues, without measuring any distances or speeds: how flies or bees take off and land, follow the terrain, avoid the lateral walls in a corridor and control their groundspeed automatically. Our control schemes were simulated and/or implemented onboard two types of aerial robots, a microhelicopter and a hovercraft, which behaved much like flying insects when placed in similar environments.
These robots were equipped with opto-electronic OF sensors inspired by our electrophysiological findings on houseflies’ motion sensitive neurons, which we previously studied by associating single neuron recordings with single photoreceptor illumination within a single facet. The autopilots we arrived at are simple; they require no conventional avionic sensors such as range finders, velocimeters, variometers or GPS receivers. They are consistent with the neural repertoire of flying insects and meet the low avionic payload requirements of tomorrow’s micro-air and space vehicles.
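The core of the OF-regulator idea, that neither distance nor speed need ever be measured separately, can be sketched as a minimal feedback loop (a hedged illustration only; the setpoint, gain, and update rule are assumptions, not the lab's actual controller):

```python
# Toy ventral optic-flow regulator (illustrative sketch, not the authors'
# controller; OMEGA_SET, GAIN and the update rule are assumed values).
# The agent senses only one quantity, the ventral optic flow
# omega = groundspeed / height, and adjusts its vertical speed to hold
# omega at a setpoint. No range finder or velocimeter is in the loop.
OMEGA_SET = 1.0   # desired ventral optic flow (rad/s)
GAIN = 0.5        # proportional feedback gain
DT = 0.01         # integration step (s)

def cruise_height(groundspeed, h0=10.0, steps=2000):
    """Euler simulation: height settles where groundspeed / h == OMEGA_SET."""
    h = h0
    for _ in range(steps):
        omega = groundspeed / h      # the only sensed variable
        error = omega - OMEGA_SET
        h += GAIN * error * h * DT   # flow too high -> climb; too low -> sink
    return h

# Consequence of the scheme: halving the ground speed halves the height at
# which the regulator settles, i.e. terrain-following and smooth landing
# emerge without any explicit distance measurement.
for v in (2.0, 1.0):
    print(f"groundspeed {v}: settles at height {cruise_height(v):.3f}")
```

The equilibrium is h = v / OMEGA_SET, so any change in forward speed is automatically converted into a proportional change in ground clearance, which is the qualitative behaviour the abstract describes for take-off, terrain following and landing.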
Related papers from the lab:
Franceschini, N., Ruffier, F., Serres, J., Viollet, S. (2009) Optic flow based visual guidance: from flying insects to miniature aerial vehicles. In: Aerial Vehicles, Chapt. 35 (T.M. Lam, ed.), Vienna: InTech, pp. 747-770 (online at http://www.intechweb.org/books.php)
Franceschini, N., Ruffier, F., Serres, J. (2007) A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology 17: 329-335
Franceschini, N., Pichon, J.M., Blanes, C. (1992) From insect vision to robot vision. Phil. Trans. Roy. Soc. B 337: 283-294
Franceschini, N., Riehle, A., Le Nestour, M. (1989) Directionally selective motion detection by insect neurons. In: Facets of Vision, Chapt. 17 (D.G. Stavenga & R.C. Hardie, eds), Berlin: Springer, pp. 360-390
Franceschini, N. (1975) Sampling of the visual environment by the compound eye of the fly: fundamentals and applications. In: Photoreceptor Optics (A. Snyder & R. Menzel, eds.), Berlin: Springer, pp. 98-125
lun06Avr200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
!! Talk postponed to 27 April !!
jeu02Avr200910h-13hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Discover the science of vision.
Meet our researchers. Visit our facilities.
Industry: come and develop partnerships with us.
Students: we offer internships (Licence, M1, M2), come and talk to us!
The Vision team of the Laboratoire Psychologie is organizing an open house on Thursday 2 April from 10h to 13h at 45 Rue des Saints Pères, 4th floor, room H432.
lun30Mar200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
lun23Mar200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Brain substrates of language functions: advances and limitations of neuroimaging
The explosion of new brain imaging techniques and methods has revolutionized our knowledge of the brain substrates of language functions, and even the earlier clinical anatomical method in aphasic patients can now be renewed by complementing the study of lesion topography with that of possibly compensatory processes in spared territories. In addition, brain imaging has made it possible to explore the biological substrates of language disorders apparently not associated with any visible lesion, such as developmental disorders (e.g. dyslexia). In spite of these spectacular advances, we are still very far from approaching the fine-grained neurophysiology of language processes and from the ability to study language in close-to-natural conditions. Some limitations are technological: spatial resolution is limited even with high-field MRI magnets, and no technique can resolve space and time at the desirable level. It follows that meta-analyses of literature data frequently end up with a pattern of great overlap of diverse functions in relatively narrow regions, e.g. 'Broca's area'. An even more challenging difficulty is linked to inter-subject variability in the signals measured in a given experiment. Yet another difficulty is linked to the distribution, over large and distant territories, of fleeting events related to the processes at stake; studying this connectivity in real time remains a difficult challenge.
Beyond these technical difficulties, there are also limitations linked to our ignorance of the 'neural code' that might be active in key regions for a given language-related process; indeed, current methods only allow us to record weak signals from brain activity that are somewhat modulated by experimental conditions. It follows that one important goal for cognitive neuroscience is to study systematically the relationships between variations in cognitive performance and variations in brain activity.
Rather than being a source of discouragement, these current limitations allow one to sketch out (probably) endless future research programs.
lun09Mar200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Reaching activity in the medial posterior parietal cortex
Recording from single neurons in the macaque brain over the last decades has provided a useful tool for gaining insight into how the brain organises complex processes. The act of prehension is one of these processes. It requires an analysis of visual information, and the transformation of the coordinates of graspable objects from retinal to spatial ones, in order to allow correct visuo-motor coordination during reach-to-grasp actions. The dorsal visual stream, involving the superior and inferior parietal lobules, is strongly involved in object location and in the analysis of visual information needed for the visual guidance of reach-to-grasp actions (Goodale and Milner, 1992). The superior parietal lobule in particular is involved in the on-line processing of visual information for the purpose of directing the hand towards objects to be reached and grasped. Area V6A is a visuomotor area of the superior parietal lobule that shows interesting functional properties in this respect. Single V6A cells are modulated by visual stimulation as well as by arm-reaching movements, and use a complex frame of reference that encompasses both spatial and retinotopic coordinates. V6A cells are involved in the neural computations needed to guide the entire act of prehension, from the transport of the hand towards the object in peripersonal space, to the orientation of the hand to align it with the object, and to the preshaping of the hand to acquire the object. According to these findings, reaching and grasping would be processed by the same population of neurons, an idea that sheds new light on the way the brain plans and executes visually guided actions.
lun23Fév200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Predicting the future without knowing the past: Infants' sensitivity to probabilities and statistical distributions
Rational agents should integrate probabilities into their predictions about uncertain future events. However, whether humans can do this, and if so, how this ability originates, are controversial issues. I will present evidence suggesting that 12-month-olds have rational expectations about the future based on estimations of event possibilities, without needing to sample past experiences. I will also show that such natural expectations influence preschoolers' reaction times (RTs), while frequencies modify motor responses, but not overt judgments, only after four years of age. I will argue that at the onset of human decision processes, the mind contains an intuition of elementary probability that cannot be reduced to the encountered frequency of events or to elementary heuristics.
lun09Fév200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
When I was a graduate student, I had a long argument with Denis Pelli about using letters to study vision. I said that gratings were better. Contrary evidence is the citation count for our paper about letter identification. Oh well. When I got interested in visual crowding, which was traditionally examined with letters, I decided it was time to vindicate gratings. Like letters, the orientation of a little patch of grating becomes hard to identify when you don't look straight at it and other gratings are nearby. What you tend to see is a semi-homogeneous texture, in which all the little gratings are more-or-less parallel. This phenomenon seemed to suggest that we are particularly insensitive to orientation variance. I got so wound up trying to document this insensitivity that I have all but forgotten about crowding. I will briefly describe three ongoing projects related to my new obsession. Well, the first is pretty much finished. I had observers discriminate between various amounts of orientation variance in textures composed of little gratings. Turns out we're not so insensitive after all. Results of another experiment, using the visual search paradigm, suggest that we're actually hard-wired to detect orientation variance: plaids "pop out" from gratings, but not vice versa. The third project has by far the most surprising results. When observers are asked to ignore orientation variance and just report the average orientation in a texture made of gratings, their estimates become more accurate when the variance increases. Specifically, they lose their bias for obliquity. My presentation will end when I explain why this final result is so hard to reconcile with simple Bayesian models of orientation bias.
lun26Jan2009Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
jeu22Jan200911h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Rules and Perceptual Primitives: Early Mechanisms of Language Acquisition
Identifying mechanisms underlying the acquisition of linguistic structure has been one of the central goals of language acquisition theory. This talk will focus on two mechanisms, rule learning (Marcus et al. 1999) and perceptual primitives (Endress et al. 2005, 2007), reporting a series of studies that investigate the developmental trajectory of the two mechanisms from birth to about 7 months of age. The results suggest that both mechanisms might be present during the first year of life, specifically contributing to the acquisition of different structural patterns.
lun12Jan200911h-12h30Salle de conférences au 2ème étage (R229) !!!!!, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
Modulation of activity in the auditory pathway by the auditory cortex.
It is generally acknowledged that the act of listening is more than a process of passively hearing sounds. We bring to listening our experience of the world and our own predispositions, and we are able to concentrate our listening efforts to select specific information from the incoming sounds. It is often assumed that these top-down aspects of listening are mediated via the descending auditory nervous system. Alongside the complex ascending auditory system there is an equally complex descending system that provides the framework through which auditory centres, all the way up to the auditory cortex, are able to modulate the flow of incoming information. Certainly, attention considerably modifies the amount of activation of the auditory cortex and must thereby also alter the feedback to the lower centres. We have begun to study the function of these descending pathways by using a technique that allows us to reversibly inactivate the cortex in an animal model. We have shown widespread influences of the cortical output in the auditory thalamus and auditory midbrain. Even using a rather crude cortical inactivation technique, we have revealed a range of effects, some of which appear to selectively affect inputs to these nuclei that arrive from the two ears. As a consequence, we have shown that at least some of the sensitivity to binaural cues is under cortical control. It is not too much of a stretch of the imagination to suggest that such mechanisms might be involved in spatial attention to sound, or at the very least in plasticity of localization cues. More recently we have been studying the effect that inactivation of the cortex has on the functioning of the cochlea. We have found changes in the threshold and amplitude of the cochlear potentials. While this is most likely the result of the cortex modulating the activity of brainstem reflex circuits, our results are currently not entirely consistent with this hypothesis.
ven12Déc200813h-14hSalle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
The role of attention in perceptual learning
The role of attention in perceptual learning is a controversial topic in psychology. While some studies have shown that perceptual learning cannot occur without attention, others have claimed the opposite. To understand the role of attention in perceptual learning, a series of psychophysical and functional magnetic resonance imaging (fMRI) experiments was conducted. First, the effects and processing of task-irrelevant visual signals were examined. The behavioral results demonstrate that subthreshold task-irrelevant visual signals lead to a stronger disturbance in performance on a visual task than suprathreshold signals. The results of the parallel fMRI experiments demonstrate that with subthreshold task-irrelevant visual signals, activation in the visual cortex was higher, but activation in the dorsolateral prefrontal cortex (DLPFC), a region known to play a significant role in inhibitory control of irrelevant signals, was lower than with suprathreshold signals. These results suggest that subthreshold irrelevant signals are not subject to effective inhibitory control. Second, the effect of the strength of task-irrelevant signals on task-irrelevant perceptual learning was investigated. Strong and weak task-irrelevant signals were used within subjects in psychophysical experiments. The results indicated that task-irrelevant learning occurred only when the irrelevant features were weak. Importantly, while previous studies that used only suprathreshold stimuli as task-irrelevant features demonstrated no task-irrelevant perceptual learning, previous studies that used only weak (near-threshold) task-irrelevant stimuli found the opposite result. The results of the studies described here suggest that the absence or presence of task-irrelevant learning depends upon the strength of the task-irrelevant stimuli.
Third, a model of the role of attention in perceptual learning based on the results of these psychophysical and fMRI experiments was developed. The model predicts that perceptual learning will not occur for suprathreshold task-irrelevant features because these features can be detected and effectively inhibited by the attentional system within DLPFC. This model differs from earlier models, which suggest that attention to a feature is necessary for that feature to be learned.
lun01Déc200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Visual averaging of color and motion
I shall present some new illusions of color, developed with Rob van Lier and Mark Vergeer. We show that a single adapting pattern can give afterimages of different colors at the same location, depending on black contours within the white test field. This shows that the colors of an afterimage are spatially averaged within but not across contours, and this can be generalised to real colors as well. I shall also demonstrate "zigzag motion", in which a random-dot field makes small jumps to the right alternating with downward jumps 10 times as large. The perceived direction of this motion varies with viewing distance, appearing to be to the right when viewed from afar and downward when viewed from close up. The resulting motion aftereffects are often NOT opposite to the perceived direction of the adapting motion. The results show separate adaptation of fast and slow channels, and indicate a strong preference for some velocities over others.
lun27Oct200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Hearing for Dummies
Ideally, the choice of a hearing aid and its tuning parameters should be based on a detailed assessment of a patient's hearing. In clinical practice, very little data is collected and considerable emphasis is placed on the audiogram alone. If we were to collect more data, how should we best spend the limited available clinical time? What would be the best tests? Would the additional knowledge make any difference to the hearing aid prescription? To address these questions, we have been subjecting a number of hearing impaired listeners to a wide range of tests. The tests have been devised to be easy to administer under automatic computer control and easy for the patient to use. The principal finding is that there is an unexpected variation among patients in the patterns of impairment that are revealed by the tests. These patterns are, in themselves, suggestive of the corrections that might be required in a suitable hearing aid. However, the data are currently being used to help develop computer models of the patients' hearing with a view to creating 'hearing dummies' to be used in the absence of the patient to find the best hearing aid strategy and the optimum settings. This talk will review the tests used, show some of the patterns of deficit revealed by the tests and show how a 'hearing dummy' can replicate these hearing deficits.
lun20Oct200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
The place of perceptual organization in a world of mechanistic theories
I will summarize my research on grouping and show how it suggests a non-reductionistic approach to perceptual organization that may produce reductionistic outcomes. A possible consequence of this approach may be to carve out a well-defined role for middle-level vision.
mar07Oct200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
The Neurocomputational Basis of Face Recognition
Whereas people can readily describe the differences between two highly similar objects (such as birds on the same page in a bird guide), they are at a loss in describing the difference between the faces of Tom Cruise and John Travolta. A remarkably simple account, based on early cortical (i.e., Gabor-jet) spatial filtering, may be able to explain the ineffability of faces and a wide variety of other phenomena distinguishing face from object recognition, such as why the recognition of faces, but not objects, is so severely disrupted by contrast negation (as when viewing a photographic negative) and orientation inversion, why faces, but not objects, are represented “configurally” (and what could “configural” possibly mean in neurocomputational terms?), and the nature of the deficit in prosopagnosia whereby the afflicted individual complains not that faces look blurry or otherwise degraded but that they all look the same.
lun29Sep200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
The origins of face processing
The large majority of studies on newborns’ face processing have focused on newborns’ preference for face over non-face images, using to this end highly schematized black-and-white stimuli in which the resemblance to real faces was based on the presence of black spots representing the internal facial features. Conversely, newborns’ capacity to recognize the image of a real unfamiliar individual face has received only limited attention from researchers. Our purpose was to investigate the visual information that newborn infants rely on to perform face recognition, shedding light on the nature of the newborn face representation. Evidence will be presented showing that the limited resolution capacities of the visual system at birth do not prevent few-day-old infants from detecting and discriminating the information embedded in the inner portion of a face. In particular, within the range of visual spatial frequencies visible to a newborn baby, the extreme low spatial frequency range (from 0 to 0.5 cpd) appears useful for the face recognition process at birth. Also, newborns are able to derive a representation of an unfamiliar face that is resilient to partial occlusion and to a certain degree of rotation in depth. Finally, dynamic motion information plays a crucial role in promoting newborns’ face recognition, probably aiding the derivation of a tridimensional structure of the face. By three months of age, infants are sensitive to holistic face information, as can be shown using the face composite paradigm.
lun15Sep200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
ERP components in infants – Neural correlates of word stress processing, cross-modal word priming, and online lexical-semantic learning
Our research group investigates language-related ERP components in infants and young children, which can be utilised as correlates of both the development of early language skills and the maturation of specific brain mechanisms involved in early language acquisition. The comparison of the ERPs of different age groups or of those of age-matched children with different behavioural language development moreover provides information about the relation between specific perceptual or neurocognitive processes and the successive progression of children’s behavioural language capabilities.
The acquisition of words, i.e., learning that an arbitrary acoustic-phonological pattern refers to a certain meaning, is the basic step in the process of language learning. In my talk I will focus on three topics of word processing in infancy: (1) the discrimination of word stress that is assumed to facilitate word form acquisition by cueing word boundaries, (2) the development of semantic integration mechanisms that might be involved in the modification of existing lexical-semantic representations or the establishment of new representations, and (3) the learning of new word forms and their mappings onto a certain meaning within a single experimental session.
mar08Juil2008LPP seminarShow details
Missed Sights : Consequences for Perceptual Development
Newborns can see but it takes many years for vision to reach adult levels. We have evaluated the contribution of early visual experience to the later development by studying children born with cataracts that initially blocked visual input. Longitudinal studies indicate that some aspects of basic vision normalize after treatment by improving faster than normal to make up for an initial deficit. For other aspects of vision, there are permanent deficits. Surprisingly, there is greater plasticity for some aspects of higher-level vision. These patterns will be illustrated with results for the processing of motion, objects, and faces. The implications for understanding developmental mechanisms will be discussed.
lun30Juin200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminar
lun23Juin200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Dissociable neural system for processing of location, object, and verbal information
lun16Juin200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
On Broca, brain, and binding
lun02Juin200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Physical Reasoning in Infancy
lun26Mai200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Willow is a joint effort between the Ecole Normale Supérieure and INRIA, focusing on the representational issues involved in visual scene understanding. Concretely, our objective is to develop geometric, physical, and statistical models for all components of the image interpretation process, including illumination, materials, objects, scenes, and human activities. These models will be used to tackle fundamental scientific challenges such as three-dimensional (3D) object and scene modeling, analysis, and retrieval; human activity capture and classification; and category-level object and scene recognition. They will also support applications with high scientific, societal, and/or economic impact in domains such as quantitative image analysis in science and engineering; film post-production and special effects; and video annotation, interpretation, and retrieval. Machine learning is a key part of our effort, with a balance of practical work in support of computer vision applications, methodological research aimed at developing effective algorithms and architectures, and foundational work in learning theory. In this talk I will present an overview of Willow’s research activities and several recent results in 3D photography and markerless motion capture, category-level object recognition, video interpretation, and machine learning. I will conclude with a brief discussion of our ongoing and new projects and partnerships.
lun14Avr200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Mechanisms of letter identification and reading
I will provide evidence that line terminations are the areas of letters used most effectively to identify letters; that the upper halves of the fourth, third, and first letters are the areas of words used most effectively to read; and that neither letters nor words are processed in parallel.
lun07Avr200811h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Interpolation and Approximation in Visual Completion
Empirical evidence suggests that visual completion is mediated by active processes leading to the generation of missing parts. Such activity can be modelled as either interpolation or approximation. Interpolation models cannot account for completion phenomena involving distortions of image fragments. These distortions occur in limiting cases of completion and are compatible with the idea that shapes based on fragmentary evidence are the result of approximation. Recent data and theoretical implications will be discussed.
lun10Déc200711h-12h30Salle de réunion du LPP, H432, 4ème étage, Centre Biomédical des Saints Pères 45 rue des Sts Pères, 75006 ParisLPP seminarShow details
Scanning the infant brain - tracking first words
It has long been known that word learning under natural circumstances is characterised by a slow start followed by a steeply rising curve. Previous studies using the head turn procedure (HT) have shown that both English and French infants show word form recognition by 11 months but Welsh infants show the effect only at 12 months. Here I summarise results from ERP studies of single word form recognition in English and Welsh infants (9-12 months) and word meaning processing in English infants (16 months). First, we replicated HT results with ERPs by showing that 11-month-old English infants detect the difference between untrained familiar and rare words within 250 ms of stimulus onset. Second, we found localised signs of a familiarity effect at 9 months and a main effect of familiarity already at 10 months with ERPs when studying English infants cross-sectionally between 9 and 12 months. HT results, furthermore, were consistent and correlated significantly with ERPs between 9 and 11 months. Remarkably, word familiarity effects were found in neither HT nor ERPs at 12 months in English infants, just one month after they appeared in their most robust form. In Welsh infants we failed to obtain a significant word form recognition effect at any age in either the HT or the ERP procedure, although localized ERP effects were seen at 11 months. Finally, using a picture priming paradigm, we have found evidence for adult-like graded semantic relatedness effects in 16-month-old infants based on modulation of the classical N400 component of ERPs. Our results contribute to a developmental time-line of untrained word recognition and comprehension.
mer31Oct20079h30-11hSalle de réunion du LPP, H432, 4ème étage, UFR Biomédicale, 45 rue des Saints-PèresLPP seminarShow details
In-between fixation and movement: Toward a model of microsaccade generation
University of Potsdam
Department of Psychology
Microsaccades are a distinct component of the small eye movements that constitute fixation. They contribute to fundamental visual and motoric processes, including the control of fixation position and the prevention of visual fading. However, their implementation in the oculomotor system remains unknown. I will introduce a conceptual model in which microsaccades are the result of fixation-related activity in a motor map coding for both fixation and saccades. This model provides an appropriate framework for understanding the dynamics of microsaccade behavior in a variety of tasks.
ven19Oct200711h-12h30Salle de réunion du LPP, H432, 4ème étage, UFR Biomédicale, 45 rue des Saints-PèresLPP seminarShow details
The maintenance of binocular rivalry depends on temporally coarse form processing
Presenting the eyes with spatially mismatched images causes a phenomenon known as binocular rivalry—a stochastic fluctuation of awareness whereby each eye’s image alternately determines perception. Binocular rivalry is used to study interocular conflict resolution and the formation of conscious awareness from retinal images. Although the spatial determinants of rivalry have been well-characterized, the temporal determinants are still largely unstudied. We show that conflicting images do not need to be presented continuously or simultaneously to elicit binocular rivalry. Brief stimulus presentations separated by large intervals up to 350 ms still elicit rivalry, even when the conflicting images are temporally non-overlapping. This continuation of rivalry in the absence of direct spatial conflict reveals that a temporally sluggish process underlies rivalry. This process is further characterized by showing that it is independent of low-level information such as interocular timing differences, contrast-reversals, stimulus energy, and eye-of-origin information. This suggests the temporal factors maintaining rivalry relate more to higher-level form information. Systematically comparing the role of form and motion reveals that this temporal limit is determined by form conflict rather than motion conflict. Together, our findings demonstrate that binocular conflict resolution depends on temporally coarse form-based processing, possibly originating in the ventral visual pathway.
lun15Oct200711h-12h30Salle de réunion du LPP, H432, 4ème étage, UFR Biomédicale, 45 rue des Saints-PèresLPP seminarShow details
A basic question in visual cognition is how information is combined across separate glances into a stable, continuous percept. Previous explanations have included theories such as integration in a trans-saccadic buffer or storage in short-term visual memory, or, on the contrary, the idea that perception begins anew with each fixation. Converging evidence from primate neurophysiology, human psychophysics and neuroimaging suggests a new explanation for smooth and stable perception. We argue that the intention to make a saccadic eye movement initiates a series of preparations in the brain that lead to a fundamental alteration in visual processing before and after the saccadic eye movement. This theory of "trans-saccadic perception", in contrast to previous hypotheses based on buffers or memory storage, may help to explain how it is possible, despite discrete sensory input and limited memory, that conscious perception across saccades appears stable, predictable and continuous.
lun24Sep200711h-12h30Salle de réunion du LPP, H432, 4ème étage, UFR Biomédicale, 45 rue des Saints-PèresLPP seminarShow details
What happened to the concept of functional modularity? Constraints on developmental trajectories, plasticity and environment. The example of face processing development.
What should be set and pre-organized in an adaptive visual machine that is supposed to learn face processing as efficiently as human adults do? One way of answering this question is to study how face processing develops in human infants. In doing so, it is observed that the characteristics of this development are difficult to disentangle even though they serve different developmental purposes: (i) some characteristics help develop tools for extracting multiple perceptual face invariants, multiple facial perceptual categories and prototypes; (ii) other characteristics help the development of matching mechanisms involved in recognizing that "another’s face and body is like my own face and body" (and vice versa); (iii) still other characteristics serve the development of intraspecies bonding and the specific status of face and body.
ven21Sep200714hAmphithéâtre Lavoisier A, 3ème étage, UFR Biomédicale, 45 rue des Saints-PèresSoutenance de thèseShow details
The spatiotemporal character of visual perception: a study with brief stimuli flashed around saccades
We performed a series of psychophysical experiments on stimuli briefly flashed near the time of saccadic eye movements, all bearing on the hypothesis of a "compression" of perceived visual space near the beginning of saccades. The first three dealt with the relation between the counting of groups of bars flashed around saccades and judgements of their spatial extent: that of the groups as a whole, that of the individual bars, and that of the space between adjacent bars. Another bore more directly on the relation between the physical and judged separations of pairs of bars flashed near saccades, as well as the probability that they would go undercounted. Finally, two experiments dealt at once with judgements of the locations of bars in space and of the separation between them. None of these experiments yielded results in conformity with the predictions of the hypothesis of a general compression of perceived space near saccades. We suggest that the perceptual phenomena which have been ascribed to it can only be understood by taking into account the temporal scale over which perceptual decisions are formed, which for flashes just before saccades extends until after the saccade. The influence exercised by events over this temporal window may operate at the level of the sensory signals themselves, or at that of their integration in the formation of perceptual decisions, notably through effects on uncertainty or on the application of prior expectations.