Do the syntactic structures proposed by many linguistic analyses
correspond to actual, neurally encoded data structures during sentence
comprehension? In the first part of the talk, I will present our
attempts to address this question through a series of functional
magnetic resonance imaging (fMRI) experiments relying on a variety of
experimental paradigms (syntactic priming, manipulation of syntactic
coherence, sentence verification, …). I will also describe some
incursions into the domains of music and mathematics, where objects
(e.g. melodies on the one hand, arithmetic formulas or number names on
the other) arguably possess some syntactic structure. The results
of experiments manipulating syntactic coherence with these types of
stimuli reveal some interesting similarities and discrepancies with language.
Others’ gaze in human cognition: effects, mechanisms and therapeutic potential
A growing body of empirical evidence supports the hypothesis that others’ gaze implicitly modulates concomitant or subsequent cognitive processes and behaviors in humans. After presenting phylogenetic and ontogenetic data that underline the importance of others’ gaze in human cognition, I will examine the cognitive and cerebral mechanisms underlying gaze processing. I will then detail the effects of others’ gaze on cognition: those related to direct gaze perception (which creates eye contact between individuals) and those related to averted gaze perception, a dichotomy whose value is, however, more pragmatic than theoretical. I will emphasize that these effects are mainly beneficial for cognition (e.g. improved memory, increased body awareness, improved spatial orientation) and illustrate their therapeutic potential in Alzheimer’s disease and unilateral spatial neglect.
From visceral inputs to first-person perspective: how the brain coordinates sensory and cognitive maps
Neural information is expressed in different frames of reference. For instance, gaze-centered, head-centered and limb-centered frames of reference have to be coordinated for visually guided action. Coordination is also required between the more abstract spaces of cognitive maps. Still, despite this multiplicity of frames of reference, conscious experience is characterized by a single, unified viewpoint, or first-person perspective. I present here a novel hypothesis on how the first-person perspective is generated: multiple reference frames are coordinated by the neural monitoring of visceral inputs, which act as a unifying central point internal to the organism. This hypothesis is backed up by experimental data on brain-viscera coupling in different paradigms tapping into subjective perception and cognition.
Learning in changing environments
People often take advantage of trends and regularities that they detect in their environment to adapt their behavior. This is true even when such trends change over time, as is often the case in real-world, dynamic environments. In this talk, I will characterize some properties of the powerful machinery that the human brain uses to perform such statistical inference, based on behavioral data, functional magnetic resonance imaging (fMRI), magneto- and electroencephalography (M-EEG) and computational modeling. Notably, this machinery is:
(1) Based on an estimation of transition probabilities between successive observations.
(2) Bayesian: people use and estimate uncertainty in a rational manner.
(3) Accessible to introspection: people can report both the estimated probabilities of future observations and the confidence that accompanies their inference.
(4) Flexible, because it gives more weight to recent observations, which is indeed optimal in dynamic environments.
(5) Hierarchical: it disentangles distinct but inter-related sources of uncertainty.
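Properties (1), (2) and (4) can be illustrated with a minimal sketch — not the authors’ actual model — of a Beta-Bernoulli estimator of transition probabilities between binary observations, with exponential forgetting of past evidence (the decay constant `leak` and the prior are illustrative assumptions):

```python
# Minimal sketch (illustrative, not the model from the talk): Bayesian
# estimation of transition probabilities with exponential forgetting.
# Binary observations (0/1); one Beta prior per predecessor state.

def update_counts(counts, prev, cur, leak=0.95):
    """Decay all pseudo-counts, then increment the observed transition."""
    for k in counts:
        counts[k] = [leak * c for c in counts[k]]
    counts[prev][cur] += 1.0
    return counts

def predict(counts, prev, prior=1.0):
    """Posterior mean of P(next = 1 | prev) under a Beta(prior, prior) prior."""
    c0, c1 = counts[prev]
    return (c1 + prior) / (c0 + c1 + 2 * prior)

def confidence(counts, prev, prior=1.0):
    """Inverse posterior variance as a simple, reportable confidence proxy."""
    c0, c1 = counts[prev]
    a, b = c1 + prior, c0 + prior
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return 1.0 / var

# Usage: run the estimator over a short binary sequence.
seq = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
counts = {0: [0.0, 0.0], 1: [0.0, 0.0]}
for prev, cur in zip(seq, seq[1:]):
    counts = update_counts(counts, prev, cur)
p = predict(counts, 1)  # estimated P(1 | previous observation was 1)
```

The forgetting factor gives more weight to recent observations, which is what makes the inference adaptive in dynamic environments, and the posterior variance yields a natural confidence signal that can be reported through introspection.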
Applying the modulation theory to hearing
One of the most influential research programs in hearing sciences aims to apply the modulation theory to auditory perception. This theory, inspired by telecommunication sciences, relies on the following ‘core’ assumptions: (i) communication sounds, including speech and animal vocalizations, convey salient and slow temporal modulation cues; (ii) the auditory system of species using communication sounds has evolved to optimize its responses to these modulation cues; (iii) the transmission of information conveyed by communication signals is constrained by the ‘modulation transfer function’ (MTF) of the transmission path (a room, a hearing aid, …). We will show how the psychophysical and neurophysiological investigation of modulation perception by humans and other species has contributed to an in-depth understanding of the demodulation processes implemented in the peripheral and central auditory system. We will show how this work yielded models of modulation processing that account for a large range of listening situations, including speech comprehension in quiet and in noise. Finally, we will show how this work contributed to a better understanding of the perceptual consequences of ageing and cochlear damage, and to a better control of information transmission via rehabilitation systems.
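As a toy illustration of assumption (iii) — not a model from the talk — a transmission path can be caricatured as a first-order low-pass filter acting on the temporal envelope; its MTF then attenuates fast amplitude modulations more than the slow, speech-relevant ones (the 8 Hz cutoff is an arbitrary illustrative value):

```python
# Toy illustration (assumed, not from the talk): the MTF of a transmission
# path modeled as a first-order low-pass filter on the temporal envelope.
import math

def mtf_first_order(fm, fc=8.0):
    """Gain applied to an amplitude modulation at rate fm (Hz) by a
    first-order low-pass with cutoff fc (Hz): |H(fm)| = 1/sqrt(1+(fm/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (fm / fc) ** 2)

# Slow, speech-rate modulations (~2-4 Hz) pass nearly intact, while
# fast modulations (e.g. 64 Hz) are strongly attenuated.
for fm in (2, 4, 16, 64):
    print(f"{fm:3d} Hz -> gain {mtf_first_order(fm):.2f}")
```

Such a curve captures why reverberant rooms or band-limited hearing aids degrade the fast envelope cues of speech while largely preserving the slow ones.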
In our recent functional magnetic resonance imaging (fMRI) study (Moisala et al., Front. Hum. Neurosci., 2015), young adult participants (N = 18) were presented with concurrent spoken and written sentences and were instructed to attend to either sentence or to both of them. Dividing attention between the spoken and written sentences was associated with bilateral activity enhancements in dorsolateral and dorsomedial prefrontal areas. These areas also showed smaller activity enhancements during selective attention to speech or text in the presence of task-irrelevant text or speech, respectively, suggesting that dealing with distracting information involves the same brain areas as dividing attention. In our subsequent study applying the same experimental paradigm (Moisala et al., NeuroImage, 2016), we found in healthy adolescents and young adults (N = 149) that the more the participants multitasked in their daily life, the more right prefrontal activity and the less accurate performance they showed during attention to speech or text in the presence of irrelevant text or speech, respectively. These results suggest that habitual multitasking may lead to enhanced distractibility. Yet, in another experimental condition (Moisala et al., Brain Res., 2016), the adolescent and young adult participants (N = 167) showed better performance and higher dorsolateral prefrontal activity during a demanding bimodal verbal working-memory task the more they played computer and video games in their daily life, suggesting that computer gaming may enhance attention and memory skills. Moreover, a comparison of results from these attention and working-memory studies (Moisala et al., submitted) suggests that prefrontal executive functions develop during adolescence.
In our very recent study (Leminen et al., in preparation), we investigate selective attention to speech in more naturalistic conditions, in which participants see the facial movements of the speakers they selectively attend to in the presence of irrelevant background speech. According to our preliminary results, higher activity is observed in superior temporal and inferior parietal areas for higher-quality attended speech (natural vs. noise-vocoded speech) and for more perceivable speech-related facial movements (non-masked vs. masked). Moreover, right superior parietal and dorsolateral prefrontal areas show higher activity for lower speech quality, suggesting an enhanced demand for attention.