The relevance of our work

We live in a world in which we are continually bombarded with information from our different sensory systems, and the brain must continually combine these signals; such "multisensory integration" is therefore a ubiquitous phenomenon. The utility of multisensory interactions is illustrated by the numerous studies from our lab and others that have highlighted the important role these processes play in altering our behaviors and shaping our perceptions. In addition, our lab (along with others) is beginning to highlight the important role that altered multisensory function plays in clinical conditions such as autism and schizophrenia. The following video highlights some of this work in children with autism.


Impact through multidisciplinary research

Ultimately, we are interested in providing a more complete understanding of how multisensory processes impact our behaviors and perceptions, in better elucidating the neural substrates for these interactions, and in understanding how multisensory processes develop and are influenced by sensory experience. We study these fundamental questions using a multidisciplinary set of approaches, including animal behavior, human psychophysics, neuroimaging (ERP and fMRI) and neurophysiological techniques. Along with our interest in the brain bases for multisensory processes under normal circumstances, we are also interested in examining how multisensory circuits are altered in an array of clinical conditions, including attention deficit hyperactivity disorder, autism spectrum disorder and developmental dyslexia.

Current research projects



Neurocomputational Modeling

David Tovar

The goal of my research is to uncover the neural computations that transform incoming sensory information as it ascends the neural hierarchy to the point where it can be used to guide behavior. My research spans multiple levels of analysis, from sensory processing within brain circuits, to interactions between circuits in different brain areas, to relating brain activity to computational models such as convolutional neural networks. I analyze monkey neural spikes, local field potentials, and current source density, as well as human EEG, MEG, and fMRI.
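As a rough illustration of one common way model activations are related to neural data, the minimal representational similarity analysis (RSA) sketch below compares the dissimilarity structure of a CNN layer with that of a neural recording. All data, shapes, and variable names are hypothetical placeholders, not our recording setup.

```python
# Minimal RSA sketch: compare the dissimilarity structure of a CNN layer
# with that of neural recordings. All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40

cnn_features = rng.normal(size=(n_stimuli, 512))     # e.g., one conv layer
neural_responses = rng.normal(size=(n_stimuli, 96))  # e.g., spike counts

# Representational dissimilarity matrices (condensed upper triangles).
rdm_model = pdist(cnn_features, metric="correlation")
rdm_brain = pdist(neural_responses, metric="correlation")

# Rank-correlate the two RDMs: higher rho means more similar geometry.
rho, p = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RSA: rho = {rho:.3f}, p = {p:.3g}")
```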


Cross-Network Multisensory Motion Processing

Adriana Schoenhaut

Although great strides have been made in recent years to further our understanding of multisensory perception and its neural correlates, there are still significant gaps in our knowledge regarding the processing of more ecologically valid stimuli, such as those containing motion. One of these gaps concerns how motion information is transformed in the presence of modulatory, cross-modal input as it makes its way through successive stages of the cortical processing hierarchy, and how these transformations map onto behavior and perception. My project aims to address this issue using behavioral paradigms we have developed in which macaques signal the direction of an auditory, visual, or audiovisual motion stimulus. During performance of the task, we will record neural activity in two cortical domains reflecting successive levels in the processing hierarchy: the middle temporal (MT) and medial superior temporal (MST) areas. The first aim of this project is to examine how modality and motion strength within audiovisual stimuli impact discrimination behavior and contribute to causal inference. My second aim is to use neurophysiological and computational modeling approaches to characterize auditory, visual, and audiovisual motion responses in these areas, with the overarching hypothesis that as motion information ascends from MT to MST, the role of modulatory auditory input will increase, reflecting a gradual shift from encoding low-level stimulus features, such as signal strength, toward encoding features relevant to goal-oriented behavior, such as stimulus direction and task demands. We are also conducting parallel EEG experiments in humans for model comparison. Collectively, this work will shed light on the mechanistic underpinnings of multisensory perception in critical motion-processing nodes, and success in these experiments would challenge how we think about the modularity of the sensory cortical processing hierarchy.
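To make the causal-inference component concrete, the toy sketch below implements the standard Bayesian causal inference computation for a pair of noisy auditory and visual measurements (after Kording et al., 2007). The noise and prior parameters are illustrative, not values fitted to our experiments, and the example reports the auditory source for simplicity.

```python
# Toy Bayesian causal inference for an audiovisual estimate (after
# Kording et al., 2007). All parameters below are illustrative.
import numpy as np

def causal_inference_estimate(x_a, x_v, sig_a=8.0, sig_v=3.0,
                              sig_p=20.0, mu_p=0.0, p_common=0.5):
    """Model-averaged estimate (degrees) from noisy auditory (x_a) and
    visual (x_v) measurements, reporting the auditory source."""
    va, vv, vp = sig_a ** 2, sig_v ** 2, sig_p ** 2

    # Likelihood of the two measurements arising from one common source.
    denom = va * vv + va * vp + vv * vp
    num = ((x_v - x_a) ** 2 * vp + (x_v - mu_p) ** 2 * va
           + (x_a - mu_p) ** 2 * vv)
    like_c1 = np.exp(-0.5 * num / denom) / (2 * np.pi * np.sqrt(denom))

    # Likelihood under two independent sources.
    like_a = np.exp(-0.5 * (x_a - mu_p) ** 2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
    like_v = np.exp(-0.5 * (x_v - mu_p) ** 2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp))
    like_c2 = like_a * like_v

    # Posterior probability that the cues share a cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal estimates under each causal structure, then model-average.
    s_c1 = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    s_c2 = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)
    return post_c1 * s_c1 + (1 - post_c1) * s_c2, post_c1

est, p_c1 = causal_inference_estimate(x_a=10.0, x_v=2.0)
print(f"estimate = {est:.1f} deg, P(common cause) = {p_c1:.2f}")
```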


Modeling Spatial Integration Changes in Autism

Sarah Vassall

Our world is inherently multisensory, and the full picture of our environment emerges when we are able to effectively combine information across the senses. More than a decade of research has shown, however, that multisensory integration and sensory function are altered in autistic individuals, contributing to the presentation of core and associated clinical features. Importantly, audiovisual spatial integration, the combining of sights and sounds with different spatial locations (e.g., tone of voice and facial expression), has not been characterized in autism.

The goal of my research is to elucidate the relationship between audiovisual spatial integration abilities and clinical assessment scales in order to better understand how autistic children represent audiovisual space, and how changes to these representations might contribute to important multisensory activities such as language development, speech comprehension, and social engagement. Further, I aim to understand how electrophysiological differences contribute to differences in spatial representation, and whether we may develop models of sensory-elicited neural activity in autism to identify key brain-behavior relationships in this population.
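For context, audiovisual spatial integration is often benchmarked against the maximum-likelihood (reliability-weighted) combination rule sketched below. The numbers are invented for illustration and are not drawn from our data.

```python
# Reliability-weighted (maximum-likelihood) cue combination: the standard
# benchmark against which audiovisual spatial integration is compared.
# All numbers below are invented for illustration.

def ml_combine(x_a, sig_a, x_v, sig_v):
    """Fused audiovisual location estimate and its predicted SD (degrees)."""
    w_v = sig_a ** 2 / (sig_a ** 2 + sig_v ** 2)    # weight on vision
    x_av = w_v * x_v + (1 - w_v) * x_a              # fused estimate
    sig_av = (sig_a ** -2 + sig_v ** -2) ** -0.5    # predicted (smaller) SD
    return x_av, sig_av

# Vision is more precise here, so the fused estimate sits near the visual cue.
x_av, sig_av = ml_combine(x_a=12.0, sig_a=10.0, x_v=0.0, sig_v=4.0)
print(f"fused location = {x_av:.1f} deg, predicted SD = {sig_av:.1f} deg")
```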


Neural Plasticity in Sensory Deprivation and Restoration

Ansley Kunnath

Hearing loss is a major cause of disability, affecting over 48 million Americans. Cochlear implants (CIs) are neuroprosthetic devices that allow people with profound hearing loss to recover hearing and speech comprehension. However, outcomes following CI surgery are highly variable and difficult to predict, making it challenging for clinicians to guide patient decisions and expectations. Speech recognition is a multisensory process, and although it is known that visual speech cues can improve auditory speech recognition, the visual and audiovisual abilities of CI users have not been well characterized before and after implantation. Our lab has found that pre-implantation visual and audiovisual speech recognition predicts post-implantation auditory speech recognition, suggesting that multisensory integration may play an underappreciated role in CI outcomes.
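As a schematic of the kind of predictive relationship described above (not our actual data or analysis pipeline), the sketch below regresses simulated post-implantation auditory scores on simulated pre-implantation audiovisual scores.

```python
# Schematic of the predictive relationship described above: regress
# post-implantation auditory speech scores on pre-implantation audiovisual
# scores. The data are simulated, not lab results.
import numpy as np

rng = np.random.default_rng(1)
pre_av = rng.uniform(20, 90, size=30)              # % correct, AV, pre-CI
post_a = 0.6 * pre_av + 15 + rng.normal(0, 8, 30)  # % correct, A-only, post-CI

slope, intercept = np.polyfit(pre_av, post_a, deg=1)
r = np.corrcoef(pre_av, post_a)[0, 1]
print(f"post_A = {slope:.2f} * pre_AV + {intercept:.1f}  (r = {r:.2f})")
```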

My research explores changes in visual and audiovisual performance following CI surgery through a battery of multisensory experiments. I also study neural reorganization following CI surgery using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Finally, I am interested in using our knowledge of central auditory system plasticity to develop novel interventions for hearing loss. I am coordinating a clinical trial on the effects of donepezil on cochlear implant outcomes. The cholinergic system is a powerful modulator of sensory plasticity, and we hypothesize that increasing cholinergic activity with donepezil during sensory restoration with a cochlear implant may facilitate hearing recovery. I am also investigating audiovisual training paradigms to improve speech recognition in cochlear implant users. Ultimately, I hope to bridge the gap between basic auditory neuroscience and clinical otolaryngology as a future surgeon-scientist.


Multisensory Integration and Cross-Modal Activation in Pediatric Cochlear Implant Recipients

Mackenzie Lighterink

Despite significant improvements in cochlear implant (CI) technology, speech understanding outcomes with these devices remain highly variable. Cortical activation across different sensory modalities and audiovisual benefit are two under-researched variables that may affect CI performance. While visually evoked cross-modal activity in the auditory cortex has been documented in pediatric CI users, its impact on speech understanding is not well understood. Additionally, the relationship between this cross-modal activity and audiovisual processing abilities has yet to be investigated. The purpose of this research is to characterize audiovisual integration abilities and cortical activation patterns to visual and audiovisual speech in pediatric CI users and their hearing peers. This research utilizes behavioral tasks of speech understanding, both auditory-only and multimodal, as well as functional near-infrared spectroscopy (fNIRS) to measure cortical responses across different sensory modalities.


Behavioral and Neural Correlates of Human Audiovisual Motion Perception

Adam Tiesman

Classic paradigms for multisensory studies involve flashes, beeps, and other stimuli that are frozen in space and time, and the phenomenon of multisensory integration is well studied using such punctate stimuli. However, the world we live in is dynamic, constantly moving around us. The goal of my project is to gain insight into multisensory integration using more ecologically valid stimuli, specifically motion. Will the principles we know about multisensory integration hold true for moving stimuli? What are the behavioral and neural correlates of audiovisual motion perception? My project tackles exactly these questions, using a motion discrimination task alongside EEG recordings in humans. Addressing these questions within a Bayesian causal inference framework will help us understand the perceptual decision-making process underlying multimodal motion discrimination. In essence, this project, along with its parallel project in the Wallace Lab spearheaded by Adriana, aims to characterize the brain-to-behavior relationship and the underpinnings of sensory perception for stimuli that change in space and time.
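As an illustration of the behavioral side of such a task, the sketch below fits a cumulative-Gaussian psychometric function to direction-discrimination data. The choice proportions are invented for the example.

```python
# Fit a cumulative-Gaussian psychometric function to hypothetical
# direction-discrimination data: proportion of "rightward" choices as a
# function of signed motion strength. The choice data are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

coherence = np.array([-50, -25, -10, 0, 10, 25, 50])  # signed motion strength
p_right = np.array([0.05, 0.18, 0.36, 0.52, 0.68, 0.85, 0.97])

def psychometric(x, mu, sigma):
    """P(choose rightward) modeled as a cumulative Gaussian."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, coherence, p_right, p0=(0.0, 20.0))
print(f"bias (PSE) = {mu:.1f}, threshold (sigma) = {sigma:.1f}")
```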

Spatial Localization with Unilateral Occlusion

Messiyah Stevens

This study investigates the influence of audiovisual stimuli on sound localization in temporarily, unilaterally deafened individuals. The ability to localize sound is important and requires the integration of auditory spatial cues generated by the external ears, head, and body. Illusions can provide insight into underlying sensory function, and the ventriloquism effect has been studied extensively using the spatial ventriloquist paradigm. This study uses a similar paradigm to examine the relationship between localization biases, perceptual unification, and participants' perceived stimulus disparity. Participants are seated in a sound-attenuated room with LEDs and speakers mounted on a semicircular array. They are presented with visual and auditory stimuli, either spatially congruent or incongruent, while their left or right ear is plugged for two blocks and unplugged for the other two blocks. Preliminary results suggest a leftward shift in localization relative to the unplugged condition. This study will provide insight into how sound localization is influenced by audiovisual stimuli, and the findings may have implications for individuals with hearing impairments.
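For illustration, the ventriloquist effect is commonly quantified as the shift of reported sound location toward the visual stimulus, expressed as a fraction of the audiovisual disparity. The sketch below computes this slope from invented numbers, not study data.

```python
# Quantify the ventriloquist effect as the slope of the shift in reported
# sound location against the audiovisual disparity: 0 = no visual capture,
# 1 = complete capture. All numbers are invented for illustration.
import numpy as np

disparity = np.array([-20, -10, 10, 20])           # visual minus auditory, deg
reported_shift = np.array([-9.0, -4.5, 5.0, 9.5])  # mean localization shift, deg

bias = np.polyfit(disparity, reported_shift, deg=1)[0]
print(f"ventriloquism bias = {bias:.2f} (fraction of disparity)")
```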

Correlation between Brain and Physiological Signals

Vivian Cai

The goal of my research is to use computational approaches to reveal commonalities and differences between neural and physiological signals. I analyze datasets containing brain signals (e.g., fMRI) and physiological signals (e.g., heart rate and respiratory rate) across experimental and clinical conditions. Physiological signals can provide insight into underlying brain activity during specific tasks and in clinical conditions. In experimental settings, I analyze physiological and neural signals collected during tasks such as watching emotionally evocative videos or listening to classical music, to determine how the physiological signals relate to brain activity during these tasks. In clinical settings, I aim to identify which brain regions are most closely correlated with physiological biomarkers (e.g., heart rate) in patients with Alzheimer's disease, patients with mild cognitive impairment, and controls.
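As a minimal sketch of one analysis in this vein, the code below cross-correlates a heart-rate trace with a brain ROI time series across a range of lags to find where the coupling peaks. Both signals are simulated, and the built-in lag is an arbitrary choice for the demo, not patient data.

```python
# Cross-correlate a heart-rate trace with a brain ROI time series across
# a range of lags to locate peak coupling. Both signals are simulated;
# the 4-volume lag is an arbitrary choice for the demo.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(300)                                   # e.g., 300 fMRI volumes
heart_rate = np.sin(t / 15) + rng.normal(0, 0.3, t.size)
roi_bold = np.roll(heart_rate, 4) + rng.normal(0, 0.3, t.size)

lags = range(-10, 11)
corrs = [np.corrcoef(heart_rate, np.roll(roi_bold, -lag))[0, 1] for lag in lags]
lag_best, r_best = max(zip(lags, corrs), key=lambda lc: lc[1])
print(f"peak correlation r = {r_best:.2f} at lag {lag_best} volumes")
```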