Cross-Network Multisensory Motion Processing
Adriana Schoenhaut
To make effective decisions in a dynamic, noisy environment, an organism must make optimal use of all the information available to it. Doing so requires integrating bottom-up sensory information, which is in turn modulated by top-down attentional processes. My goal is to understand how dynamic top-down and bottom-up unisensory and multisensory signals interact and converge across different levels of processing, using neurophysiological recordings together with computational modeling.
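As a point of reference (not spelled out in the statement itself), the standard formalization of "optimal use of all available information" in the multisensory literature is maximum-likelihood cue combination, in which each cue's estimate is weighted by its reliability (inverse variance), and the combined estimate is never less reliable than the better single cue:

```latex
\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V,
\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
\qquad
w_V = 1 - w_A,
\qquad
\sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2)
```

Here \(\hat{s}_A\) and \(\hat{s}_V\) are the auditory and visual estimates of the stimulus, with variances \(\sigma_A^2\) and \(\sigma_V^2\); deviations of behavior or neural activity from these predictions are one way to expose where top-down factors reshape integration.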
For my main project, I plan to train non-human primates to perform a motion discrimination paradigm with auditory, visual, and combined audiovisual motion stimuli. I will simultaneously record from neurons in areas MT/MST, posterior parietal cortex (PPC), and dorsolateral prefrontal cortex (dlPFC), each of which is sensitive to different stimulus features, ranging from low-level to higher-order. During the task, I will manipulate both low-level factors (e.g., stimulus motion coherence) and higher-level factors (e.g., attention, task demands) to expose the differential effects these changes have on motion representations in each area. These representational, as well as behavioral, dynamics will be characterized using computational modeling and representational similarity analysis (RSA; see the sketch below). With RSA, differences in the degree of correlation between candidate models and the neural data in each region will elucidate where and when different features play a critical role in sensory processing and multisensory integration.
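A minimal sketch of the planned RSA comparison, under stated assumptions: the region names, condition counts, and model definitions below are hypothetical placeholders, and the data are simulated; the actual analysis would use trial-averaged condition-by-neuron response matrices from each recorded area.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_neurons = 12, 80  # e.g., motion directions x sensory modalities

# Hypothetical trial-averaged population responses for one region (e.g., MT).
responses_mt = rng.normal(size=(n_conditions, n_neurons))

# Neural RDM: vector of pairwise correlation distances between condition patterns.
neural_rdm = pdist(responses_mt, metric="correlation")

# Hypothetical model RDMs: one built from low-level stimulus features
# (e.g., motion coherence), one from task/attention structure. In practice
# these come from feature descriptions of the conditions, not random draws.
model_rdms = {
    "low_level_motion": pdist(rng.normal(size=(n_conditions, 5)), metric="correlation"),
    "task_attention": pdist(rng.normal(size=(n_conditions, 5)), metric="correlation"),
}

# Rank-correlate each model RDM with the neural RDM; the model with the
# higher correlation better captures that region's representational geometry.
for name, rdm in model_rdms.items():
    rho, p = spearmanr(rdm, neural_rdm)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Repeating this comparison across regions (MT/MST, PPC, dlPFC) and across time windows within a trial is what would localize where and when low-level versus higher-order features dominate the representation.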