Multisensory integration; embodied cognition; neuroimaging; computational modeling
Humans are multisensory creatures, and nearly everything we do, from reading the newspaper to drinking a cup of coffee, requires us to integrate information represented across multiple sensorimotor systems. I use behavioral studies, neuroimaging (fMRI), and computational models (artificial neural networks) to investigate how the brain transforms and integrates multisensory representations that are distributed across a network of (more or less) functionally specialized brain regions. Because integration depends on communication among these regions, my research focuses on the properties of the networks that support these processes. Recent neuroimaging studies in my lab have found that ongoing development of connectivity within the network supporting audiovisual integration predicts reading skill in both typically developing and dyslexic children. A neuroimaging study now underway in my lab is extending these ideas to individuals diagnosed with ADHD and to typical adults engaged in multisensory imagery of common objects. The methods and computational tools developed for these studies have the potential to predict how changes to, or differences in, brain connectivity (e.g., following brain disease or as a consequence of learning) influence how we represent information in our minds.