10 February 2016
Sarah F. Muldoon
Assistant Professor, Department of Mathematics, University at Buffalo
Human brain networks and the control of brain states
ABSTRACT:
Understanding the brain as a complex network of interacting components provides useful insights into brain function, and one can use measurements of functional network connectivity to quantify brain states over time. In this talk, I'll first describe work using data-driven computational modeling of brain dynamics to test the relationship between regional network controllability calculations and the ability of stimulation to impart change in functional network configurations. The second half of the talk will focus on how the detection of brain states can play a role in therapeutic interventions in disorders such as epilepsy. By quantifying brain states immediately before intervention, we can predict whether or not cognitive effort (a different type of "control") will be successful in suppressing pathological brain dynamics.
RECOMMENDED READINGS:
24 February 2016
Jean-Pierre Koenig and Karin Michelson
Professors, Department of Linguistics, University at Buffalo
Are languages really that different?
ABSTRACT:
One of the questions linguists try to answer is: What is in a human language? Linguists in the last 50 years have gone about answering this question in three distinct ways. Our research on the structure of Oneida (Northern Iroquoian) is informed by the third of these. Critical to our approach is what we call methodological minimalism, i.e. an analysis of unusual data that does not rely on pre-existing linguistic categories. The results of our research illustrate some of the benefits of methodological minimalism when trying to uncover what makes up a human language. In this talk, we focus on three classes of what many linguists consider "universals": (1) combinatorial universals (how smaller expressions combine into larger expressions), (2) universals of word categories (e.g. nouns and verbs), and (3) counting universals. In each case, we show how methodological minimalism led us to an analysis of the Oneida data that is at odds with claimed universals of human language. In each case, the picture that emerges is one in which what remains universal is less specific than hitherto thought and what ties Oneida grammar to that of other languages is best explained in terms of "the rational solution" to the tasks speakers face or common paths of language change.
RECOMMENDED READINGS:
9 March 2016
Marieke van Heugten
Assistant Professor, Department of Psychology, University at Buffalo
The comprehension of unfamiliar accents during language acquisition: It's in the ear of the beholder
ABSTRACT:
Perhaps one of the most impressive feats of human cognition concerns our ability to comprehend spoken language. This task is far from trivial, in large part due to the tremendous variability in the pronunciation of words across speakers of different language backgrounds. In order to become efficient language users, it is thus important to develop the capacity to flexibly adjust to the different ways in which people pronounce words. In this talk, I will describe a series of studies examining when and how language learners accomplish this. In particular, I will first focus on the developmental trajectory of learning to cope with unfamiliar accents, both from the perspective of young children learning their first language and from the perspective of adults who are in the process of learning a second language. I will then describe a series of studies examining the potential mechanisms underlying this ability. Both similarities and differences between the two populations will be discussed.
RECOMMENDED READINGS:
23 March 2016
Fernanda Ferreira
Professor, Department of Psychology, University of California, Davis
Prediction, Information Structure, and Good Enough Language Processing
ABSTRACT:
The Good Enough language processing approach emphasizes people's tendency to generate superficial and even inaccurate interpretations of sentences. At the same time, a number of researchers have argued that prediction plays a key role in comprehension, allowing people to anticipate features of the input and even specific upcoming words based on sentential constraint. I will present evidence from our lab supporting both approaches, even though at least superficially these two perspectives seem incompatible. I will then argue that what allows us to link them is the concept of information structure. The fundamental proposal is that given or presupposed information is processed in a Good Enough manner, while new or focused information is the main target of comprehenders' prediction efforts. The result is a theory that brings together three different literatures that have been treated almost entirely independently, and which we are currently testing using a combination of behavioral, computational, and neural methods.
6 April 2016
Frank H. Guenther
Professor, Department of Speech, Language and Hearing Sciences, and Department of Biomedical Engineering, Boston University
The neural mechanisms of speech: From computational modeling to neural prosthesis
ABSTRACT:
Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After the imitation phase, the model can produce learned phonemes and syllables by generating movements of an articulatory synthesizer. Because the model's components correspond to neural populations and are given precise anatomical locations, activity in the model's neurons can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during normal and perturbed speech. Furthermore, "damaged" versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and spasmodic dysphonia. The model has also been used to guide development of a brain-computer interface aimed at restoring speech output to an individual suffering from locked-in syndrome, characterized by complete paralysis with intact sensation and cognition.
RECOMMENDED READINGS:
20 April 2016
Chris Viger
Associate Professor, Department of Philosophy, Western University, and Rotman Institute of Philosophy
From neural architecture to cognition
ABSTRACT:
As a materialist philosopher, I am concerned with the question of how brains give rise to minds; more specifically, whether we can glean anything interesting about the nature of cognition from its underlying neural architecture. Many philosophers have regarded this approach as wrong-headed on the grounds that cognitive functions are multiply realizable, making neural implementation inessential. To the contrary, I argue that preliminary results suggesting that the gross structure of the connectome consists primarily of local networks connected through hubs via relatively few long-ranging connections may offer important insight into dual process theory and the (in)effectiveness of many brain training techniques, and, more speculatively, a framework for understanding how language acquisition augments our cognitive capacities.
RECOMMENDED READINGS:
14 September 2016
Andrew Anderson
Postdoctoral Fellow, Brain and Cognitive Sciences, University of Rochester
How the meaning of words and sentences is represented in the brain, and how computational linguistic/vision and behavioral approaches can be applied to model brain representations
How the meaning of words and sentences is represented in the brain is relevant to both cognitive neuroscience and artificial intelligence but is poorly understood. A number of recent advances have been made by conjoining the two approaches and using computational models of words' meaning to explain the structure of brain activation patterns scanned as words are read and comprehended. The majority of this work has focused on concrete nouns. This talk begins by discussing recent contributions in this area made by the author. Computer vision models of object identity and text-based computational models of linguistic meaning are correlated with fMRI activation patterns elicited as not only concrete but also abstract words are read and comprehended. Both model and brain representations are put in the context of the psychological distinction made between thematically related knowledge (musicians, music and instruments frequently occur together in space and time, but are not the same thing) and taxonomic category membership (domestic cats and tigers are similar, despite rarely co-occurring). The talk will go on to present initial steps made toward predicting the representation of sentences in the brain. This work applies a recently developed 'experiential attribute' model that uses behavioral ratings to estimate the importance of different neurobiological systems to experiencing words' referents. A simple approach is then introduced to decompose sentence-level fMRI patterns into words, and words into neural features associated with the experiential attributes. These neural features can subsequently be reassembled to predict new words and sentences. This demonstrates that hypotheses about the word- and feature-level semantic content of sentences can be tested using semantic models and sentence fMRI, and provides initial evidence that structure in the experiential attribute model is also present in brain activation patterns.
21 September 2016
Roger Levy
Associate Professor, Brain & Cognitive Sciences, MIT
Probabilistic models of human language comprehension
Human language use is a central problem for the advancement of machine intelligence, and poses some of the deepest scientific challenges in accounting for the capabilities of the human mind. In this talk I describe several major advances we have recently made in this domain that have led to a state-of-the-art theory of language comprehension as rational, goal-driven inference and action. These advances were made possible by combining leading ideas and techniques from computer science, psychology, and linguistics to define probabilistic models over detailed linguistic representations and testing their predictions through naturalistic data and controlled experiments. In language comprehension, I describe a detailed expectation-based theory of real-time language understanding that unifies three topics central to the field - ambiguity resolution, prediction, and syntactic complexity - and that finds broad empirical support. I then move on to describe a "noisy-channel" theory which generalizes the expectation-based theory by removing the assumption of modularity between the processes of individual word recognition and sentence-level comprehension. This theory accounts for critical outstanding puzzles for previous approaches, and when combined with reinforcement learning yields state-of-the-art models of human eye movement control in reading.
5 October 2016
Jenny Saffran
Distinguished Professor, Department of Psychology, University of Wisconsin-Madison
Building a Lexicon
Words are bundles of meanings and sounds (or signs). As mature language users, we have sophisticated knowledge about how words work, both on their own and as part of a lexicon. How does that knowledge emerge? In my talk, I will consider recent work focused on how infants and toddlers discover the forms of words, construe the meanings of words, and integrate words into their nascent semantic networks. I'll conclude by considering some of the factors that drive infants to learn complex systems like human languages.
12 October 2016
David Shucard
Professor, Department of Neurology, University at Buffalo School of Medicine and Biomedical Sciences
Electrophysiological Evidence of Executive Control and Inhibitory Brain Processes
Executive mental functions are critical to our interaction with the environment. They allow us to attend selectively and respond appropriately in a variety of ways to an often rapidly changing environment while monitoring conflicting response options and inhibiting inappropriate responses. Our laboratory has been particularly interested in the electrophysiological signatures or biomarkers of executive mental processes such as monitoring and conflict resolution. In this presentation, I will introduce some of the terms related to these executive processes, discuss the brain structures involved, and show how event-related brain potentials, illustrated by studies from our laboratory, reflect executive functions such as inhibition and conflict.
9 November 2016
Bradley Taber-Thomas
Postdoctoral Fellow, Department of Psychology, Penn State University, and Adjunct Instructor, Department of Psychology, University at Buffalo
Attention bias and social-emotional development: A cognitive neuroscience perspective
Attention, as an information-processing gatekeeper, plays a crucial role in social-emotional functioning and development. This talk presents research on the neural systems underlying attention to salient information, and on how those systems differ between children on diverging trajectories of social-emotional development. Specifically, brain networks involved in bottom-up and top-down attention processes will be discussed based on traditional functional MRI analyses of a faces dot-probe task, as well as a novel topographical pattern analysis of intrinsic (resting state) functional connectivity. Limitations of the dot-probe task and directions for future research will also be discussed.
30 November 2016
David Braun
Professor, Department of Philosophy, University at Buffalo
Physicalism, Representation, and Consciousness
Physicalism is the view that everything in the universe is physical. If physicalism is true, then all objects, properties, relations, states, and events are physical. Physicalism implies that there are no immaterial minds (no souls); it implies that every mental property, event, and state is purely physical. Many philosophers of mind accept physicalism. But there are two mental phenomena that present prima facie difficulties for physicalism: mental representation and consciousness. Mental representation occurs when mental events or states represent objects in the world. I will describe why most philosophers are relatively optimistic about finding a theory of mental representation that is consistent with physicalism. However, many philosophers (including many physicalists) think that consciousness presents a greater challenge to physicalism. I will distinguish between several different kinds of consciousness, focusing on the kind of consciousness that seems most mysterious, which philosophers call phenomenal consciousness. I will then sketch a theory of phenomenal consciousness, representationism, that a fair number of physicalist philosophers accept. Representationism claims (roughly) that a mental state is phenomenally conscious if and only if it has a certain functional role in a thinker's mind and it represents certain aspects of the world in a certain way. Representationism reduces the problem of consciousness to the problem of mental representation. Thus, if representationism is true, and mental representation is consistent with physicalism, then phenomenal consciousness is also consistent with physicalism. Some contemporary philosophers, however, have argued that consciousness cannot be physically explained. Some of their arguments are initially compelling, and rather hard to answer, even for those who accept representationism. I shall consider one such anti-physicalist argument here, Frank Jackson's Knowledge Argument, and present a physicalist reply to it.