11 February 2015
Assistant Professor, Department of Psychology, University at Buffalo
Specialization of neural pathways for reading: An fMRI study of young readers
ABSTRACT:
This talk presents the results of a newly published functional magnetic resonance imaging (fMRI) study that examines reading development in grade-school children. Learning to read is an intrinsically cross-modal process because it involves mapping written (visual) symbols to pre-existing phonological (auditory) representations. Despite this theoretical link, there has been little neuroimaging evidence for multisensory processing in a part of the visual object-processing stream known as the Visual Word Form Area (VWFA). The VWFA is of particular importance in the neuroscientific study of reading because it reliably shows sensitivity to familiar combinations of printed letters (e.g., known bigrams and trigrams). Thus, the distinction between two dominant brain-based theories of this region's specialization for processing written words hinges on the cross-modal sensitivity of the visual object-processing stream to phonological information. The fMRI study used a rhyming-judgment task with 32 children between 8 and 13 years of age, who performed the task on unimodal (auditory-only, AA; visual-only, VV) and cross-modal (audiovisual, AV) word pairs. The interpretation of the results touches on several important issues related to assessing multisensory integration in fMRI experiments and to using behavioral correlates to provide the necessary theoretical grounding for interpreting neuroimaging results.
25 February 2015
Associate Professor, Department of Linguistics, University at Buffalo
Paradigm leveling and the role of universal preferences in morphophonological change
ABSTRACT:
Paradigm leveling - the elimination of allomorphic stem alternations among related wordforms (e.g. old-elder > old-older) - is one of the most commonly attested kinds of morphologically motivated change. Debates over how best to account for paradigm leveling have flared up frequently since the late nineteenth century. I will focus on two related aspects of this controversy. The first concerns certain relatively unusual subtypes of leveling, including partial leveling, where one aspect of a stem alternation is eliminated while another is retained, e.g. Old English frēosan-froren > freeze-frozen (with retention of the ablaut alternation). The second is the question of whether leveling is (partially) motivated by some kind of inherent, universal bias against (stem) allomorphy. In addition to paradigm leveling itself, various synchronic phenomena and experimental findings have been offered as evidence for such a bias. The explicit rejection of any stem-uniformity bias has been a minority view among historical linguists from Paul (1886) to Garrett (2008). The approach advocated by these scholars cannot be the whole story, however, because it ignores the challenge posed by some cases of partial leveling and related phenomena. I argue that a more adequate approach requires us to recognize that speakers' morphophonological innovations are influenced, on the one hand, by the higher-order generalizations they draw about the structural properties of the grammar and, on the other, by the kinds of perceptual factors that historical linguists usually invoke only in accounting for folk etymology.
11 March 2015
William R. Kenan Professor, Department of Brain & Cognitive Sciences and the Center for Visual Science, University of Rochester
Neural correlates of statistical learning in adults and infants
ABSTRACT:
Since the initial demonstrations of statistical learning two decades ago, a variety of questions have been raised about the neural mechanism(s) that support these behavioral findings. Studies of statistical learning in adults using fMRI have focused on the outcome of the implicit extraction process -- i.e., differential activation to structured and unstructured test items during a post-learning recognition phase. I will summarize several fMRI studies that focus on the learning phase itself, using both auditory (temporal) stimuli and visual (temporal and spatial) stimuli. Because fMRI is extremely difficult to use with infants, I will also summarize some recent work that uses fNIRS (functional near-infrared spectroscopy) to study basic aspects of learning in 6-month-olds. These findings suggest some fundamental differences in how the infant and adult brain respond to sensory information, as well as some commonalities in how they respond to violations of learned stimulus co-occurrences.
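As background for readers unfamiliar with the paradigm, the statistics extracted in such studies are often modeled as transitional probabilities between adjacent syllables in a continuous stream. A minimal sketch in Python; the syllable inventory and stream below are invented for illustration, in the style of classic artificial-language stimuli (they are not the talk's actual materials):

```python
from collections import Counter

# Invented stimuli: three trisyllabic "words" concatenated into a
# continuous stream with no pauses between them.
stream = "tupiro golabu bidaku golabu tupiro bidaku".split()
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

# Transitional probability P(B | A) = count(A followed by B) / count(A)
pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """Forward transitional probability from syllable a to syllable b."""
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

print(tp("tu", "pi"))  # within-word transition: 1.0
print(tp("ro", "go"))  # across a word boundary: 0.5
```

Dips in transitional probability at word boundaries are the cue that lets learners segment "words" from the unbroken stream.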
12 March 2015
NOTE: This talk will take place in Clemens 120, at 3:30pm
Distinguished Speaker
Center for Cognitive Science and Department of Psychology
Dr. Donahue Tremaine Memorial Lecture Fund
William R. Kenan Professor, Department of Brain & Cognitive Sciences and the Center for Visual Science, University of Rochester
The dynamics of learning and attention in infants and adults
ABSTRACT:
Over the past two decades, there has been substantial interest in a powerful implicit learning mechanism - often called statistical learning - that enables infants, children, adults and non-humans to extract patterned information by mere exposure to structured input. A key question is how learners go beyond the input they receive to make implicit inferences about novel exemplars they have never encountered - i.e., when to generalize and when to treat new items as exceptions. A corollary of this question is how learners adapt to a changing environment - i.e., how do they build a model of the world that contains more than a single structure? A second key question is how naive learners make implicit inferences about which features of the input are most likely to be informative - how to guide attention among a large set of alternatives. I will summarize a series of experiments with infants, children, and adults that address these two questions and highlight additional challenges faced by learners in natural (complex) environments.
8 April 2015
Assistant Professor, Department of Brain and Cognitive Sciences, University of Rochester
The brain's linguistic representations: from neural population codes to syllables and semantics
ABSTRACT:
In seeking to understand how the human brain represents language, neuroimaging faces a serious problem: brain images do not reveal mental representations. Recent work in neural decoding has shown that classifier algorithms can recover information from individual subjects' fMRI data about the tasks and stimuli that evoked that activation. However, that decoding does not in itself tell us whether the classifiers extract information that is actually used by the brain. Nor does it tell us how the brain represents that information. Addressing the first of those questions, I will present evidence from speech perception showing that multivoxel fMRI patterns can predict individual differences in people's behavioural ability to discriminate between heard syllables. Addressing the second, I will show how the structure of linguistic representations can be related to the structure of similarity relations between neural activation patterns. This generates a hypothesis about how the brain represents the meanings of words, which can be stated very simply: neural similarity matches semantic similarity. The predictive power of this approach can be demonstrated by decoding the meanings of words from fMRI activation. It can also be used to address the question of how speakers of different languages (English and Chinese) can share the same concept, and to translate from one language to the other by matching neural similarity patterns across different people's brains.
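The hypothesis that neural similarity matches semantic similarity can be sketched as second-order similarity matching, in the spirit of representational similarity analysis. Everything below is simulated and invented for illustration -- the word count, feature vectors, and "voxel" patterns are not from the study; the neural patterns are constructed (noise-free) so that their similarity structure mirrors the semantic one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 8 words, each with a semantic feature vector; voxel
# patterns are simulated as an orthonormal rotation of the semantic
# vectors, which preserves their pairwise similarity structure exactly.
n_words, n_feat, n_vox = 8, 6, 100
semantic = rng.normal(size=(n_words, n_feat))
Q, _ = np.linalg.qr(rng.normal(size=(n_vox, n_feat)))  # orthonormal columns
neural = semantic @ Q.T  # shape (n_words, n_vox)

def cosine_sim(X):
    """Pairwise cosine-similarity matrix, one row per word."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

sem_sim = cosine_sim(semantic)
neu_sim = cosine_sim(neural)

def decode(i):
    """Identify word i by matching its row of neural similarities
    against each row of semantic similarities (diagonals dropped)."""
    idx = np.arange(n_words)
    scores = []
    for j in range(n_words):
        mask = (idx != i) & (idx != j)  # exclude trivial self-similarities
        scores.append(np.corrcoef(neu_sim[i, mask], sem_sim[j, mask])[0, 1])
    return int(np.argmax(scores))

predictions = [decode(i) for i in range(n_words)]
print(predictions)  # noise-free simulation: each word decodes to itself
```

The same row-matching logic, applied across two people's (or two languages') similarity matrices rather than within one, is what allows "translation" between brains without any shared voxel space.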
15 April 2015
Associate Professor, Department of Philosophy, Carnegie Mellon University
How Questions and Answers Cohere
ABSTRACT:
There is a well-known observation that wh-questions allow for sentence-fragment answers, as in (1) below:
(1) Q: Who ate the cookies? A: Bill.
In the context of the question, the NP Bill somehow expresses the complete proposition that Bill ate the cookies.
An observation that has not previously been made is that there are many cases where an utterance of a complete declarative sentence in answer to a question expresses a proposition richer than that encoded by the sentence itself (even relativized to context in the usual way). Here are some examples:
(2) Q: Where is John going on Wednesday? A: He's going to Chicago.
(3) Q: How is John getting to Chicago? A: He's taking the train.
(4) Q: Why is John going to Chicago by [TRAIN]? A: He's afraid to fly.
The answer in (2) conveys not merely that John is going to Chicago, but that he's going to Chicago on Wednesday. The answer in (3) conveys not merely that John is taking the train (at some present/future time), but that he is doing so in order to get to Chicago (with appropriate constraints on the temporal reference). The answer in (4) conveys not merely that John is afraid to fly, but that this is the reason (explanation) for his traveling to Chicago by train.
In this paper, I develop an account of this phenomenon within a DRT framework, supplemented with ideas from Segmented DRT. The core idea is that the interpretation of the question/answer sequence involves establishing a particular coherence relation, Direct Answer, between the question and answer; and that this coherence relation has semantic consequences: specifically, it requires that discourse referents in the answer be treated, as far as possible, as anaphoric on the question; and that some content in the answer provide information about the wh-variable introduced by the question. In the second half of the talk (time permitting), I will extend this account of questions and direct answers to cases of enrichment of unasserted embedded clauses which stand in an answer-like relation to explicit questions.
This work has significance not only for formal semantics/pragmatics, but also for questions of interest to cognitive scientists studying language. As I will emphasize, the phenomenon seems to require a perspective according to which interpretation involves the construction of a (mental) representation of the ongoing discourse, relative to which new utterances are interpreted. While this is far from being a new idea, the data to be presented are a reminder of the prevalence in discourse of phenomena which require this perspective.
29 April 2015
Associate Professor, Department of Industrial & Systems Engineering, University at Buffalo
Human Performance Modeling and its Applications in Systems Engineering
ABSTRACT:
This research seminar introduces the major research activities of the Cognitive System Lab at SUNY-Buffalo, focusing on human performance modeling and its applications in transportation safety, human-computer interaction, and smart energy systems design. Human performance modeling is a growing and challenging area in cognitive systems engineering. It builds computational models based on the fundamental mechanisms of human cognition and human-system interaction, employs both mathematical and discrete-event simulation methods from industrial engineering, and predicts human performance and workload in real-world systems. Such models can be used to design, improve, and evaluate systems with humans in the loop. Current and future research topics will also be introduced.
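As a toy illustration of the discrete-event style of modeling mentioned above (not the lab's actual models -- the task arrival and service times here are invented), a single operator serving tasks first-come-first-served can be simulated in a few lines:

```python
# Each task is (arrival_time, service_time); times are invented.
arrivals = [(0.0, 2.0), (1.0, 3.0), (5.0, 1.0)]

def simulate(tasks):
    """Return each task's completion time for one FIFO human operator."""
    free_at = 0.0   # time at which the operator next becomes available
    done = []
    for arrival, service in sorted(tasks):
        start = max(free_at, arrival)  # wait if the operator is busy
        free_at = start + service
        done.append(free_at)
    return done

print(simulate(arrivals))  # [2.0, 5.0, 6.0]
```

Completion time minus arrival time gives each task's time in system, a simple proxy for the workload predictions such models produce at much larger scale.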