16 January 2013
Distinguished Professor of Philosophy, The Graduate Center, The City University of New York
Linguistic intuitions are not "the voice of competence"
In Ignorance of Language (2006) I attributed to Chomskians "the voice of competence" view ("VoC"): that a native speaker's linguistic competence provides, "noise" aside, the informational content of her metalinguistic intuitions. One aim of this paper has been to defend this attribution from Ludlow's criticisms. I also argued that VoC was false. In particular, I argued that we do not have any idea how embodied but unrepresented rules might provide linguistic intuitions. Another aim of this paper is to defend this argument from Rey's criticisms. Instead of VoC I have urged "the modest explanation": that linguistic intuitions are ordinary empirical theory-laden central-processor responses to linguistic phenomena. If this is right, it has serious methodological consequences. First, the evidential focus in linguistics should move away from the indirect evidence provided by intuitions to the more direct evidence provided by usage. Second, insofar as the evidence of intuitions is sought, there will seldom be good reason for preferring those of the folk over those of experts about language. Third, what goes for linguistics goes for the philosophy of language. But here the needed change is more drastic, because philosophers do not seem even to acknowledge the evidence available from usage. Finally, the focus on usage should yield a place for experimental work in, for example, the theory of reference.
30 January 2013
Assistant Professor, Department of Communicative Disorders and Sciences, University at Buffalo
Acquisition of tense marking in English-speaking children with cochlear implants: the role of speech perception and working memory
Young, typically developing children who speak English tend to use tense markers inconsistently (e.g., he walked; Wilson, 2003). The surface hypothesis (Leonard, 1989) suggests that young children produce tense markers inconsistently partly because these morphemes are acoustically insalient in English: they have relatively short duration and/or weak energy compared to the surrounding content words. In addition, processing tense markers requires multiple mental operations. Because tense markers are brief and there is little time before new material appears, young children may not always be able to process tense markers completely in real-time speech. To fully acquire these morphemes, young children need many exposures to speech materials containing tense markers. Before full acquisition is achieved, they will produce tense markers inconsistently.
Acquisition of tense markers may be even more challenging for children with profound deafness who receive cochlear implants (CIs), because of constraints on their speech perception and information-processing abilities. The speech input that children with CIs perceive is a degraded, electrical signal rather than an acoustic one (Wilson, 2006). In addition, these children do not receive significant auditory input of spoken language until they are implanted, which typically does not happen until they are at least 12 months of age. Because of this early deprivation of auditory input, cortical development in the typical auditory cortex locations is compromised as the visual system takes over auditory cortical space. The perception of tense markers in children with CIs may therefore be negatively affected both by the electrical input that they receive and by this cortical compromise. Furthermore, recent studies have shown that children with CIs tend to have limited information-processing skills because of the early deprivation of auditory input. Thus, tense markers are more likely to be processed incompletely by children with CIs than by typical children. Given these perceptual and processing constraints, children with CIs are likely to show delayed acquisition of tense markers.
In this talk, we will first present two studies that examined the production of tense markers in children with CIs and in typical peers matched for hearing experience. Together, the studies show that, within groups, percent correct of tense marking changed significantly over time in both children with CIs and typical children. Across groups, children with CIs were significantly less accurate in tense marking than typical children at four and five years post-implantation. In addition, tense-marking performance in children with CIs was correlated with their speech perception skills at earlier time points. These studies, however, did not directly examine how well children with CIs can perceive tense markers, nor did they explore the role of working memory in the acquisition of tense marking. In the second part of this talk, we will present a series of new studies that directly examine the role of speech perception and working memory in the acquisition of tense marking.
13 February 2013
Assistant Professor, Department of Linguistics, University of Maryland
Predictive facilitation in lexical processing: mechanisms and neural implementation
Although many language researchers now assume an important role for predictive mechanisms in comprehension, it has been surprisingly difficult to show that early contextual facilitation effects, such as the N400 effect in ERP, are due to prediction rather than passive long-term memory processes such as spreading activation. Determining the cortical regions supporting predictive facilitation has also been challenging, because contextual manipulations can impact multiple processing stages that are not separable in measures with low temporal resolution such as fMRI. In the studies I present here, we addressed these challenges by using a semantic priming paradigm in which manipulating the proportion of related pairs allows us to modulate prediction while holding contextual association constant. In order to determine which cortical regions contribute to early predictive facilitation, we used a within-subjects multimodal neuroimaging design (EEG-MEG and fMRI) that supplements the high spatial resolution of fMRI with the excellent temporal resolution of EEG-MEG. The results show not only that prediction contributes significantly to the N400 priming effect in ERP, but that the effect of spreading activation is surprisingly small. fMRI and MEG source localization indicate that early effects of predictive facilitation are due to reduced activity in left anterior temporal cortex, a region variously implicated by previous literature in semantic storage and semantic combination. I will argue that together these results provide new evidence for predictive facilitation in lexical processing, and suggest that facilitated conceptual access is an important contributor to the N400 effect.
6 March 2013
Professor of Music Theory at the Eastman School of Music, University of Rochester
Statistical Learning, Absolute Pitch, and Pitch Memory: Effects of Training and Tone Language
While tests of absolute pitch (AP) typically ask trained musicians to assign note names to pitches heard in isolation, evidence of accurate pitch encoding and retrieval has been demonstrated in some nonmusicians when recalling familiar music, even though they are unable to label the pitches or key (an ability we call "incipient AP"). This presentation reviews the literature on AP acquisition, including theories of innateness, learning during a critical period, genetic predisposition, and tone language as facilitator of learning. Then it turns to experiments by the author and collaborator Elissa Newport that are designed to identify incipient-AP listeners. In a statistical learning paradigm, participants are familiarized with a melody that contains structured internal repetitions of three-note pitch patterns; at test, listeners discriminate these implicitly learned patterns from minor-third transpositions and from other patterns heard with lower probabilities in the familiarization melody. We compare performance of AP and non-AP listeners (musicians and nonmusicians) on this test, as well as other tests of short-term memory and fine pitch discrimination. Results show that AP listeners, including the incipient-AP group, show enhanced auditory abilities: more acute pitch perception and better short-term memory for pitch sequences. Native tone language speakers as a group (AP and non-AP) show poorer pitch acuity than other listeners. The presentation concludes by discussing some implications of these results for theories of AP acquisition.
20 March 2013
Professor, Department of Philosophy, University at Buffalo
On the Ontology of Massively Planned Social Action
Philosophers who study the phenomena of social agency have tended to base their theories on small-scale actions involving individuals who share common goals. In his "Massively Shared Agency," the legal theorist Scott Shapiro shows how authorities are needed to bring about the meshing of shared plans of large numbers of individuals over time. Shapiro conceives the need for the meshing of plans as one important element in an understanding of the nature of law. Drawing our examples from musical performance, urban design, and organized warfare, we shall build on Shapiro's ideas to explore the role of documents as vehicles, not merely for the communication of plans across different levels in an organizational hierarchy, but also for the correction and enhancement of such plans over time.
10 April 2013
Assistant Professor, Brain & Cognitive Sciences, University of Rochester
Math, monkeys, and the developing brain
Thirty thousand years ago, humans kept track of numerical quantities by carving slashes on fragments of bone. There were no numerals and the counting system as we know it did not exist. What cognitive abilities enabled our ancestors to record tallies and conceive of counting in the first place? And, what is the physical substrate in the brain that makes quantitative thinking possible? Our research aims to discover the origins and organization of numerical concepts in humans using clues from child development, the organization of the human brain, and animal cognition. We argue that there is continuity between ancient and modern human numerical concepts in terms of their fundamental cognitive and neural processes.
18 September 2013
Assistant Professor, Brain and Cognitive Sciences, University of Rochester
Structured statistical learning in language acquisition
I'll describe my computational and experimental work on language learning. I'll discuss two primary lines of research that both focus on how learners might discover abstract aspects of language, including number words and quantifiers. In each domain, I'll argue that the key aspects of meaning are not directly observable by learners, and that the inductive challenge this poses is best solved by statistically well-formed models that operate over the domain of rich semantic representations. I'll show how such learning models can solve acquisition problems in theory, accurately describe inferences made by children and adults, and lead to compelling developmental predictions. I'll then discuss current and ongoing experiments with infants and toddlers testing the core assumptions of these learning models.
25 September 2013
Assistant Research Professor, Arizona State University
The role of conflict in implicit learning
The conflict-monitoring hypothesis states that response conflict signals the anterior cingulate to recruit executive processes residing in the dorsolateral prefrontal cortex to optimize performance. I have extended this theory to explain context-specific control settings. Focusing on computational modeling and two ERP studies (with a touch of fMRI data), I will discuss the characteristics of situations that lead to this type of control, and show how implicitly learned contingencies in the environment rapidly affect our actions and behavior.
9 October 2013
NIH Post-doctoral fellow, Psychology Department at the University of Illinois at Urbana-Champaign
Prediction, error, and adaptation during human language comprehension
A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. In this talk I focus on a specific instance of non-stationarity in the environment: linguistic variability. Language is variable in the sense that language is used differently depending on facts about individual speakers (or authors) as well as on various facts about the context of language use. This variability poses computational challenges to the processes underlying sentence comprehension. In this talk I provide evidence that humans respond to linguistic variability by continuously adapting to and learning the statistical regularities of novel linguistic environments. In a series of self-paced reading experiments I demonstrate that, in the face of a novel environment whose statistics violate subjects' expectations, subjects adapt in order to allow their linguistic expectations to converge towards the statistics of the current environment. The work reported here takes a step toward synthesizing three lines of research in psycholinguistics that have previously proceeded largely in parallel: (1) experience- or expectation-based processing, (2) syntactic priming, and (3) statistical learning in language.
23 October 2013
Professor, Department of Psychology, Princeton Neuroscience Institute, Princeton University
Consciousness and the Social Brain
What is consciousness and how can a brain, a mere collection of neurons, create it? In my lab we are developing a theoretical and experimental approach to these questions. The theory begins with our ability to attribute awareness to others. The human brain has a complex circuitry that allows it to be socially intelligent. This social machinery has only just begun to be studied in detail by neuroscientists. One function of this circuitry is to attribute a state of awareness to others: to build the construct that person Y is aware of thing X. In our hypothesis, the machinery that attributes awareness to others also helps attribute the property to oneself. Evidence from the clinic shows that when the same brain areas are damaged, people suffer from a catastrophic disruption in their own awareness of objects and events. The theory also draws on the relationship between awareness (the subjective experience that human brains report having) and attention (the brain's data-handling method of focusing resources on a limited set of signals). We suggest that awareness is an internal model or 'cartoon sketch' that the brain constructs of its own state of attention. Through these perspectives we hope to understand awareness from a rational perspective as part of the information-processing toolkit used by brains. One possible ultimate benefit from this type of research, perhaps decades in the future, is an artificial intelligence that has the human-like social capability to attribute awareness to itself and to others - a machine that understands what it means to have a mind. These topics are discussed in greater detail in my recent book, Consciousness and the Social Brain.
30 October 2013
CCS Business Meeting
20 November 2013
Professor, Department of History and Philosophy of Science, Director Cognitive Science Program; Member of Center for the Integrative Study of Animal Behavior, Indiana University
Learning and development, or learning as development?
The scientific study of animal cognition involves numerous attempts to show that animals can succeed at various "high-level" tasks, such as mirror self-recognition, imitation, the false belief task, tool use, and symbolic communication. Studies of these capacities cross a wide range of species, for example primates, cetaceans, dogs, elephants, parrots, and members of the corvid family. These pursuits often seem like an exercise in trophy hunting by scientists eager to show that their favorite species can (too!) do what another species can do. The specific tasks investigated usually take human competency as the model to be emulated and it is common to see the cognitive capacities of animals likened to those of human children of various ages, as if to locate the animals with respect to particular benchmarks on the developmental trajectory from neonates to human adult cognitive competency. Although practically everyone acknowledges that these comparisons and the underlying picture of development are too simplistic, few scientists study cognitive development in animals systematically. On the one hand, fewer than 5% of the 400 articles published in the journal Animal Cognition since it was established in 1999 are about cognitive development. On the other hand, while many papers about psychological development in animals get published within the field of developmental psychobiology, nearly all the scientists doing this work operate within the framework of associationist learning theory. Their skepticism about the meaning of cognitive concepts drives most developmental psychobiologists to actively eschew cognitive vocabulary (see, e.g., Wasserman & Blumberg 2006). 
Meanwhile, some developmentally savvy cognitive comparative psychologists find associationist principles insufficient to account for their data (e.g., McGonigle & Chalmers 2002), and they have attempted to close the gap between associationism and language-centered models of cognition that are implausibly applied to animals. Thus, they have introduced theoretical notions such as "private codes" (McGonigle & Chalmers 2006) and "emergents" (Rumbaugh et al. 2008) to try to account for the relationship between learning and development. In this paper, I critically examine these efforts to integrate learning and development in the field of animal cognition.
4 December 2013
Assistant Professor, Department of Linguistics, University at Buffalo
On the nature of Subject Islands
Subject phrases are traditionally seen as syntactic environments that prohibit long-distance dependencies. For example, a sentence like Who did [the sister of _] meet John? is less acceptable than a sentence with a gap in the complement phrase instead, as in Who did John meet [the sister of _]? Most research assumes that this effect is due to a universal syntactic constraint, called the Subject Condition (Huang 1982, Chomsky 1986, Rizzi 1990, Lasnik and Saito 1992, Takahashi 1994, Uriagereka 1999, Phillips 2006, Jurka et al. 2011). In this talk I discuss mounting evidence suggesting that the Subject Condition is much less pervasive than usually assumed, and I show that no previous account can explain the full range of facts, including pragmatic accounts (Erteschik-Shir and Lappin 1979, Van Valin 1986, Erteschik-Shir 2006) and performance-based approaches (Kluender 2004, Hofmeister and Sag 2010). I argue that subject phrases (phrasal or clausal, finite or otherwise, derived or not) are by no means impermeable to extraction. Moreover, I provide new empirical evidence indicating that subject island violations can satiate under certain conditions, contra Sprouse (2009) and Crawford (2011). Drawing from Engdahl (1983) and others, I conjecture that subject island effects are due to the violation of frequency-based parsing expectations that are ultimately due to pragmatic and processing factors.