Day and Time: Wednesdays, 2:00-4:00pm
Location: 280 Park Hall, North Campus (unless otherwise noted)
The Cognitive Science Colloquia Series is free and open to the public. To receive email announcements of each event, please subscribe to one of the Cognitive Science mailing lists.
Speaker: Matthew Bolton
Assistant Professor, Industrial and Systems Engineering, University at Buffalo
Breakdowns in complex systems frequently occur as a result of system elements interacting in ways unanticipated by designers. In systems with human operators, human behavior (both normative and erroneous) is often associated with such failures. These problems are difficult to predict because they are emergent features of the complex interactions that occur between the human operator and the other elements of the system. Model checking is a technology that uses automated proof techniques to discover failures in formal models of computer hardware and software by considering all of the possible interactions. In this talk I will describe novel methods and tools I have developed to allow systems engineers to use task analytic models of human behavior, taxonomies of erroneous human behavior, formal system modeling, and model checking to predict when normative and potentially unanticipated erroneous human-system interactions can contribute to system failure. I will demonstrate how these techniques can be used in the analysis and design of a realistic medical system: a patient-controlled analgesia pump.
Speaker: Brendan Johns
Assistant Professor, Communicative Disorders and Sciences, University at Buffalo
The collection of large text sources has revolutionized the field of natural language processing, and has led to the development of many different models that are capable of extracting sophisticated semantic representations of words based on the statistical redundancies contained within natural language. However, these models are trained on text bases that are a random collection of language written by many different authors, designed to represent one's average experience with language. This talk will focus on two main issues: 1) how variable the usage of language is across individuals, and 2) how this variability can be used to optimize models of natural language processing. It will be shown that by optimizing models of natural language based on the same lexical variability that humans experience, it is possible to attain benchmark fits to a wide variety of lexical tasks.
Speaker: John Pate
Assistant Professor, Linguistics, University at Buffalo
Children learn the syntax of their native language without explicit instruction. What learning biases do they rely on to accomplish this amazing feat? In this talk, I will discuss how probabilistic models from the field of natural language processing can be used to make proposed learning biases explicit, and also to compare different sets of learning biases in terms of their practical utility for learning syntax from strings of words. I will conclude by considering how to leverage recent advances in deep learning to build models that capture long-distance dependencies.
Pate, John K. and Mark Johnson (2016). Grammar induction from (lots of) words alone. In Proceedings of COLING, pp. 10.
Speaker: Nazbanou Nozari
Assistant Professor, Neurology and Cognitive Science, Johns Hopkins
A large body of research has explored the nature of representations in the language production system. In comparison, little is known about how language production is monitored and regulated. I will argue that the production system is monitored and controlled in a fashion similar to other cognitive and motor systems, and endorse this claim with behavioral data from individuals with aphasia and typically-developing children, as well as electrophysiological data from neurotypical adult speakers. At the same time, I will present evidence showing that monitoring and the ensuing control are based directly on the information within specific parts of a cognitive system. Thus, to the degree that such information is specific to a domain or sub-domain, so is the cognitive control that regulates that part of the system. In summary, I will endorse a monitoring-control model of language production that is domain-general in its computational principles and neural correlates, but operates specifically to meet the needs of the language production system.
Speaker: Wayne Wu
Associate Professor and Associate Director of the Center for the Neural Basis of Cognition, Carnegie Mellon University
This talk discusses the nature of attention and its interaction with cognition. A theory of attention that conforms with experimental practice and coheres with multiple levels of analysis is presented. The theory is then applied to clarify empirical debates about the nature of consciousness (inattentional blindness and overflow) and the relation between cognition and perception.
Speaker: Emily Morgan
Postdoctoral Researcher, Psychology, Tufts University
The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. In this talk I ask how and why language achieves a balance between these two systems - generative and item-specific - from both the synchronic and diachronic perspectives. Specifically, I focus on the case of binomial expressions of the form "X and Y", whose word order preferences (e.g. bread and butter/#butter and bread) are potentially determined by both generative and item-specific knowledge. I show that ordering preferences for these expressions indeed arise in part from violable generative constraints on the phonological, semantic, and lexical properties of the constituent words, but that expressions also have their own idiosyncratic preferences. I argue that both the way these preferences manifest diachronically and the way they are processed synchronically are constrained by the fact that speakers have finite experience with any given expression: in other words, the ability to learn and transmit idiosyncratic preferences for an expression is constrained by how frequently it is used. The finiteness of the input leads to a rational solution in which processing of these expressions relies gradiently upon both generative and item-specific knowledge as a function of expression frequency, with lower frequency items primarily recruiting generative knowledge and higher frequency items relying more upon item-specific knowledge. This gradient processing in turn combines with the bottleneck effect of cultural transmission to perpetuate across generations a frequency-dependent balance of compositionality and idiosyncrasy in the language, in which higher frequency expressions are gradiently more idiosyncratic. I provide evidence for this gradient, frequency-dependent trade-off of generativity and item-specificity in both language processing and language structure using behavioral experiments, corpus data, and computational modeling.
Morgan, Emily and Roger Levy (2016). Abstract knowledge versus direct experience in processing of binomial expressions. Cognition, 157:384-402.
Background readings for each lecture are available to UB faculty and students on UB Learns. To access, please log in to UB Learns and select "Center for Cognitive Science" → "Course Documents" → "Background Readings for (Semester/Year)."
If you are affiliated with UB and do not have access to the UBLearns website, please contact Gail Mauner, director of the Center for Cognitive Science.
Since 1994, the Center has hosted an annual series of Distinguished Speakers in Cognitive Science. This series brings some of the preeminent scholars in cognitive science to UB. Each speaker gives a major address that is widely advertised at UB in both faculty and student venues, through local Buffalo media to attract members of the public, and at neighboring colleges and universities.
In addition, each speaker gives a more specialized talk to the UB Cognitive Science community and takes part in an evening discussion at the home of a cognitive science faculty member with especially interested participants.
Past Distinguished Speakers:
Thursday, October 22, 2015
Student Union Theater, North Campus
"How We Integrate New Information with Existing Knowledge: A Complementary Learning Systems Analysis"
Some years ago, two colleagues and I described a complementary learning systems theory of the roles of hippocampus and neocortex in memory (McClelland, McNaughton, & O'Reilly, 1995). Our theory postulated that rapid integration of arbitrary new information into neocortex is avoided to prevent catastrophic interference with existing knowledge structures. In this talk, I examine evidence from recent studies that new information can sometimes be integrated rapidly into the neocortex, challenging our theory as previously presented. I present new simulations based on our theory, showing that new information that is consistent with knowledge previously acquired by our putatively cortex-like artificial neural network can be learned rapidly without interfering with existing knowledge. It is when inconsistent new knowledge is acquired quickly that catastrophic interference ensues. These results match the pattern observed in the recent studies, and provide a mechanism for understanding when and how rapid integration of new information occurs.
McClelland, J. L. (2013). Incorporating rapid neocortical learning of new schema-consistent information into complementary learning systems theory. Journal of Experimental Psychology: General, 142(4), 1190-1210. doi: 10.1037/a0033812.
James McClelland is the Lucie Stern Professor in the Social Sciences and Director of the Center for Mind, Brain and Computation, Department of Psychology at Stanford University.
Thursday, March 12, 2015
120 Clemens Hall, North Campus
"The Dynamics of Learning and Attention in Infants and Adults"
Over the past two decades, there has been substantial interest in a powerful implicit learning mechanism - often called statistical learning - that enables infants, children, adults and non-humans to extract patterned information by mere exposure to structured input. A key question is how learners go beyond the input they receive to make implicit inferences about novel exemplars they have never encountered - i.e., when to generalize and when to treat new items as exceptions. A corollary of this question is how learners adapt to a changing environment - i.e., how do they build a model of the world that contains more than a single structure? A second key question is how naive learners make implicit inferences about which features of the input are most likely to be informative - how to guide attention among a large set of alternatives. I will summarize a series of experiments with infants, children, and adults that address these two questions and highlight additional challenges faced by learners in natural (complex) environments.
Richard N. Aslin is the William R. Kenan Professor in the Department of Brain and Cognitive Sciences and the Center for Visual Science at the University of Rochester.
Wednesday, February 29, 2012
280 Park Hall, North Campus
"The Origin of Concepts: A Case Study of Natural Number"
A theory of conceptual development must specify the innate representational primitives, must characterize the ways in which the initial state differs from the adult state, and must characterize the processes through which one is transformed into the other. I will defend three claims: 1) With respect to the initial state, the innate stock of primitives is not limited to sensory, perceptual, or sensory-motor representations; rather, there are also innate conceptual representations. 2) With respect to developmental change, conceptual development consists of episodes of qualitative change, resulting in systems of representation that are more powerful than, and sometimes incommensurable with, those from which they are built. 3) With respect to a learning mechanism that achieves conceptual discontinuity, I offer Quinian bootstrapping.
Carey, Susan (2011). Précis of The Origin of Concepts. Behavioral and Brain Sciences, 34(3), 113-124.
Thursday, March 1, 2012
Student Union Theater, North Campus
"Infants' Rich Representation of the Social World"
In this talk I present a case study of infants' representations of people, intentional agency, social relations, and morality to illustrate the methods and arguments in favor of claims that human infants are endowed with rich innate representational resources.
Wednesday, April 7, 2010
280 Park Hall, North Campus
"Event Knowledge and Sentence Processing: A Blast from the Past"
Research on language processing has often focused on how language users comprehend and produce sentences. Although fluent use obviously requires integrating information across multiple sentences, the syntactic and semantic processes necessary for comprehending sentences have (with some important exceptions) largely been seen as self-contained. That is, it was assumed that these processes were largely insensitive to factors lying outside the current sentence's boundaries. This assumption is not universally shared, however, and remains controversial. In this talk, I shall present a series of experiments that suggest that knowledge of events and situations—often arising from broader context—plays a critical role in many intrasentential phenomena often thought of as purely syntactic or semantic. The data include findings from a range of methodologies, including reaction time, eye tracking (both in reading and in the visual world paradigm), and event-related potentials. The timing of these effects implies that sentence processing draws in a direct and immediate way on a comprehender's knowledge of events and situations (or, the "blast from the past", on knowledge of scripts, schemas, and frames).
Thursday, 8 April 2010
Student Union Theater, North Campus
"Words and Dinosaur Bones: Knowing About Words Without a Mental Dictionary"
For many years, language researchers were not overly interested in words. After all, words vary across languages in mostly random and unsystematic ways. Language learners simply had to learn them by rote. Words were uninteresting. Rules were where the exciting action lay, and considerable effort was invested in trying to figure out what the rules of languages are, whether they come from a universal toolbox, and how language learners could acquire them. Over the past decade, however, there has been increasing interest in the lexicon as the locus of users' language knowledge. There is now a considerable body of linguistic and psycholinguistic research that has led many researchers to conclude that the mental lexicon contains richly detailed information about both general and specific aspects of language. Words are in again, it seems. But this very richness of lexical information poses representational challenges for traditional views of the lexicon. In this talk, I will present a body of psycholinguistic data, involving both behavioral and event-related-potential experiments, suggesting that event knowledge plays an immediate and critical role in the expectancies that comprehenders generate as they process sentences. I argue that this knowledge is, on the one hand, precisely the sort of stuff that on standard grounds one would want to incorporate in the lexicon but, on the other hand, cannot reasonably be placed there. I suggest that, in fact, lexical knowledge (which I take to be real) may not properly be encoded in a mental lexicon, but through a very different computational mechanism.
Elman, Jeffrey L. (2009), "On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon," Cognitive Science 33(4): 547–582.
Thursday, April 28, 2005
Baird Concert Hall, North Campus
"Why We're So Smart"
Human cognitive abilities are remarkable, and even more remarkable is the rapidity with which children develop cognitive insight. How does this insight arise? A pervasive view in cognitive development is that these rapid gains can only be explained by assuming that infants begin with substantial amounts of innate knowledge. In this talk I propose an alternative approach, centered on mechanisms of human learning. I suggest two powerful forces that contribute to human learning and reasoning ability: (1) analogical processing; and (2) the acquisition of relational language. I will present evidence that the structure-mapping processes that occur during analogy and similarity are a core mechanism by which abstract knowledge arises from experience. Our studies of learning in adults and children show that analogical comparison processes foster learning in several ways: by aligning common relational structure, by suggesting inferences between situations, by focusing attention on relevant differences, and by inviting relational abstractions.
A further contributor to human learning and reasoning is the acquisition of relational language. Relational language provides labels that preserve and systematize the relations discovered through comparison processes. It also acts to invite analogical comparisons that reveal common structure. In sum, I suggest that mutual bootstrapping between structure-mapping processes and relational language is a major contributor to human cognition.
Dedre Gentner’s (Department of Psychology, Cognitive Science Program, Northwestern University) research is on the psychology of learning and reasoning and the development of cognition and language. Her early work on causal mental models and on the development of word meaning has been influential in cognitive research. Her most important contribution is the structure-mapping theory of analogy and similarity and its implications, including a computational model of similarity processing; a theoretical framework for analogy and metaphor; and the evidence for a dissociation between the kind of similarity that governs memory retrieval and the kind of similarity that governs on-line mapping and inference. In her developmental work she has proposed a relational shift in children’s similarity processing and has found evidence that this shift is knowledge-driven, rather than maturational. She has also proposed and tested a progressive alignment mechanism whereby comparison processes in ordinary experience can yield theoretical insight.
In language learning, Gentner’s hypothesis of a language-universal advantage for nouns in children’s early word learning has engendered considerable research. Her recent work unites analogical thinking and language learning and investigates possible interactions between language and cognition. Her theoretical and empirical work provides evidence that relational language has a formative role in the development of relational thought. She is also investigating the hypothesis that analogical processes are integral to language acquisition and use.
Thursday, April 22, 2004
Screening Room, Center for the Arts, North Campus
"Conceptual Perspective of Speaker Choices"
Adult speakers choose among perspectives when they talk; they use different terms to pick out different perspectives (e.g., the dog, our pet, that animal). The perspectives adult speakers adopt affect how they both categorize and remember events. Yet studies of lexical acquisition in young children have often proposed a single-perspective view that assumes children can at first use only one term for talking about a referent object or event: a cat can only be called "cat", not "animal" or "Siamese" as well. But since children are exposed to multiple perspectives by the adults around them, it seems reasonable that they too should adopt alternative perspectives from an early age--the many-perspectives view. Moreover, adults offer children pragmatic directions about the meanings of new words and hence about new perspectives. Evidence for this many-perspectives account comes from a range of sources: children spontaneously use more than one term for the same object; they construct novel words to mark alternate perspectives; they shift perspective when asked; and they readily learn multiple labels for the same referent.
Eve V. Clark, Professor of Linguistics & Symbolic Systems at Stanford University, grew up and was educated in the UK and France. After completing her PhD in Linguistics with John Lyons at Edinburgh, she worked on the Language Universals Project at Stanford with Joseph Greenberg, and two years later, joined the Linguistics Department at Stanford University. She has taught there since, aside from several years 'off' in the UK and the Netherlands. She has been a Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford (1979-1980) and a Guggenheim Fellow (1983-1984); she is a Fellow of the American Association for the Advancement of Science, and a member of the Royal Netherlands Academy of Sciences. Her research has focused on first language acquisition, in particular on the acquisition of meaning, where she has done extensive observational and experimental research; she has also worked on the acquisition and use of word-formation, with detailed comparative studies of English and Hebrew in children and adults, and she has explored the pragmatics of word-coinage, applying the principles of conventionality and contrast to language use as well as to the process of acquisition. In her most recent work, she has been looking at the kinds of information adults offer children about unfamiliar words and their meanings, at the amount of negative evidence children may receive in the course of conversation, and at the relative contributions of gesture and gaze vs. language in adult exchanges with one- and two-year-olds. She has published numerous articles and chapters in linguistics and psycholinguistics. She is co-author of Psychology and Language (1977), and author of The Ontogenesis of Meaning (1979), Acquisition of Romance, with special reference to French (1985), The Lexicon in Acquisition (1993), and, most recently, First Language Acquisition (2003).
With the co-sponsorship of the Department of Psychology; the Department of Linguistics; the Department of Communicative Disorders and Sciences; the Department of Philosophy; and the Office of the UB Vice President for Research.
Tuesday, April 15, 2003
Slee Concert Hall, North Campus
"How We Reason"
A long-standing tradition postulates that human thinking is rational because it is founded on the 'laws of thought'. This talk argues to the contrary that reasoning is not based on such laws, but on the ability to envisage possibilities. A conclusion is judged to be valid if it holds in all such MENTAL MODELS of the given information, and probable if it holds in most of them. This theory is based on three main principles: each mental model represents a possibility; the structure of models corresponds to the structure of what they represent; and models normally represent only what is true. The talk outlines the evidence corroborating the theory from behavioral and brain-imaging studies. Inferences from one model are easier than inferences from multiple models. Knowledge affects the process of reasoning. And, if falsity matters, reasoners commit systematic fallacies. Humans are not always rational, but they are not intrinsically irrational, either.
Johnson-Laird was born in Yorkshire, England. He left school at the age of 15 and spent ten years in a variety of occupations until he went to University College, London to read psychology. He later gained his Ph.D. there under the supervision of Peter Wason, and he joined the faculty in 1966. In 1971, he was a visiting member of the Institute of Advanced Study, Princeton, where he began a collaboration with George A. Miller. Subsequently, he held positions at the University of Sussex (1973-1981) and at the Medical Research Council's Applied Psychology Unit (1981-1989) in Cambridge, where he was also a Fellow of Darwin College. He returned to Princeton in 1989 to be a member of the faculty at the University, where he is the Stuart Professor of Psychology. He has published ten books and over two hundred papers. He is married and has two children. In his spare time, if he had any, he would play modern jazz piano.
With the co-sponsorship of the Department of Psychology; the Department of Computer Science and Engineering; the Department of Philosophy; the Samuel P. Capen Chair of Anthropology; and the C.S. Peirce Professorship in American Philosophy.
Thursday, March 14, 2002
Slee Concert Hall, North Campus
"Possible Stages in the Evolution of the Language Faculty"
The human ability to learn language is a human cognitive specialization, encoded (in some unknown way) in our genes. The evident adaptivity of linguistic communication suggests that this capacity arose through natural selection. It is therefore a challenge for linguistics to find a plausible route by which the features of language could have evolved step by step. I will propose such a route, using evidence from child and adult language acquisition, from aphasia, from pidgin and creole languages, from "language"-trained apes, and from "fossils" of earlier forms of the language capacity still found in modern-day languages.
Ray Jackendoff is Professor of Linguistics at Brandeis University, where he has taught since 1971. He is a Fellow of the American Academy of Arts and Sciences, President-Elect of the Linguistic Society of America, and past President of the Society for Philosophy and Psychology. He is author of Semantics and Cognition, Languages of the Mind, Consciousness and the Computational Mind, and (with Fred Lerdahl) A Generative Theory of Tonal Music. His most recent book, Foundations of Language, was published by Oxford University Press in 2003.
With the co-sponsorship of the Department of Anthropology; the Department of Computer Science and Engineering; the English Language Institute; the Department of Linguistics; the Department of Philosophy; the Samuel P. Capen Chair of Anthropology; and the Geographic Information Science (IGERT).
Tuesday, April 10, 2001
Screening Room, North Campus
"Human Brains: The Difference that Makes the Difference"
Within the divergent structures of human brains lie the clues to our uniquely human abilities, from language to esthetic appreciation. By exploring how human brains structurally diverge from patterns common to other primate and mammal brains we can find hints about which brain differences make the difference. But there are many differences. How do we know which ones matter most? In this presentation I will review some of the evidence concerning these differences, the mechanisms underlying them, and their implications for language functions. By investigating the ways that neurodevelopmental processes have been modified in humans, I will show that our unique abilities have not been produced by the addition of new structures, but rather from subtle modifications of existing primate brain systems. Looking more closely at some of these modifications to the "standard primate brain plan," we find that they correspond closely with some of the most unusual neural demands that are imposed by language processing. Three of these neural adaptations to language are explored: the neural bases for symbolic, vocal, and syntactic abilities. Using these insights I will suggest ways to clear up some confusions about what must be innate and why (or why not), and suggest some unexpected new ways to think about how languages and brains have co-evolved in our prehistory.
With the co-sponsorship of the Department of English; the Department of Anthropology; the Department of Psychology; the Department of Computer Science and Engineering; the English Language Institute; the Department of Linguistics; the Department of Philosophy; and the Cognitive Science Graduate Student Association.
Tuesday, April 4, 2000
Knox Hall Room 20, North Campus
"Reversing the Rainbow: Reflections of Color and Consciousness"
Wednesday, April 5, 2000
280 Park Hall, North Campus
"Rethinking Perceptual Organization"
Gestalt principles of perceptual organization will be reconsidered from a modern perspective. Several new principles of grouping will be demonstrated, and a new method to study them objectively will be presented.
Stephen E. Palmer is Professor of Psychology and the Director of the Institute for Cognitive Studies at the University of California, Berkeley. He is the recent author of the textbook Vision Science: From Photons to Phenomenology. Other works that he has authored or co-authored include: "Of Color and Consciousness", "Rethinking Perceptual Organization", "Reference Frames in the Perception of Spatial Structure", and "Modern Theories of Gestalt Perception". He has been the editor of the journal Cognitive Psychology and is editor of the Series in Cognitive Psychology, Bradford Books/MIT Press. He is the recipient of numerous honors and awards.
With the co-sponsorship of the Department of Psychology; the Department of Communicative Disorders and Sciences; the Department of English; the English Language Institute; and the School of Information Studies.
Thursday, April 1, 1999
280 Park Hall, North Campus
"Educating the Human Brain: A View from the Inside"
The methods of neuroimaging allow examination of the normal human brain in the process of acquiring and executing such high-level skills as reading, calculating, and retrieving facts. By combining high-density electrical recording with measures of changes in cerebral blood flow, we can examine the anatomy of these skills in real time. Some skills are acquired very slowly. The area of the brain that synthesizes visual letters into a unified word develops very slowly over years of acquiring the skill of reading. Once developed, it is resistant to change. On the other hand, semantic information about words is acquired rapidly and is easily automated. Surprisingly, access to the number line in mental calculation appears similar in five-year-olds and adults. Acquisition of new information can influence performance either implicitly, without the subject's awareness, or explicitly, through deliberate reference to past experience. In our studies we observe the time course of the operation of these conscious and unconscious learning mechanisms.
"Cognitive Neuroscience and Brain Plasticity"
Cognitive neuroscience has uncovered a vast array of brain mechanisms related to such psychological phenomena as strategies, priming, item learning, concept learning, and development. Research will undoubtedly refine and enlarge our current views. We can discuss possible research strategies we are taking in our Institute, and consider how these findings might influence cognitive science and the strategies for using these new findings in teaching, rehabilitation, and therapy.
Friday, April 2, 1999
225 Natural Sciences Complex, North Campus
"Development of Attentional Networks for Regulating Thought, Feeling, and Behavior"
A major goal of cognitive neuroscience is to link behavior to neural systems. Much of our behavior depends upon the direction of attention. In laboratory tasks, when we choose among conflicting stimuli, monitor and correct errors, or respond to novel events, there is activity in the frontal midline (anterior cingulate) that appears to serve the function of regulating information flow. This same general area responds both to the subjective experience of pain and to individual differences in emotional awareness. In early infancy a major behavioral problem is the control of distress, which we believe involves the anterior cingulate. Three- to five-year-olds, learning to regulate information flow in cognitive tasks, may activate areas of the cingulate because parts of this brain area are already involved in self-regulation. Congruent with this view, tasks involving the control of conflict are correlated with the ability to inhibit responses and with parental reports of attentional control and self-regulation.
Michael Posner (email@example.com) is Founding Director of the Sackler Institute of Human Brain Development at the Cornell Medical College in New York.
With the co-sponsorship of the Department of Linguistics; the Department of Philosophy; the Department of Psychology; the English Language Institute; and the School of Information and Library Sciences.
Wednesday, April 8, 1998
280 Park Hall, North Campus
Thursday, April 9, 1998
121 Cooke Hall, North Campus
"Constructing Spatial Semantic Categories: A Crosslinguistic Perspective"
Semantic development is often viewed as a process of mapping between linguistic forms encountered in the speech input and concepts established through universally shared patterns of cognitive development; e.g., children learn the word IN to express a fundamental spatial notion of "containment". This characterization is challenged by recent crosslinguistic research. Evidence will be presented that languages differ strikingly in their semantic structuring of even such basic conceptual domains as space, and that children begin to home in on language-specific classification principles remarkably early, by two years of age or before. Although children's early word meanings are not completely nonlinguistic, neither are they completely shaped by language; there is a complex interaction between the influence of the input language and learners' nonlinguistic conceptual predispositions. Mechanisms that might underlie this interaction will be considered.
Melissa Bowerman has researched and published widely on topics in first language acquisition ranging from syntax and morphology to word meaning and phonology. Recurrent themes in her work include the relationship between conceptual development and language development, the use of crosslinguistic comparisons to disentangle what is universal and possibly innate from what is learned, the nature of children's early linguistic rules, and the potential of information about language acquisition to help decide among alternative theoretical approaches to language structure. Her most recent work focuses on the acquisition of argument structure alternations, and on the classification of topological spatial relationships by languages and by language learners.
Monday, April 218, 1997
225 Natural Sciences Building, North Campus
"Learning and Interactive Multimedia"
Educational systems are in need of radical change. One way to achieve this change is to create new teaching devices. By attempting to build intelligent computers, we have learned a great deal about how people learn. By putting this knowledge about human learning to use in designing computer-based educational programs we can create effective teaching devices for the future. The key aspect of computer-based learning environments is their ability to allow students to learn by doing. Computers can place students in new, different and interesting environments where they can direct their own learning, following their own interests and achieving goals they set for themselves. Most importantly, these environments can include large libraries of stories, on video, told by experts in particular fields. The student can hear these stories, and learn from them, at just the moment in their own work when they most need help. The programs should be designed around particular learning goals of students, creating scenarios where students are motivated to accomplish tasks that lead to successfully attaining the goals in question. Such goal based scenarios can be used for any subject matter, for any age student, in schools or in business. These programs change what students are learning as well as how they are learning. Students learn how to actually do things rather than simply memorizing isolated factual material.
With the co-sponsorship of the Graduate School of Education; the Department of Communicative Disorders and Sciences; the Department of Computer Science; the Department of Linguistics; the Department of Philosophy; and the Department of Psychology.
Thursday, April 18, 1996
225 Natural Sciences Complex
UB North Campus
"Concepts of Self"
What can be learned from a close analysis of how people "tell" about Self in different settings for different ends? How do we "map" these "tellings" on the folk construct of Self, and how shall we develop our psychological constructs of Self to take account of both the "tellings" and the folk constructs? I will examine the possibility of considering Self as the central protagonist in a set of partially connected narratives that represent the relationship between Self and Others, and between Self and a world codified symbolically as "outside" Self-Other relationships. This is the system that yields an overall (but open) relationship between Self-Other, Culture, and Nature.
Jerome Bruner is Research Professor of Psychology and Senior Research Fellow in Law at New York University. Before coming to New York University, he was Watts Professor of Psychology at Oxford University, and before that, Professor of Psychology and Director of the Center for Cognitive Studies at Harvard University. He has won many awards for his scientific work, including the International Balzan Prize, the Gold Medal of the CIBA Foundation, and the Distinguished Scientific Awards of both the American Psychological Association and the Society for Research on Child Development. He has written extensively on the processes by which human beings achieve, remember, and transform knowledge about the world. His most recent books are The Culture of Education (1996) and Acts of Meaning (1991). He is currently much occupied with the nature and uses of narrative thinking and with the means we use to interpret narrative meanings. This work has drawn him deeply these past few years into the analysis of legal narrative and interpretation.
With the co-sponsorship of the Department of Computer Science; the Department of Linguistics; the Department of Psychology; and the Cognitive Science Graduate Student Association.
Thursday, April 13, 1995
225 Natural Sciences Complex, North Campus
Daniel C. Dennett, the author of Consciousness Explained (Little, Brown 1991), is a Distinguished Arts and Sciences Professor and Director of the Center for Cognitive Studies at Tufts University. His first book, Content and Consciousness, appeared in 1969, followed by Brainstorms (1978), Elbow Room (1984), and The Intentional Stance (1987). He co-edited The Mind's I with Douglas Hofstadter in 1981. He is the author of over a hundred scholarly articles on various aspects of the mind, published in journals ranging from Artificial Intelligence and Behavioral and Brain Sciences to Poetics Today and the Journal of Aesthetics and Art Criticism. He was elected to the American Academy of Arts and Sciences in 1987. He is the Chairman of the Loebner Prize Committee, which conducts the annual Turing Test competitions for establishing intelligent thinking by computer programs.
Tuesday, February 8, 1994
Slee Hall, North Campus
With the co-sponsorship of the Department of Computer Science; the Department of Linguistics; the Department of Psychology; and the Intensive English Language Institute as part of its series on "Language and the Cognitive Revolution."