Wednesday, January 19, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
RICHARD ASLIN
Department of Brain and Cognitive Sciences
Center for Visual Science
"Statistical Learning in Linguistic
and Non-linguistic Domains"
Statistical approaches to language learning have generally been understudied because distributional information was thought to be inconsistent in the child's input and because learners were thought to be incapable of extracting many key consistencies that are present. A series of studies of statistical learning in the domain of word-segmentation from fluent speech will be reviewed. These studies demonstrate that adults, children, and 8-month-old infants are exceptionally adept at extracting some forms of distributional information. The statistical learning mechanisms that enable some forms of on-line distributional analysis are domain-general, as evidenced by similar learning of tone sequences, and species-general, as evidenced by similar performance in tamarin monkeys. The constraints on statistical learning have implications for the evolution of natural languages.
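The distributional statistic most often discussed in this word-segmentation literature is the transitional probability between adjacent syllables (Saffran, Aslin & Newport, 1996). The sketch below is a minimal illustration of that idea on invented input; it is not a description of the particular experiments reviewed in the talk, and the nonsense words and boundary threshold are assumptions made purely for the example.

```python
# Minimal sketch: segmenting a continuous syllable stream by transitional probability,
# in the spirit of Saffran, Aslin & Newport (1996). The nonsense words, stream length,
# and boundary threshold below are invented for illustration.
import random
from collections import Counter

random.seed(0)
words = ["tupiro", "golabu", "bidaku"]          # three made-up trisyllabic "words"
stream = []
for _ in range(100):                            # concatenate words with no pauses
    w = random.choice(words)
    stream.extend(w[i:i + 2] for i in range(0, len(w), 2))   # split into syllables

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def transitional_probability(a, b):
    """P(next syllable is b | current syllable is a)."""
    return bigrams[(a, b)] / unigrams[a]

# Posit a word boundary wherever the transitional probability dips below a threshold:
# within-word transitions here have TP = 1.0, between-word transitions roughly 1/3.
boundaries = [i + 1 for i, (a, b) in enumerate(zip(stream, stream[1:]))
              if transitional_probability(a, b) < 0.5]
print(f"posited {len(boundaries)} boundaries in a stream of {len(stream)} syllables")
```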
Wednesday, February 2, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
CHARLES J. DUFFY
Department of Neurology
Center for Visual Science
University of Rochester
"Neuronal and Perceptual Mechanisms
of Spatial Orientation"
Single-cell recordings in monkey cerebral cortex have shown that MST neurons respond selectively to optic flow, the patterned visual motion that is seen during observer self-movement. These optic flow responses reflect the direction of simulated self-movement and the three-dimensional structure of the environment. MST integrates these responses with vestibular signals about self-movement and spatial orientation cues that can be derived from moving objects.
The importance of visual mechanisms for spatial orientation is revealed by their impairment in the spatial disorientation of Alzheimer's disease. Patients with this syndrome show a selectively elevated perceptual threshold for the patterned visual motion of optic flow. This perceptual deficit may be linked to lesions in posterior association cortex that impair spatial navigation. Together, these findings suggest that extrastriate visual areas process optic flow and other self-movement cues to support spatial orientation. The failure of these mechanisms may result in debilitating spatial disorientation.
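As background for the heading-from-flow claim, the sketch below reproduces the textbook pinhole-camera flow field for pure observer translation: the focus of expansion, the image point with zero flow, lies in the direction of heading, and flow speed scales inversely with depth. This is standard optic-flow geometry offered as an illustration, not an account of the MST recordings themselves, and the numerical values are arbitrary.

```python
# Textbook optic-flow geometry (illustration only): for pure observer translation
# T = (Tx, Ty, Tz) past a point at depth Z, the image velocity at normalized image
# coordinates (x, y) is u = (-Tx + x*Tz)/Z and v = (-Ty + y*Tz)/Z. The flow radiates
# from a focus of expansion at (Tx/Tz, Ty/Tz), which encodes the heading direction,
# and its speed falls off with depth, which carries 3-D structure. Values are arbitrary.
import numpy as np

Tx, Ty, Tz = 0.2, 0.0, 1.0                     # heading slightly to the right
Z = 5.0                                        # depth of a frontoparallel plane

xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u = (-Tx + xs * Tz) / Z                        # horizontal image velocities
v = (-Ty + ys * Tz) / Z                        # vertical image velocities

focus_of_expansion = (Tx / Tz, Ty / Tz)        # the image point with zero flow
print("focus of expansion:", focus_of_expansion)
print("flow at image center:", (u[2, 2], v[2, 2]))
```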
Wednesday, February 23, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
JENNIFER STOLZ
Department of Psychology
University of Waterloo
"On the Joint Effects of Attention and Word Recognition: The Relations between Resources and Meaning"
The study of attention and the study of visual word recognition have both produced large literatures. Interestingly, although common sense dictates that attention is involved in word recognition, there is very little work at the intersection of these two literatures. The present work addresses this intersection by examining the joint effects of attention, viewed as a resource, and a variable central to word recognition: semantics. Two central questions are pursued. First, is attention necessary for semantics to be activated? This question is examined via the semantic priming effect under dual-task conditions. Second, does previewing a word's meaning result in fewer resources being required for the word's subsequent recognition? This question is addressed by investigating the effects of priming a word presented in the context of an attention-demanding tone discrimination task. The results reveal a rich pattern in which attention, as a resource, affects, and is affected by, the activation and maintenance of meaning during word recognition.
Wednesday, March 22, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
PAUL LUCE
Language Perception Laboratory
Department of Psychology
University at Buffalo
"Probabilistic Phonotactics, Neighborhood Activation,
and Spoken Word Recognition:
An Adaptive Resonance Perspective"
Recent work investigating the role of probabilistic phonotactics in spoken word recognition suggests the operation of two levels of representation, each having distinctly different consequences for processing. The lexical level is marked by competitive effects associated with similarity neighborhood activation, whereas increased probabilities of segments and sequences of segments facilitate processing at the sublexical level. I will discuss a series of studies that provide support for the hypothesis that the processing of spoken stimuli is a function of both facilitative effects associated with increased phonotactic probabilities and competitive effects associated with the activation of similarity neighborhoods. I will also describe recent extensions of this work aimed at evaluating two hypotheses regarding the segmentation of words from fluent speech, one phonotactic (the trough hypothesis) and one lexical (the lexical burst hypothesis). Finally, I will describe our attempts to account for effects of neighborhood activation and probabilistic phonotactics from the perspective of Grossberg's adaptive resonance theory (Grossberg, Boardman, and Cohen, 1997).
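For readers unfamiliar with the two quantities at issue, the sketch below computes a similarity neighborhood (words one phoneme substitution, insertion, or deletion away) and a deliberately crude positional phonotactic probability over a toy lexicon: dense neighborhoods are the locus of the lexical competition described above, and high segment and sequence probabilities the locus of sublexical facilitation. The toy lexicon and the particular probability estimate are assumptions for illustration, not the materials or model from these studies.

```python
# Toy illustration of neighborhood density (lexical level) and phonotactic
# probability (sublexical level). The mini "lexicon" of phoneme strings and the
# crude positional-probability estimate are invented for this example.
from collections import defaultdict

lexicon = ["kat", "bat", "kab", "kit", "rat", "kast", "at", "dog"]

def one_phoneme_apart(w1, w2):
    """True if w2 differs from w1 by one substitution, insertion, or deletion."""
    if w1 == w2:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    short, long_ = sorted((w1, w2), key=len)
    if len(long_) - len(short) != 1:
        return False
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood(word):
    return [w for w in lexicon if one_phoneme_apart(word, w)]

# Positional segment counts: how often each phoneme occurs in each position.
position_counts = defaultdict(lambda: defaultdict(int))
for w in lexicon:
    for i, p in enumerate(w):
        position_counts[i][p] += 1

def phonotactic_probability(word):
    """Product of positional segment probabilities (a deliberately crude estimate)."""
    prob = 1.0
    for i, p in enumerate(word):
        total = sum(position_counts[i].values())
        prob *= position_counts[i][p] / total if total else 0.0
    return prob

print(neighborhood("kat"))             # dense neighborhood -> more lexical competition
print(phonotactic_probability("kat"))  # common segments/positions -> sublexical facilitation
```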
Wednesday, March 29, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
MARK TURNER
Department of Linguistics
University of California, Stanford
"Conceptual Compressions and Decompressions"
Everyday conceptual life is based on integrating clashes and compressing vital relations. Life is various and often diffuse. Its many lines of connection run simultaneously over large expanses of time and space and involve complicated relations of change, cause and effect, intentionality, identity, analogy, and representation. To form a conceptual apparatus and use it requires constant and frequent compressions over these vital relations. Compression is so natural to us that when literature uses it to compress large worlds of life into a few pages, we hardly notice.
When we look at the Persian rug in the store and imagine how it would look in our house, we are compressing over two different physical spaces: the physical space with the rug and the physical space where we live. When we imagine how we would now answer a criticism directed at us several years ago, we are compressing over times. We compress over time when we tell someone our life story in three minutes. We compress over space when we draw the Empire State building on the back of an envelope.
Conceptual blending is an unrivaled tool of compression.
Wednesday, April 12, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
NICHOLAS CERCONE
Department of Computer Science
University of Waterloo
"Natural Language Access to Relevant
Information on the Internet"
Information available on the World Wide Web (WWW) has grown enormously, making the retrieval of relevant information increasingly difficult. If data mining, electronic commerce, and similar applications are to be an unqualified success on the Internet, they will depend, in part, on the successful retrieval of relevant information. Several tools have been developed for browsing and searching these collections of highly unstructured and heterogeneous data. These tools organize web pages into listings and allow users to search those listings to find required information. Each tool has its own listing or catalogue for searching. Based on how new Uniform Resource Locators (URLs) are added to the catalogue, these tools can be classified as directories or as spiders (robots); we refer to both types as search engines. We propose to use natural language (English) to access information on the WWW and illustrate this process with two prototype systems. NLAISE is our initial prototype implementation of natural language access to Internet search engines. EMATISE (English Meta Access to Internet Search Engines) is our second prototype. Both systems employ Head Driven Phrase Structure Grammar (HPSG) implementations, for reasons explained in the talk. Initial experiments with these prototypes will be presented and discussed.
Wednesday, April 19, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
PETER W. JUSCZYK
Department of Psychology
Johns Hopkins University
"Infants' Use of Multiple Cues to Segment
Words from Fluent Speech"
Several years ago, Jusczyk and Aslin (1995) reported that infants first display some ability to segment words from fluent speech at around 7 months of age. Many subsequent studies have focused on the nature of the information that infants rely on to find words in fluent speech. The kinds of cues investigated include prosodic, phonotactic, allophonic, and statistical cues to word boundaries. English-learners appear to develop sensitivity to some of these types of cues earlier than to others. I will review some of these findings, along with some more recent attempts to investigate the relative weighting that infants give to the different types of cues. Although much remains to be learned about how infants come to integrate these different types of cues, it is clear that word segmentation abilities develop considerably during the second half of the first year.
Wednesday, April 26, 2000
280 Park Hall
2:00-3:30 p.m.
North Campus
DAVID EDDINS
Department of Communicative Disorders and Sciences
University at Buffalo
"A linear systems approach to the study
of sensory processing"
The basic principles underlying the perception of auditory temporal and spectral patterns are not well understood despite the fundamental nature of such patterns in auditory perception. This likely reflects the lack of a unifying framework for understanding the processing of acoustic features in either domain. Borrowing from principles of spatial vision, a single global process will be outlined by which the auditory system might process both temporal and spectral features of simple and complex acoustic stimuli. Support for such a process will be gathered from recent psychophysical and physiological studies and compared to relevant analogs in spatial vision.
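One concrete analog of the spatial-vision idea is treating the stimulus envelope as a signal whose modulation content can be read off by Fourier analysis, much as spatial-frequency content is analyzed in vision. The sketch below is only a generic illustration of that linear-systems style of description, not the framework proposed in the talk; the carrier frequency, modulation rate, and analysis choices are arbitrary.

```python
# Generic illustration of a linear-systems description of temporal structure: an
# amplitude-modulated tone's envelope is recovered and its modulation spectrum
# inspected. The carrier, modulation rate, and sampling rate are arbitrary choices.
import numpy as np

fs = 16000                                        # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)            # 1 kHz tone
envelope = 1 + 0.8 * np.sin(2 * np.pi * 8 * t)    # 8 Hz amplitude modulation
stimulus = envelope * carrier

# Recover the envelope crudely (rectification) and inspect its low-frequency spectrum.
rectified = np.abs(stimulus)
spectrum = np.abs(np.fft.rfft(rectified - rectified.mean()))
freqs = np.fft.rfftfreq(len(rectified), 1 / fs)
print("dominant modulation frequency (Hz):", freqs[np.argmax(spectrum[freqs < 50])])
```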
Wednesday, August 30, 2000
2:00 p.m. - 4:00 p.m.
280 Park Hall, North Campus
Daphne Bavelier, Ph.D.
Department of Brain and Cognitive Sciences
University of Rochester
Cortical Reorganization of Visual and Language
Functions After Early Auditory Deprivation
Studies of adults who have altered sensory and language experience, such as congenitally deaf individuals, suggest that different brain systems within vision and within language display different degrees of experience-dependent modification. Within vision, the organization of systems involved in processing peripheral space and in sustaining spatial attention is most altered following auditory deprivation. Within language, altered language experience, such as exposure to a visuo-manual language (American Sign Language), does not alter the left-hemisphere bias for processing natural languages. However, in contrast to English, ASL strongly recruits the right hemisphere, indicating that the specific processing requirements of a language can also influence its cortical organization.
Refreshments will be available.
Everybody is welcome.
Wednesday, September 6, 2000
2:00 p.m. - 4:00 p.m.
280 Park Hall, North Campus
Daniel Montello, Ph.D.
Department of Geography
University of California at Santa Barbara
The Multidisciplinary Concept of the "Cognitive Map":
Empirical and Theoretical Arguments For and Against It, and Why the Fors Win
In its most general form, the concept of the "cognitive map" refers to internal representations of the world stored in the mind. The concept's precise meaning, however, has varied across disciplines and research areas. Several researchers have even suggested that cognitive maps do not exist, or that they are unnecessary to explain behavior. In this talk, I consider the history and multiple meanings of the concept of cognitive maps. Various debates about its usefulness are discussed, including some versions that I will propose are red herrings. Some arguments reflect misunderstandings, incorrect statements, or ideas that are limited by disciplinary constraints; the latter include some methodological limitations of doing research with nonhuman animals. I will offer conceptual and empirical reasons why the cognitive-map concept is necessary and useful as an explanatory tool.
Refreshments will be available.
Everybody is welcome.
Wednesday, September 20, 2000
2:00 p.m. - 4:00 p.m.
280 Park Hall, North Campus
Barbara Koslowski, Ph.D.
Department of Human Development
Cornell University, Ithaca, NY
"Formal Rules Don't Guarantee Good Theories"
Recent work in cognitive science has emphasized the importance of theory (or mechanism or explanation) in a variety of areas (categorization, causal reasoning and causal attribution, etc.). However, theories can be dubious as well as plausible and, when dubious, can motivate fruitless searches or have pernicious consequences. Thus, relying on theories requires that they be evaluated, to distinguish theories that are likely to be plausible from those likely to be dubious. How can such evaluation take place? Traditionally, theory evaluation is said to occur when individuals and scientific communities rely on various formal principles, such as "Consider alternative explanations," "When two factors are confounded and both covary with an effect, treat causation as indeterminate," or "Prefer theories that yield good predictions." Such rules can be framed as formal principles that are independent of, and can thus be applied to, any content area, from attachment behavior to epidemiology. However, although these principles can be framed as content free, they can only be successfully applied if content is taken into account. Put differently, theories are plausible to the extent that they are congruent with related or collateral information that we have acquired about the phenomenon we are trying to explain. The importance of collateral information puts a premium on identifying which types of collateral information are likely to be treated as evidentially relevant; how they interact with one another when more than one type is available; and how the collateral information that is initially available structures subsequent searches for additional, evidentially relevant information.
Refreshments will be available.
Everybody is welcome.
Wednesday, September 27, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Richard Feldman, Ph.D.
Department of Philosophy
University of Rochester
"Naturalism in Epistemology: The Connections between Epistemology and Cognitive Science"
This paper examines some of the arguments for the view that traditional questions in epistemology can be answered only with the help of empirical information supplied by cognitive science.
Wednesday, October 4, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Randall Dipert, Ph.D.
Department of Philosophy
University at Buffalo
"Toward a Theory of Artifacts and Purposes"
Most philosophy is directed at understanding "natural" things in the world. However, most of our daily life is spent interacting with objects and events that we know or assume to be purposive. Buildings, chairs, tables, books, and so on are all artifacts. Gestures, language use, and many other deeds are, for the most part, intended. In other words, we think of artifacts and actions as having a function or purpose, and perhaps a certain mental history. We might say that "artificial" things, understood as artificial, have their own special, and common, phenomenology. There are a great many puzzles and problems, psychological and philosophical, about how we DO think of such things as well as how we should. Part of what I will say is drawn from a 5-year project funded by the Dutch government (NWO), "The Dual Nature of Technological Artifacts," based at the Technical University of Delft, in which I am a senior researcher.
Wednesday, October 11, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
David Knill, Ph.D.
Department of Brain and Cognitive Sciences
University of Rochester
"The Probabilistic Calculus of Sight"
Helmholtz described vision as a process of unconscious inference. It is not, however, the logical inference of deduction. Rather, visual perception reflects inferences about the environment that are drawn from uncertain data. In recent years, computer and human vision researchers have been applying the calculus of uncertainty, probability theory, to formulate functional models of perceptual inference. I will describe the probabilistic (often called the "Bayesian") approach to modeling visual perception and what it has to offer researchers interested in understanding human vision: how it can be used both to improve our understanding of the problems posed to the visual system and to develop models of perceptual performance. In particular, I will describe a Bayesian taxonomy of qualitatively different strategies for integrating depth cues (stereo, shading, etc.), their properties, and how they are selected and combined in the solution of particular problems. I will illustrate the theory with specific examples from perceptual phenomenology and psychophysics.
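A standard worked example of the Bayesian logic described here, under the simplifying assumption of independent Gaussian cue likelihoods, is reliability-weighted cue combination: the combined depth estimate is an average of the single-cue estimates weighted by their inverse variances, and it is more reliable than either cue alone. The sketch below is offered as a generic illustration rather than as the specific taxonomy of cue-combination strategies developed in the talk, and the numbers are invented.

```python
# Reliability-weighted cue combination under independent Gaussian likelihoods.
# With cue estimates d1, d2 and variances s1^2, s2^2, the maximum-likelihood
# combined estimate is d = (d1/s1^2 + d2/s2^2) / (1/s1^2 + 1/s2^2), and its
# variance 1/(1/s1^2 + 1/s2^2) is never larger than either cue's alone.
# The numbers below are arbitrary illustrations, not data from the talk.

def combine_gaussian_cues(d1, var1, d2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2              # reliabilities (inverse variances)
    d = (w1 * d1 + w2 * d2) / (w1 + w2)          # reliability-weighted average
    var = 1.0 / (w1 + w2)                        # combined estimate is more reliable
    return d, var

# Stereo says the surface is 100 cm away (reliable); shading says 120 cm (noisy).
depth, variance = combine_gaussian_cues(100.0, 4.0, 120.0, 16.0)
print(depth, variance)   # 104.0 cm, variance 3.2 -- pulled toward the reliable cue
```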
Wednesday, October 18, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Hector Levesque, Ph.D.
Department of Computer Science
University of Toronto, Canada
"Cognitive Robotics: When Action Requires Thought"
Cognitive robotics is the study of the knowledge representation and reasoning problems faced by an autonomous agent in a dynamic and incompletely known world. It can also be thought of as an attempt to make cognition more relevant to robotic control. The talk provides an overview of some current research at the University of Toronto in this area, including motivation for our approach, formal foundations, and recent results.
Wednesday, October 25, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Joan Sussman, Ph.D.
Department of Communicative Disorders and Sciences
University at Buffalo
"Speech Perception in Children: Similarities/Differences to Adult Abilties
and How Does Language Fit In?"
I will describe the speech perception research that I have done investigating the abilities of normally developing children, children with language impairment, and adults. The initial studies show that young, normally developing children have developing discrimination and identification abilities for consonant cues related to perception of place of articulation, and that those abilities appear to be related to differential frequency cues. The general theme of this research involves how children perceive the quickly changing vs. long-duration acoustic cues of consonants and vowels. First, the suggestion by Tallal and colleagues that lengthening the second and third formant transitions of stop consonants may enhance perception will be considered. Then, an investigation of vowel perception will be presented that examines whether the brief or the longer, more intense cues of vowels are preferred by children with normal language, by children with language impairment, and by adults. Finally, a discussion of the relevance of backward masking in consonant perception will be presented.
Wednesday, November 8, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Richard Shweder, Ph.D.
Department of Psychology
University of Chicago
"A Polytheistic Conception of the Sciences
and the Virtues of Deep Variety"
At a recent meeting of the New York Academy of Sciences entitled "Unity of Knowledge: The Convergence of Natural and Human Science," the idea and ideal of "consilience" was promoted by E. O. Wilson, the conference's main keynote speaker.
In my own contribution I presented an alternative "polycentric" or "polytheistic" conception of the sciences, which I shall put on the table at the Center for Cognitive Science colloquium. I am skeptical of much that is presupposed and implied by the title of the NYAS conference. I do not think there is unity either between or within the human and natural sciences. And I have some doubts about whether the ideal of substantive "unity" across the natural and human sciences is any more attainable today than it was 200 years ago. I think it is an open question whether (for the sake of human progress and the progress of human knowledge) the ideal of substantive unity of belief is even truly desirable. In addressing such issues I shall describe my experiences at an interdisciplinary conference (bringing together cognitive neuroscientists, philosophers, anthropologists, and psychologists) all concerned with the problem of "voluntary action," and I shall argue that, despite much PR to the contrary, the theoretical gap between "mind" and "brain," or between "matterings" and "matter," has not been significantly narrowed in recent decades.
Wednesday, November 15, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Robert Jacobs, Ph.D.
Department of Brain and Cognitive Sciences
University of Rochester
"Learning to See in Three Dimensions"
Why is seeing the world in three dimensions so easy? We believe that this ease is due to the fact that the visual world is highly redundant; there are many cues to perceptual properties such as depth and shape. However, combining information from multiple cues in an effective manner is non-trivial. We argue that people must learn their cue combination strategies on the basis of experience. Three experiments are reported whose results suggest that observers can indeed adapt their cue combination strategies in an experience-dependent manner, though their learning abilities seem to be biased in an interesting way. Next, we address the question of whether or not observers can adapt their visual cue combination strategies on the basis of consistencies between visual and haptic (touch) percepts. Berkeley (1709), Piaget (1952), and many others speculated that people calibrate their interpretations of visual cues on the basis of their motor interactions with objects in the world. Despite the intuitive appeal of this hypothesis, it has never been adequately tested. Using a novel virtual reality environment, we have conducted three experiments whose results suggest that observers adapt their visual cue combination strategies based on correlations between visual and haptic percepts.
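As a toy illustration of what experience-dependent reweighting could look like, the sketch below nudges the weight on one of two simulated visual depth cues by gradient descent on the squared error against a haptic signal; over trials the weight drifts toward the statistically optimal value, which favors the more reliable cue. This is an assumption-laden illustration (invented noise levels, learning rate, and depth range), not a model of the experiments reported in the talk.

```python
# Toy sketch of experience-dependent cue reweighting: a depth estimate is formed as
# a weighted mix of two visual cues, and the weight is adjusted by gradient descent
# on the squared error against a haptic (touch) signal. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
w = 0.5          # weight on cue 1; cue 2 gets (1 - w)
eta = 0.001      # learning rate (arbitrary)

for _ in range(2000):
    depth = rng.uniform(50, 150)                 # true depth on this trial (cm)
    cue1 = depth + rng.normal(0, 2)              # reliable visual cue (e.g. stereo)
    cue2 = depth + rng.normal(0, 10)             # noisy visual cue (e.g. texture)
    haptic = depth + rng.normal(0, 1)            # feedback from touch
    estimate = w * cue1 + (1 - w) * cue2
    w += eta * (haptic - estimate) * (cue1 - cue2)   # gradient step on squared error

# The weight drifts toward the statistically optimal value,
# sigma2^2 / (sigma1^2 + sigma2^2) = 100/104, i.e. about 0.96 here.
print(w)
```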
Wednesday, November 29, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Daniel Chiappe, Ph.D.
Department of Psychology
SUNY at Fredonia
"The role of comparison and categorization in the comprehension of figurative statements"
Figurative statements can be used to integrate information from distinct domains, as when we integrate the concept "alcohol" with the concept "crutch". Interestingly, figurative language offers distinct forms for doing so. In particular, metaphors ("alcohol is a crutch") and similes ("alcohol is like a crutch") are distinct ways of expressing the relation between the topic "alcohol" and the vehicle "crutch". Metaphors appear to be modeled after literal categorization statements, and similes after literal similarity statements.
According to Glucksberg and Keysar's (1990) class-inclusion theory, however, the differences between metaphors and similes are merely on the surface. This theory holds that both metaphors and similes are categorization statements: they have a subordinate-superordinate structure. In both cases, the vehicle term does not refer to a basic-level category. It refers to a higher-order category that does not have its own label. For instance, the word "crutch" may refer to the higher-order category "things that can be relied on to deal with tough situations". The metaphor and the simile are both understood by assigning the topic to this category. The theory predicts that if metaphors and similes both assert categorical relations, both types of statements should be equally non-reversible. Literal categorization claims typically cannot be expressed in a bi-directional manner, due to the hierarchical nature of categories. For instance, "all zebras are animals" is true, while "all animals are zebras" is false. In contrast, literal similarity statements generally do tolerate being reversed because the items compared are at the same level of a taxonomic hierarchy. Although people might prefer "olives are like cherries", one can also say "cherries are like olives".
Two studies tested this prediction and found that items preferred in their simile form were more likely to tolerate being reversed than items preferred as metaphors. Thus, items preferred as metaphors behave more like categorization statements, while items preferred as similes behave more like similarity statements. Furthermore, a third study found that preference for the metaphor form of expression was determined by the familiarity of a topic-vehicle pair. As familiarity increased, preference for the metaphor form increased as well.
We conclude that the surface appearances of metaphors and similes are accurate reflections of the processes required to comprehend them. The simile form is preferred when a higher-order category has to be created; it indicates that a comparison process is required to comprehend the statement. The metaphor form is preferred when a higher-order category is well established and already associated with the vehicle term. When this is the case, the categorical form is most appropriate, because the statement can be understood through a class-inclusion process.
Wednesday, December 6, 2000
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Stuart Shanker, Ph.D.
Department of Psychology
Developmental & Cognitive Processes
Atkinson College, Toronto, Canada
"The Emergence of a New Paradigm
in Ape Language Research"
In recent years we have seen a shift in several different areas of communication studies from an information-theoretic to a dynamic systems paradigm. In a dynamic system, all of the elements are continuously interacting with and changing in respect to one another, and an aggregate pattern emerges from this process of mutual co-action. From this perspective, communication is seen not as a linear, binary sequence but as a continuously unfolding and co-regulated activity. Under the information-processing paradigm, what is communicated is always information. The information that is communicated is said to be an internal state or an internal representation of an environmental feature, and genuine communication is said to occur when B decodes the message that A intended to encode. Under the dynamic systems paradigm, by contrast, mutual understanding is something that emerges as both partners converge on some shared feeling, thought, action, intention, etc., and develop or deploy various behaviors that signify this convergence. I argue for a shift from the information-processing model that has hitherto dominated ape language research to a dynamic systems paradigm: one which places the emphasis on the dyad rather than the isolated individual; which sees ape communication as a co-regulated process, rather than a linear and discrete sequence; which focuses on the variability of ape communicative behaviors, rather than treating them as phenotypic traits; and which is thus better able to account for both the social complexity and the developmental character of nonhuman primate communicative abilities.
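For concreteness, the toy sketch below shows the kind of behavior the dynamic systems vocabulary points to: two continuously coupled states converge on an emergent shared value through mutual adjustment, with nothing being encoded or decoded. The coupling constants and initial states are arbitrary, and the sketch is an analogy, not a model drawn from the ape language research discussed.

```python
# Toy illustration of co-regulation in a dynamic system: two agents whose states
# continuously adjust toward each other converge on an emergent shared value.
# Coupling constants and initial states are arbitrary.
a, b = 0.0, 1.0          # the two partners' internal states (arbitrary units)
ka, kb = 0.10, 0.15      # how strongly each adjusts toward the other
for step in range(50):
    a, b = a + ka * (b - a), b + kb * (a - b)
print(round(a, 3), round(b, 3))   # both converge on an emergent shared value (about 0.4)
```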