Wednesday, January 15, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Yue Wang, Ph.D.
Department of Linguistics
University at Buffalo
"Behavioral and neuro-imaging studies of
Mandarin tone processing and learning"
This research investigated the processing and learning of Mandarin Chinese tone by native and non-native speakers. Two of the fundamental questions addressed are brain plasticity and linguistic experience: to what extent the human brain is plastic in language development, and how experience with a first language influences the acquisition of a second language. A dichotic listening study shows that tone processing is lateralized to the left hemisphere for native Mandarin listeners and that this left-hemisphere specialization depends on linguistic experience, as it does not generalize to non-native listeners such as speakers of American English and Norwegian. However, non-native listeners' tone perception can become more "native-like" as they gain experience with Mandarin: American and Norwegian listeners showed significant improvement in tone perception and production after perceptual training. Moreover, this behavioral improvement is reflected in the brain, as revealed by an fMRI study showing cortical reorganization over the course of learning. Further research extends these findings to developmental change in Mandarin tone processing and learning in children from 6 to 14 years old, investigating brain plasticity in children exposed to a second language. These results are discussed in terms of the behavioral and neurophysiological aspects underlying language learning.
Wednesday, January 22, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Ingvar Johansson, Ph.D.
Department of Philosophy
Umea University, Sweden
"Reflective Speech Acts"
Searle distinguishes between five basic kinds of illocutionary acts: assertives, directives, commissives, expressives, and declarations. Within each of these kinds, one can make a further distinction between reflective and non-reflective illocutionarity. Thus, the utterance 'The cat is on the mat' is a non-reflective assertive, whereas 'I assert that the cat is on the mat' is a reflective assertive. What is the communicative point and linguistic structure of this and similar oppositions? This is the overarching topic of my talk. I will argue, among other things, that the "two-truth-value thesis" for reflective assertives can be generalized into a "two-conditions-of-satisfaction thesis" for all speech acts with a direction of fit.
(The ideas to be presented are part of a paper, "Performatives and Antiperformatives," forthcoming in Linguistics and Philosophy. Copies are available from the author: Ingvar.Johansson@ifomis.uni-leipzig.de.)
Wednesday, February 5, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Michael Worboys, Ph.D.
National Center for Geographic Information and Analysis,
Dept. of Spatial Information Science and Engineering
University of Maine
"Cognitively plausible geometries of environmental space"
There is a need for computational theories of spatial representation and reasoning to be cognitively plausible, that is, properly guided by the way humans think about space. This talk describes work done with human subjects concerning their view of the structure of space at the scale of buildings, neighborhoods, and cities, focusing on fundamental distance and direction relationships. Vagueness is an important component of such relationships. The talk concludes with some discussion of granularity and of the relationship between levels of detail in depictions and descriptions of environmental space.
Wednesday, February 12, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Larry E. Roberts, Ph.D.
Department of Psychology
McMaster University, Canada
"Neuroplastic Adaptations of the
Human Auditory System"
We have been studying how neuromagnetic (MEG) and electrical (EEG) fields evoked by tonal stimuli are modified in musicians and by laboratory training at acoustic discrimination in nonmusician subjects. Laboratory studies employ 40-Hz amplitude modulated pure tones of different carrier frequencies which allow us to distinguish activations of the auditory core areas (AI) from those of the belt and parabelt regions (AII). We find that evoked auditory fields localizing to AII (the P2 and right-sided N1c) are enlarged by laboratory training in nonmusicians, and that these same components are enhanced in skilled musicians in accordance with their musical training histories. The findings are compatible with neuroplastic accounts of functional brain attributes associated with musical skill.
However, laboratory training does not enhance the amplitude of the 40-Hz auditory steady-state response (localizing to AI) in adult nonmusicians although this response is enlarged in musicians (Schneider et al. NN 2002) and its temporal properties are modulated by acoustic training in nonmusician adults. We are investigating implications for the network behavior that underlies remodeling of the brain by experience. We are also extending the research to children enrolled in Suzuki music programmes and to imaging studies of auditory cortical function in tinnitus.
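As a rough illustration of the stimulus class described above (this is not the authors' stimulus-generation code; the sampling rate, duration, and modulation depth are assumptions made for the sketch), a 40-Hz amplitude-modulated pure tone can be synthesized as follows:

```python
import numpy as np

def am_tone(carrier_hz, mod_hz=40.0, duration_s=1.0, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated pure tone (illustrative sketch).

    carrier_hz: carrier frequency of the pure tone
    mod_hz:     modulation frequency (40 Hz in the studies described above)
    depth:      modulation depth between 0 and 1
    """
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    signal = envelope * carrier
    return signal / np.max(np.abs(signal))  # normalize to the range -1..1

# The same 40-Hz envelope imposed on two different (assumed) carrier frequencies.
low_carrier = am_tone(500.0)
high_carrier = am_tone(4000.0)
```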
Wednesday, February 19, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Michael Spivey, Ph.D.
Department of Psychology
Cornell University
"Evidence for the Spatial and Image-schematic
Underpinnings of Language Processing"
For some time now, cognitive linguists have suggested that human language has as its infrastructure a spatial, perceptual, and embodied format of representation and processing. I will report on a series of experiments that support some of these claims. For example, in an eyetracking experiment, participants listening to spatially extended stories while staring at a blank display tend to make eye movements in the direction of the story's events. Also, a set of offline and online experiments has demonstrated that the major spatial axis of a verb's image schema is generally agreed upon by naive participants, and also exerts an influence on their visual attention and visual memory during real-time language comprehension. These results provide evidence for the embodied perceptual-motor character of linguistic representations.
Wednesday, February 26, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Frederique de Vignemont, Ph.D.
Institut des Sciences Cognitives, France
"Why we feel what we see:
a plurimodal representation of the body"
We have an internal, private access to our own body that we do not have to the bodies of others. Yet we should not reduce knowledge of one's own body to proprioception; we also have to take into account the role of visual information in body representations. In this paper, I intend to address the question of the necessary conditions of plurimodal integration in the specific case of body perception. More particularly, I would like to investigate how we resolve conflicts between visual and proprioceptive information.
Wednesday, March 5, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Jeffrey Runner, Ph.D.
Department of Linguistics
University of Rochester
"On the Complementarity of Pronouns and
Reflexives in English:
Evidence from Eye Movements"
We investigated the role of structural constraints on pronoun and reflexive reference resolution--Binding Theory (BT--e.g., Chomsky 1981)--in sentences containing "picture" noun phrases with possessors (ex. below). We monitored subjects' eye movements while they followed instructions to manipulate one of 3 dolls at a display containing photos of each of the dolls: e.g., "Have Ken touch Joe's picture of him/himself". The photo touched indicates a "judgment" of the sentence interpretation; an analysis of the eye movements reveals how BT is used on-line. I present two findings: (1) pronouns and reflexives do not have complementary referential domains in this construction (contra BT); and (2) BT is not an "initial filter" in on-line reference resolution (contra Nicol & Swinney 1989). I will outline current research investigating the structural and pragmatic factors at play and how they interact on-line.
Wednesday, March 19, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Chrysanne DiMarco, Ph.D.
Department of Computer Science
University of Waterloo
"Computational Models of
Natural Language Pragmatics"
Current natural language processing (NLP) systems are, almost without exception, still able to deal only with restricted, simplified language. While researchers in natural language are beginning to produce systems with real-world utility, NLP systems are still challenged by basic problems associated with analyzing syntax and determining semantic content. A major component of language, the pragmatics of human communication, remains understudied and under-represented in current computational systems. But, in the real world, the pragmatics of natural language---complex nuances of language such as exact choices of words and syntactic structure---carry much of the meaning of a text or utterance. If NLP systems are to be truly effective in everyday use, they must be able to handle far more of these complexities of real-world language. In this talk, I will describe some of our earlier work on building computational systems that incorporate knowledge of lexical and syntactic style, with some additional looks forward to how pragmatics may play a role in the evolution of real-world NLP systems.
Wednesday, March 26, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
William Rapaport, Ph.D.
Department of Computer Science and Engineering
University at Buffalo
"Contextual Vocabulary Acquisition:
From Algorithm to Curriculum"
No doubt you have on occasion read some text containing an unfamiliar word, but you were unable or unwilling to find out from a dictionary or another person what the word meant. Nevertheless, you might, consciously or not, have figured out a meaning for it. Suppose you didn't, or suppose your hypothesized meaning was wrong. If you never see the word again, it may not matter. However, if the text you were reading were from science, mathematics, engineering, or technology, not understanding the unfamiliar term might seriously hinder your subsequent understanding of the text. If you do see the word again, you will have an opportunity to revise your hypothesis about its meaning. The more times you see the word, the better your definition will become. And if your hypothesis development were deliberate, rather than "incidental", your command of the new word would be stronger.
This talk discusses a research project that is developing and applying algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We are trying to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms. We are using the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students' abilities to use CVA strategies in their reading of science texts at the middle-school and college undergraduate levels. The knowledge gained from case studies of students using our CVA techniques will feed back into further development of our computational theory.
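As a toy illustration of the general idea of revising a hypothesized meaning across successive contexts (this is only a sketch of incremental hypothesis revision, not the project's CVA algorithms; the word properties and contexts are invented):

```python
def refine_hypothesis(candidate_properties, contexts):
    """Keep only the candidate properties of the unknown word that every
    context seen so far supports (a deliberately simple stand-in for CVA)."""
    hypothesis = set(candidate_properties)
    for suggested in contexts:
        hypothesis &= set(suggested)  # each new context narrows the hypothesis
    return hypothesis

# The more contexts are seen, the sharper the hypothesized meaning becomes.
contexts = [
    {"animal", "tool", "pet"},          # a first encounter leaves many options open
    {"animal", "pet", "kept indoors"},  # later encounters rule possibilities out
]
print(refine_hypothesis({"animal", "tool", "pet", "vehicle"}, contexts))
# -> {'animal', 'pet'} (set display order may vary)
```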
Research done jointly with Michael W. Kibby, Department of Learning & Instruction, Center for Literacy & Reading Instruction, State University of New York at Buffalo.
mwkibby@acsu.buffalo.edu, http://www.gse.buffalo.edu/FAS/Kibby/
Wednesday, April 2, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Jeri Jaeger, Ph.D.
Department of Linguistics
University at Buffalo
"Current Controversies in Language Acquisition"
It is often thought that the main controversy in Language Acquisition is whether or not the ability to learn human language is innate to humans. However, this is incorrect. The ability to learn human language is innate by definition, since it is species-specific (only humans learn human language) and species-general (all normally developing humans with normal input learn human language). The real question is WHAT is innate that allows humans to learn language. Proposed answers include general learning mechanisms, general cognitive/perceptual abilities, learning strategies specific to language, and a Language Acquisition Device specific to morphosyntax, among others. In this tutorial, designed for the general Cognitive Science audience, I will lay out some of the assumptions behind these various claims about innateness, and then discuss what sorts of data would be needed in order to support or argue against the various notions of innateness. Finally, I will discuss a few of these data sources in some detail (e.g., Specific Language Impairment) and indicate which of the theories of innateness these facts most clearly support.
Wednesday, April 9, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Poster Abstracts
Identifying Perceptually Indistinguishable Objects.
John Santore, Dept. of Computer Science and Engineering
Stuart Shapiro, Ph.D. Dept. of Computer Science and Engineering
Erwin Segal, Ph.D., Dept. of Psychology
We are investigating a simulated cognitive robot that, when it sees an object perceptually indistinguishable from one it has seen before, will use reasoning to decide whether there are two different objects or the same object has been perceived twice. We have been conducting experiments with human subjects to determine what strategies they use to perform this task and how well they perform it.
Results from a Cognitive Mapping Exercise with Hikers in the Natural Environment
Wendy Miller, Dept. of Geography, UB
Cognitive maps are valuable for understanding how an individual perceives the space around them. The majority of studies that employed this technique have focused on urban environments where roads and buildings are easily identified as components of the spatial environment. Kevin Lynch's pioneering work in this area, Image of the City (1960), determined how people perceive the form of a city through interviews, surveys, and cognitive mapping.
In this study, cognitive mapping was extended to the "natural environment" through a written survey completed by hikers at an environmental education center. The survey included demographic questions and the cognitive mapping exercise. An objective of this project was to evaluate how people perceive the natural environment and to determine whether cognitive maps are created in natural environments, where right angles, identifiable intersections, and traditional landmarks are absent.
The Use of Cognitive Feedback by Experts and Novices in a Judgment Task
Jiao Ma, Keith Kudrycki, Department of Industrial Engineering, UB
Cognitive feedback (CFB) has been shown to improve performance in multiple cue probability learning (MCPL) tasks in both real-life and experimental situations. Our experiment focuses on investigating how experts and novices differ in applying CFB, which is presented in the form of the relative importance weights of each of the cues and a display of the function forms (Weaver and Steward, 2000). We believe this study will provide some interesting insight into how much domain knowledge is necessary to understand and apply CFB. We would also like to determine how well novices and experts are able to process and use different types of feedback (i.e., CFB and outcome feedback) and hence determine which type is more useful for novices or for those with prior knowledge of the task. A total of 20 participants, 10 experts and 10 novices, were given a series of baseball statistics and asked to predict how many wins a team would have in a season based on these statistics. Performance measures (e.g., relative weighting accuracy, Lens Model measures), mental workload, and judgment strategy were measured. Domain knowledge was statistically significant for all weights except one of the less important cues, "stolen bases." In other words, domain knowledge greatly affects weighting accuracy. Domain knowledge also had a significant effect on judgment performance (Lens Model measures), such as accuracy and knowledge. Mental workload was significantly affected by CFB, but none of the other dependent variables were.
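For readers unfamiliar with the relative importance weights mentioned above, a minimal sketch of how such weights could be estimated from judgment data with an ordinary least-squares fit is shown below; the cue values and judged wins are invented, and this is not the authors' analysis code:

```python
import numpy as np

# Hypothetical data: rows are teams, columns are cues
# (say, batting average, ERA, stolen bases); y holds the judged wins.
cues = np.array([
    [0.270, 3.80, 90.0],
    [0.255, 4.50, 60.0],
    [0.285, 3.40, 110.0],
    [0.260, 4.10, 75.0],
])
judged_wins = np.array([88.0, 72.0, 95.0, 80.0])

# Least-squares coefficients relating the cues to the judgments (with intercept).
X = np.column_stack([np.ones(len(cues)), cues])
beta, *_ = np.linalg.lstsq(X, judged_wins, rcond=None)

# Relative importance weights: magnitudes of the coefficients rescaled by the
# cue standard deviations, so cues on different scales become comparable.
raw = np.abs(beta[1:]) * cues.std(axis=0)
relative_weights = raw / raw.sum()
print(relative_weights)
```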
Bilingual Slips of the Tongue: Evidence for Multilingual Speech Production Planning.
Ameyo S. Awuku, Dept. of Linguistics, UB
The present study is an account of bilingual slips of the tongue. The study compares the types of errors that occur in bilingual slips with monolingual speech errors. Additionally, this study looks at an area of bilingual speech production which has not been researched in previous scholarship, namely, the setting factor in bilingual slips: given a set of hypotheses about setting, do bilinguals show consistency in their speech error behavior? The results will be explained in terms of multilingual speech production planning and sociolinguistic factors. Data for this study comes from trilingual speakers of Ewe-English-French and Hebrew-Romanian-English, in addition to bilingual speakers of Spanish-English, French-English, and Ewe-French. There was also one group of monolingual speakers of English.
Are all agents equal?
Kathy Conklin, Dept. of Linguistics, UB; Gail Mauner, Ph.D., Dept. of Psychology, UB; J.P. Koenig, Ph.D., Dept. of Linguistics, UB
The goodness of fit between a REFERENT associated with an agent role and a described event has been shown to influence processing. Whether processing mechanisms are sensitive to differences in agent participant ROLES of particular verbs (e.g., whether or not volition is required of an agent) has not been demonstrated. We examined the processing of rationale clauses following agentless passives whose verbs introduced an implicit agent that was either required or not required to (but could) behave volitionally.
Rationale clauses took longer to process following passives whose verbs did not require volition of their implicit agents. These results demonstrate that the category of agent is not homogenous. More generally, they suggest that interpretative mechanisms are sensitive to differences in participant roles of particular verbs below the level of semantic roles like agent and patient.
The Effect of Different Writing Systems on Reading: A comparison between Korean and Mandarin Chinese
Myoyoung Kim, Dept. of Linguistics, UB
Most models of reading have been developed to account for the fluent reading of single words written in alphabetic scripts, with clear print and intelligible content (for example, Coltheart, 1978; Caplan, 1992). While this is an appropriate starting point for such models, a complete model of reading should be able to account for a broader range of writing systems, the reading of text, and the fact that readers are often confronted with illegible written material or very difficult content.
The present paper presents the results of an experiment in which subjects read texts written with varying intelligibility, in two different orthographies: Korean, a phonetically-motivated alphabetic writing system, and Chinese, a semantically-motivated logographic script. The three hypotheses being tested were the following: First, print distortion and complex subject matter will slow down the reading processes in comparison with writing that contains clear print and simple subject matter. Second, reading for speed vs. reading for understanding will affect reading time. Third, the Korean script and the Chinese script may cause different patterns of behavior in these tasks.
For this, five paragraphs were constructed which differed in the following ways: 1) simple content typed in clear print; 2) simple passage hand-written normally but printed faintly; 3) simple passage written with sloppy handwriting; 4) simple typed passage that contained misspelled words, and 5) a typed passage having complex subject matter in clear print. Twenty speakers of Mandarin Chinese and 20 speakers of Korean participated. The subjects were randomly assigned to two different groups consisting of 10 subjects each for each language: one group was asked to read for speed and the other for content. The reading time of each subject for each passage was measured.
In this study, I found the following: (1) Poor input affects reading times. As long as the input is clear, people read the passages aloud without serious problems regardless of reading conditions. This effect occurs with both alphabetic and logographic writing systems. (2) The purpose of reading makes people employ different reading strategies. When reading for content, readers of both languages took more time than when reading for speed. However, the time difference did not result from the use of two different reading processes, but from the way readers went through the reading process. (3) Different writing systems affected reading time. Differences between the two languages lie not in processing per se but in representations, specifically in the organization of the lexicon and the relationships between orthography, phonology, and semantics within the lexicon.
Based on the findings of this experiment, I have developed a new reading model which contains a number of innovative features: 1) A Monitoring component is added to account for self-correction during reading. 2) Top-down processing components have been added to account for the role of context. 3) The processing components in the model are language-independent, and differences between reading in alphabetic vs. logographic writing systems are accounted for by positing differences in the linkage of the orthographic, phonological and semantic lexicon.
Changes in Children's Production of Plural Forms of Russian Nouns at Different Ages.
Viktoriya Lyakh, Dept. of Linguistics, UB
This experiment used an elicited production task to see how a child's ability to produce correct plural forms in Russian changes with age. Previous investigations of the acquisition of Russian morphology focused primarily on the development of case system (Zakharova, 1958) and acquisition of gender (Popova, 1958; Kempe, 2001; Babyonshev, 1993).
Nothing, however, has been done on the acquisition of plural forms in Russian. Some studies have been done on the development of plural forms in German, which is close to Russian in having a complex system of plurals. Behrens studied one child from 1;11.15 (one year, eleven months, fifteen days) through 2;5.30 (two years, five months, thirty days), and the results showed that the child changed his 'preference' in the choice of one or another overgeneralization pattern; he also used different plural forms of the same noun at the same time. Based on the findings from German and other languages concerning the development of morphology, one would also expect Russian children to find a default rule of the plural, i.e., the most frequent plural pattern, and overgeneralize it to novel forms. This constitutes the hypothesis of this study.
Eight Russian-speaking children, aged 3 to 8;7, participated in the experiment. Following the methodology introduced by Berko in 1958, the stimuli used were real and novel words, each of which was accompanied by a picture. The purpose of using the novel words was to check whether a child had really acquired the rule of the plural or had simply memorized the plural forms. The children's task was to produce a plural form for the objects presented in the pictures.
Overall, the children combined the novel nouns productively with the default rule of plural, which means that they did not merely memorize the plural forms, but figured out the default rule. The stimuli (real and novel) were given to the children in such an order that they could easily follow the model the experimenter provided by pronouncing plural forms (different from a default form) of the real practice words; however, the children did so very rarely. This shows that there continues to be a strong reliance on the default rule.
The major finding of the study suggests that even the youngest children, such as three-year olds, have productive knowledge of the default rule of the plural in Russian, generalizable to novel word forms. Also, the data suggest that the acquisition of other rules of the plural which are different from the default one occurs later, probably after the age of eight.
The Diminutive as an Instrument of Semantic Precision
Gabriela Pérez Báez
Linguistic and philological literature on the subject of the diminutive is copious to say the least, focusing on a variety of issues ranging from its phonological properties to its etymology. One aspect of the diminutive that remains a challenge to analysts is the extensive array of expressions in which it seems to be felicitously used, often with extremely distinct and even seemingly opposite senses. The present study will review data which challenges the model proposed in Jurafsky 1996, and will offer an analysis based on the decomposition of the relevant diminutivized forms. I will argue for a model of semantic precision based on semantic selection by the producer as allowed by a given basic form, and dependent on a shared cultural knowledge base and culture-wide agreement which ensures proper interpretation of the message.
For the present study, a corpus of over 120 utterances was collected in Mexico from natural discourse between native speakers. Being present and active in the conversation allowed me to gain greater insight into the interaction between the participants of a given communicative event and the essence of the message conveyed through the use of the diminutive suffix –ito. Each phrase and sentence was first analyzed following Jurafsky 1996, which posits a structured polysemy based on a RADIAL CATEGORY model (Lakoff 1987). This model presents ‘child’ as the central sense from which 16 other pragmatic and semantic senses develop through semantic change driven by metaphor, inference, generalization, and lambda-abstraction. While roughly half the data could be accounted for by being assigned to one of the 16 senses ascribed to the diminutive in Jurafsky 1996, there was great difficulty in accounting for the remainder. These results prompted the need for an alternative analysis that would allow for a fair representation of the complexity of a communication system where no single factor but rather “a set of simultaneous conditions within the producer” drives the communication (Talmy 2000, p. 337).
The core of the hypothesis to be put forth here claims that in a communication system, the diminutive allows participants to select certain features of a given concept whose form is diminutivized, alter the level of attention placed on such features, and even communicate a value judgment through the selection made. The position I will present moves away from considering the diminutive suffix –ito as a semantically loaded form. Rather, it suggests assigning to it the properties of a cognitive mechanism that allows for a speaker to dive into the composition of a concept and rearrange its features in order to fine-tune a message, thereby developing the ability to express more accurately a given conceptual representation.
Bibliography
Jurafsky, Daniel. 1996. Universal tendencies in the semantics of the diminutive. Language 72(3), 533–578.
Lakoff, George. 1987. Women, Fire and Dangerous Things. Chicago: The University of Chicago Press.
Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge, MA: MIT Press.
Roles of pictures and native language in lexical processing for elementary Mandarin learners
Hsiang-Ting Wu, Dept. of Linguistics, UB
Second language acquisition has long been a topic of interest in the field of bilingualism. The processing of words, the basic units of a language, is one of the major issues in this area. Many past studies have focused on word and picture processing by fluent bilinguals on tasks such as word naming, word categorization, word translation, and picture naming. Most of this research has been conducted with learners or bilinguals of Indo-European languages. Few studies have involved Mandarin Chinese; particularly scarce have been those involving adult learners of Chinese at an elementary stage. Mandarin Chinese has a logographic writing system, significantly different from those of most Indo-European languages, which are alphabetic. For a person learning a second language, the large difference between a logographic system and an alphabetic system may result in distinctive lexical processing, relative to two languages using the same alphabetic system.
The present study tested native English-speaking adults in the beginning stage of learning Chinese. Six conditions of a translation recognition task were designed on a computer. The six blocks, each of which has fourteen pairs of items, are L1 (here, English) words to L1 words, L1 words to L2 (here, Chinese) pinyin (an alphabetic system used to represent Chinese sounds), L1 to L2 Chinese characters, pictures to L1 words, pictures to L2 Chinese pinyin, and pictures to L2 Chinese characters. Each stimulus, either an English word or a picture, was followed by a fixation (a plus sign) and then by a second word. The subjects were told to decide as quickly and accurately as possible whether the second word was a correct translation equivalent of the first stimulus by pressing one of two designated buttons. Accuracy and reaction time were recorded automatically by the computer.
The results showed that, first, the time from L1 to L2 was shorter than that from a picture to L2; second, within the two tested L2 components, the time from L1 to Chinese pinyin was shorter than that from L1 to Chinese characters, the logographic system. This study supports the idea that an initial-stage adult language learner uses L1 as a medium to connect a concept with L2, which is predicted by Kroll and Stewart’s (1994) revised hierarchical model. Moreover, the difference in the writing systems is a determinant for the speed of processing words.
Exposure effects for infrequent syntactic structures alter sentence comprehension
Breton Bienvenue and Gail Mauner, Ph.D., Department of Psychology, UB
The effect of repeated exposure to known but infrequent syntactic structures as a source of long-term learning in adults is relatively understudied. In three experiments we show that 1) exposure to infrequent and unambiguous syntactic structures leads to faster processing of these structures; 2) exposure to the less frequent interpretation of ambiguous structures leads to greater competition with the favored interpretation and thus slowed processing; and 3) increased exposure to these structures in filler items speeds the onset of processing changes.
These results indicate that models of sentence processing must be expanded to reflect not just the static frequency of particular cues as measured in corpora, but also the dynamic frequency of cues due to recent exposure. This also suggests that when using low frequency structures in psycholinguistics research, greater care must be taken to attend to exposure effects over time.
The representation of consonant clusters in the mental lexicon.
Lisa Incognito, Dept. of Psychology, UB
Previous work has shown that perception of a phoneme in a syllable is influenced by the number of similar-sounding words (its lexical neighborhood; Newman, Sawusch & Luce, 1997). This previous work determined neighborhoods for target syllables using a one-phoneme change rule. For example, bow, bath, and mouth are neighbors of 'bowth.' The present work focused on how consonant clusters are represented in the mental lexicon. Nonsense syllables composed of initial consonant clusters followed by a vowel and final consonant were used as stimuli. Two rules were used to compute the neighborhood for each target syllable. One was the one-phoneme change rule used in previous studies. The second treated clusters of consonants as single units in a one-unit change rule. Target syllables with differential neighborhoods based on the two rules were the endpoints of the test series. Results consistently support the one-phoneme change rule. These results are consistent with models of word recognition that treat consonant clusters as a sequence of phonemes. [Work supported by NIDCD grant R01DC00219 to SUNY at Buffalo.]
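The two neighborhood rules being contrasted can be sketched as follows (the toy lexicon and rough phonemic spellings are invented, and this is not the stimulus-construction code used in the study):

```python
def one_edit_apart(a, b):
    """True if sequences a and b differ by exactly one substitution,
    insertion, or deletion of a unit."""
    a, b = list(a), list(b)
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def chunk_initial_cluster(phones):
    """Treat an initial consonant cluster as a single unit (one-unit rule)."""
    vowels = {"a", "e", "i", "o", "u"}  # toy vowel set for this sketch
    i = 0
    while i < len(phones) and phones[i] not in vowels:
        i += 1
    onset = [''.join(phones[:i])] if i > 1 else list(phones[:i])
    return onset + list(phones[i:])

def neighbors(target, lexicon, cluster_as_unit=False):
    prep = chunk_initial_cluster if cluster_as_unit else list
    return [w for w in lexicon if one_edit_apart(prep(target), prep(w))]

lexicon = [list("stop"), list("flap"), list("slop"), list("tap")]
target = list("stap")                                         # a nonsense syllable
print(len(neighbors(target, lexicon)))                        # 2 under the phoneme rule
print(len(neighbors(target, lexicon, cluster_as_unit=True)))  # 3 when the cluster is one unit
```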
The Role of Predictability and Eye Movements in Linear 'Representational Momentum'
Vikranth B. Rao, Visual Perception Laboratory, UB
The idea that motion trajectories are encoded implicitly within cognitive representations of moving objects is referred to as 'representational momentum' (RM). A large body of experimental work examining the extent to which people perceive the disappearance of a moving object as being located further along its trajectory than was truly the case has been interpreted as support for this idea. We investigated a linear motion RM paradigm both with and without fixation control and found reliable mislocalization consistent with RM when the eyes were free to move. When eye movements were restricted in the identical paradigm, RM-compatible mislocalization was present only for the longest movement trajectory examined. While these data are compatible with explanations of localization governed by smooth pursuit eye movements that lead the target, gravity effects – the asymmetric localization of vertically moving objects toward the ground – were present regardless of eye movement condition. Such effects suggest that representational biases within visual system representations influence localization. In further experiments, predictability of the movement trajectory was removed, and apart from gravity effects, no RM-compatible mislocalization was present even when eye movements were allowed. These data suggest that predictability of stimulus motion is related to RM-compatible mislocalization.
Why Do People Think With Pictures?
Catherine Hummel, Cognitive Science
The increasing amount of time, effort, and money spent on the development of visualization systems speaks to the believed importance of these systems. In line with the work of Carroll and Campbell (1989), such systems can be seen as expressions of an implicit theory that people think better when they have visualizations and representations to work with. This paper examines that theory by synthesizing cognitive research in several areas to state what we know so far about the cognitive justifications for the belief that visualizations aid in cognitive tasks. Key research discussed includes that of Zhang and Norman (1995) on external representations including visualization, that of Zacks and Tversky on graphic communication, and that of Hutchins (1995) on navigation. These and other authors have approached the question of the effect of visualization on cognition. The goal of this paper is to approach a general understanding of why visualization is perceived as such a powerful tool, by examining relevant past and current research.
Center for Cognitive Science
Tuesday, April 15, 2003
2:30 pm - 3:45 pm
Slee Concert Hall, North Campus
Distinguished Speaker Series 2003
PRESENTS
Philip Johnson-Laird, Ph.D.
Department of Psychology
Princeton University
"How we Reason"
A long-standing tradition postulates that human thinking is rational because it is founded on the 'laws of thought'. This talk argues to the contrary that reasoning is not based on such laws, but on the ability to envisage possibilities. A conclusion is judged to be valid if it holds in all such MENTAL MODELS of the given information, and probable if it holds in most of them. This theory is based on three main principles: each mental model represents a possibility; the structure of models corresponds to the structure of what they represent; and models normally represent only what is true. The talk outlines the evidence corroborating the theory from behavioral and brain-imaging studies. Inferences from one model are easier than inferences from multiple models. Knowledge affects the process of reasoning. And, if falsity matters, reasoners commit systematic fallacies. Humans are not always rational, but they are not intrinsically irrational, either.
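A highly simplified sketch of the core definitions follows (a conclusion is valid if it holds in every model of the premises, and probable if it holds in most of them). It enumerates fully explicit possibilities, so it ignores the principle that models normally represent only what is true; the premises are invented examples:

```python
from itertools import product

def models(atoms, premises):
    """All assignments of truth values to the atoms in which every premise holds."""
    assignments = [dict(zip(atoms, values))
                   for values in product([True, False], repeat=len(atoms))]
    return [m for m in assignments if all(p(m) for p in premises)]

def valid(conclusion, ms):
    return all(conclusion(m) for m in ms)                 # holds in every model

def probable(conclusion, ms):
    return sum(conclusion(m) for m in ms) > len(ms) / 2   # holds in most models

# Premises: "A or B (or both)" and "not A".
ms = models(["A", "B"], [lambda m: m["A"] or m["B"], lambda m: not m["A"]])
print(valid(lambda m: m["B"], ms))     # True: B follows validly
print(probable(lambda m: m["A"], ms))  # False
```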
Johnson-Laird was born in Yorkshire, England. He left school at the age of 15 and spent ten years in a variety of occupations until he went to University College, London to read psychology. He later gained his Ph.D. there under the supervision of Peter Wason, and he joined the faculty in 1966. In 1971, he was a visiting member of the Institute for Advanced Study, Princeton, where he began a collaboration with George A. Miller. Subsequently, he held positions at the University of Sussex (1973-1981) and at the Medical Research Council's Applied Psychology Unit (1981-1989) in Cambridge, where he was also a Fellow of Darwin College. He returned to Princeton in 1989 to be a member of the faculty at the University, where he is the Stuart Professor of Psychology. He has published ten books and over two hundred papers. He is married and has two children. In his spare time, if he had any, he would play modern jazz piano.
Sponsored by: Department of Psychology, Samuel P. Capen Chair of Anthropology, Department of Computer Science and Engineering, The C.S. Peirce Professorship in American Philosophy, Department of Philosophy
Wednesday, April 16, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Philip Johnson-Laird, Ph.D.
Department of Psychology
Princeton University
"Naive Causality:
A Theory of Causal Meaning and Reasoning"
This talk outlines a theory and computer implementation of causal meanings and reasoning. The meanings depend on possibilities, and there are four weak causal relations: A causes B, A prevents B, A allows B, and A allows not-B, and two stronger relations of cause and prevention. Individuals represent these relations in mental models of what is true in the various possibilities. The theory predicts a number of phenomena, and the talk presents experiments corroborating these predictions. Contrary to many accounts, the meaning of causation is not probabilistic, causes differ in meaning and logic from enabling conditions, and causal reasoning does not depend on schemas or rules.
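One way to render relations defined over possibilities in code is sketched below; the particular possibility sets are an illustrative reading of the model theory described above, not a transcription of the authors' implementation:

```python
# Each relation between A and B is represented by the set of (A, B)
# contingencies it allows; each weak relation below rules out exactly one
# of the four contingencies. These assignments are illustrative assumptions.
ALL = {(True, True), (True, False), (False, True), (False, False)}

weak_relations = {
    "A causes B":     ALL - {(True, False)},   # no possibility of A without B
    "A prevents B":   ALL - {(True, True)},    # no possibility of A together with B
    "A allows B":     ALL - {(False, True)},   # B does not occur without A
    "A allows not-B": ALL - {(False, False)},  # not-B does not occur without A
}

def consistent(relation, observation):
    """Is an observed (A, B) contingency consistent with the stated relation?"""
    return observation in weak_relations[relation]

print(consistent("A causes B", (True, False)))  # False: A occurred but B did not
print(consistent("A allows B", (True, False)))  # True: allowing is weaker than causing
```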
Wednesday, April 23, 2003
2:00 pm - 4:00 pm
280 Park Hall
North Campus
Kathryn Murphy, Ph.D.
Visual Neuroscience Laboratory,
Department of Psychology
McMaster University, Hamilton, Canada
"Seeing the Light: Optical Imaging of Function in Animal and Human Cortex"
Wednesday, September 3, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Simon Liversedge, Ph.D.
Department of Psychology
University of Durham, U.K.
"Reading Disappearing Text:
Is there a Gap Effect during Reading?"
I will report data from three experiments investigating the influence of making the words of a sentence disappear during reading.
Experiment 1 investigated whether we could induce a "gap effect" (Saslow, 1967) during reading. Sentences were read under normal or disappearing text conditions (in which the word that was fixated disappeared after 60 ms). We predicted that reading speed would increase if a gap effect occurred under the disappearing text conditions. The experimental sentences also contained high or low frequency and long or short target words.
Reading speed remained constant under both presentation conditions, and there was no evidence for a gap effect. However, readers fixated longer on low-frequency words than on high-frequency words, even when the text had disappeared after 60 ms. These results indicate that while visual information is important for reading, the cognitive processes associated with understanding the fixated words drive the eyes through the text. In the second experiment we replaced each word with a mask rather than having it disappear, and in the third experiment we made two words rather than one disappear. The masking experiment permits investigation of the role of iconic memory in Experiment 1. Increasing the window of disappearing text allows us to observe the impact of disappearing text on both preprocessing of the word to the right of the fixated word and processing of the fixated word itself. All three experiments will be discussed in relation to current models of eye movement control during reading.
Reference: Saslow M G (1967). Effects of components of displacement-step stimuli upon latency for saccadic eye movement. Journal of the Optical Society of America, 57, 1030-1033.
Wednesday, September 17, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Laurence Harris, Ph.D.
Department of Psychology and Biology
Centre for Vision Research, York University
"Multimodal Representation of Space and Time"
Perception is a constructive process. We use our various senses to deduce or construct a mental representation of what is out there. In the case of seeing colours, it seems reasonably intuitive that colours must be only inside the head, since light rays differ only along a frequency continuum. But other aspects of the world, including the layout of space, must also be centrally constructed.
Our representation or construction of space involves many different sensory and cognitive systems: it is multimodal. Our visual and auditory systems can tell us the direction of features in the world and sometimes their distance, but it is mainly by interacting with the world that we generate our full perception of space. Interacting involves moving around, which introduces other information that is measured by other systems. Thus the brain has to combine information from several different sources to produce its best guess about what is out there.
Sometimes the stories coming from different senses are not compatible with one another. This happens often in unusual environments such as when below deck on a boat. There the cabin moves back and forth with us and seems visually stable. But the vestibular system picks up the swell of the boat and gives us a different story. Which should we rely on to determine our orientation?
The senses also give different information because of the different properties of the sense organs themselves. For example, it takes longer for the retina to convert light energy into nervous activity than it does for the ear. Which sense should we rely on to determine when things happen?
I will talk about how the brain tries to resolve these conflicts. I will illustrate this talk with experiments that my research team have done in which we have given the brain the hardest time by giving it bigger conflicts than usual. We have separated auditory and visual timing cues by looking at the perception of distant events when the sound takes appreciably longer to reach the observer than the light, and we have separated visual from other cues to space and movement by using virtual reality systems. These experiments reveal some interesting strategies that our brain adopts while trying to make a multimodal representation of what's out there.
Wednesday, October 1, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Gregory Ward, Ph.D.
Department of Linguistics
Northwestern University
"Who's the Ham Sandwich?
A Pragmatic Analysis of Deferred Equatives"
Previous accounts of DEFERRED REFERENCE (e.g. Nunberg 1995) have argued that all (non-ostensive) deferred reference is the result of MEANING TRANSFER, a shift in the sense of a nominal or predicate expression. An analysis of deferred equatives (e.g. I'm the ham sandwich) suggests an alternative account based on the notion of PRAGMATIC MAPPING: a contextually licensed mapping operation between (sets of) discourse entities neither of which undergoes a transfer of meaning. Moreover, the use of a deferred equative requires the presence of a contextually licensed OPEN PROPOSITION whose instantiation encodes the particular mapping between entities, both of which remain accessible to varying degrees within the discourse model. Finally, it is shown how a complete account of deferred reference must provide for transfers of reference as well as meaning.
Wednesday, October 8, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Phillips Stevens Jr., Ph.D.
Department of Anthropology
University at Buffalo
"The Principles of Magic
and Magical Thinking"
Ethnology allows us to identify six components of what we can call a magical worldview, or magical thinking. The conclusion that at least some of these are universal allows us to look beneath the level of culture for an explanation, as the doctrine of cultural universals has presumed reasonable for several decades. Now neuroscience may have provided confirmation for this presumption. Discoveries at UCLA in 1999 and 2001 suggest that one component of magical thinking, the principle of similarity, is fundamental to human cognition, even rooted in neurobiology. Illustrated with slides.
Phillips Stevens, Jr., is Associate Professor of Anthropology. He has conducted fieldwork in West Africa, the Caribbean, and urban areas of North America. He has authored or edited several books and numerous articles in cultural anthropology and African studies, particularly in areas of religion, folklore, and cultural change. In 1993 he received a SUNY Chancellor's Award for Excellence in Teaching, and in 2000 the UB Student Association gave him a Milton Plesur Award. He is currently working on a major book on the anthropology of magic, sorcery and witchcraft.
Wednesday, October 15, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Mary Hare, Ph.D.
Department of Psychology
Center for Neuroscience, Mind, and Behavior
Bowling Green State University
"Admitting that admitting sense
into structural analyses makes sense"
Linguistic, developmental, and psycholinguistic research has documented a close relationship between the meaning of a verb and the syntactic structures in which that verb occurs, and has also shown that learners and comprehenders take advantage of this relationship in acquisition and processing. I will address the implications of these facts for issues in structural ambiguity resolution. It has been shown that readers are sensitive to the statistical bias of a verb towards particular syntactic structures. I will argue that verb bias effects reflect the comprehender's awareness of meaning-structure correlations, so that structural biases are based not on the verb itself (as is generally assumed in this literature) but on specific verb senses. I will first demonstrate that individual verbs show significant differences in their subcategorization profiles across parsed corpora, but that the cross-corpus bias estimates are much more stable once sense is taken into account. I will then show that consistency between sense-contingent subcategorization biases and experimenters' classifications distinguishes recent experiments that found effects of verb bias from those that did not. Finally, I will present results of a self-paced reading experiment showing that comprehenders expect different structural continuations after identical verbs, depending on which sense of the verb was used. These results argue that comprehenders learn and exploit meaning-form correlations at the level of individual verb senses, rather than the verb in the aggregate.
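A toy illustration of the contrast between a verb-level structural bias and a sense-contingent one (the senses, structures, and counts are invented):

```python
from collections import Counter

# Hypothetical counts of (verb sense, structure) pairs for 'admit' from a
# parsed corpus: the "confess" sense vs. the "let in" sense, each occurring
# with a sentential complement (SC) or a direct object (DO).
observations = (
    [("confess", "SC")] * 40 + [("confess", "DO")] * 10
    + [("let-in", "DO")] * 45 + [("let-in", "SC")] * 5
)

def bias(pairs, sense=None):
    """P(structure) for the verb overall, or conditioned on a single sense."""
    counts = Counter(s for (sns, s) in pairs if sense is None or sns == sense)
    total = sum(counts.values())
    return {structure: n / total for structure, n in counts.items()}

print(bias(observations))                   # verb-level bias: close to 50/50
print(bias(observations, sense="confess"))  # strongly SC-biased
print(bias(observations, sense="let-in"))   # strongly DO-biased
```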
Wednesday, October 29, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Michael Weliky , Ph.D.
Department of Brain and Cognitive Science
Center for Visual Research
University of Rochester
"Population Coding of Natural Scenes in Primary Visual Cortex"
We investigated the coding of natural scenes in primary visual cortex of the ferret using a new technique for multi-site recording of neuronal activity. This method allowed the simultaneous recording of neural activity from up to 60 separate sites on the cortical surface, with fidelity equivalent to a layer 2/3 recording, without penetrating the brain. At individual sites, evoked activity to natural scenes was only weakly correlated with the local image contrast structure that fell within the cells' classical receptive field and orientation/spatial frequency tuning. However, a population code, derived from activity integrated across cortical sites having retinotopically overlapping receptive fields, correlated strongly with the local image contrast structure. Center-surround interactions did not significantly alter these correlations. Cell responses demonstrated high lifetime and population sparseness as well as high dispersal values, implying that the neural coding is highly efficient in terms of information processing. These results indicate that while cells at an individual cortical site do not provide a reliable estimate of the local contrast structure in natural scenes, the integrated response of cells across distributed, but retinotopically overlapping, cortical sites is closely related to this structure in the form of a sparse and dispersed code.
Wednesday, November 5, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Daniel Jurafsky , Ph.D.
Department of Linguistics and
Center for Cognitive Science
University of Colorado
"Probabilistic language processing by humans (mainly) and machines (briefly)"
This talk summarizes a number of results from our lab on the role of probabilistic and statistical knowledge in comprehension, learning, and production of language, at many levels (phonological, syntactic, semantic, pragmatic) by humans (mainly) and machines (too). In comprehension, I'll talk about ambiguity at many levels (lexical, syntactic, semantic, and discourse) and how various probabilistic models can be used i) cognitively to account for psycholinguistic results on human ambiguity processing, and ii) engineeringly to build shallow semantic and pragmatic understanders for sentences. In production I'll present our experiments on lexical production which suggest that humans compute the probability of each word they say to help determine the surface form the words should take. In learning I'll talk about how some kinds of linguistic structure can be viewed as a `learning bias' and combined with empirical, distributional learning to attack the problem of learning phonological and morphological structure. This talk describes joint work with all sorts of really smart people.
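As a rough illustration of the production claim only (not the lab's models), a word's conditional probability can be estimated from bigram counts and then related to how reduced its surface form is; the corpus and counts here are invented:

```python
from collections import Counter

# Tiny invented corpus; in practice the counts would come from a large corpus.
corpus = "i do not know . i do not care . you do know .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(word, prev):
    """P(word | prev) by maximum likelihood from the bigram counts."""
    return bigrams[(prev, word)] / unigrams[prev]

# The hypothesis sketched above: high-probability words (e.g. 'not' after 'do')
# should be more likely to surface in reduced form than low-probability ones.
print(p_next("not", "do"))   # 0.666...
print(p_next("know", "do"))  # 0.333...
```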
Wednesday, November 12, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Maureen Donnelly, Ph.D.
Institute for Formal Ontology and Medical Information Science
University of Leipzig and
University at Buffalo
"A Layered Theory of Places"
In his Scholium to the Principia, Newton distinguishes between absolute places and relative places. He points out that in ordinary spatial reasoning, we use relative places, not absolute places, to locate objects and track their movements. A relative place is any location, such as the interior of a ship, whose boundaries are determined by its standing in a fixed spatial relation to some material object. Taken together, the collection of all relative places is a jumble of locations that may move through or around one another. The purpose of this talk is to present a method for infusing order into this mess by partitioning the set of all relative places into separate collections, called "layers", in such a way that all members of a single layer stand in a fixed relation to one another. This approach is presented as a formal theory (in standard first-order predicate calculus) intended to support reasoning about and representation of places.
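An informal data-structure sketch of the layer idea appears below, grouping relative places by the material object whose frame fixes them; this is only an illustration, not a rendering of the talk's first-order theory, and the names are invented:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class RelativePlace:
    """A place whose boundaries are fixed relative to some material object."""
    name: str
    anchor: str  # the object (e.g. 'ship', 'Earth') that fixes the place

places = [
    RelativePlace("the ship's cabin", anchor="ship"),
    RelativePlace("the ship's deck", anchor="ship"),
    RelativePlace("the harbor", anchor="Earth"),
    RelativePlace("the dock", anchor="Earth"),
]

# A layer collects the places fixed by the same anchor, so all members of one
# layer stand in unchanging spatial relations to one another, even though
# places in different layers may move through or around one another.
layers = defaultdict(list)
for p in places:
    layers[p.anchor].append(p)

def same_layer(p1, p2):
    return p1.anchor == p2.anchor

print(same_layer(places[0], places[1]))  # True: cabin and deck move together
print(same_layer(places[0], places[2]))  # False: the cabin moves through the harbor
```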
Wednesday, December 3, 2003
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Eric Little, Ph.D.
Center for Multisource Information Fusion (CMIF)
Department of Industrial Engineering
University at Buffalo
"Theoretical Foundations of Threat Ontology (ThrO)"
The study of metaphysics can be divided into two distinct branches: the first being ontology and the second being epistemology. Ontology is the study of what is, what exists, what can be logically categorized. Ontologists attempt to capture the most basic structures of reality by developing accurate and comprehensive formal systems that transparently model existing places, times, entities, properties, and relations. An ontologist should operate like an empirical scientist, meaning they should attempt to provide an accurate, third person-based, independently observable description of the world, apart from cultural, linguistic, or other types of cognitive biases. Epistemology, conversely, deals with theories of knowledge and the mental operations of agents (i.e., knowers) who are their bearers. Epistemology is unlike ontology, in that it is unable to provide a third person-based, independently observable description of the world. Knowledge is always tied to some agent who possesses it, since it is the product of the neurological functions of that agent's brain, and the latter are part of the objective, ontological furniture of the world. Threats are special kinds of items which possess both ontological (objective, veridical) and epistemological (subjective, perceived) components. This talk will attempt to elucidate some of the relations between the ontological and epistemological components of threats and threat conditions.