Wednesday, February 1, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Craig Chambers, Ph.D.
Dept. of Psychology
University of Toronto
"Referential anticipation in
real-time language comprehension"
Studies of spoken language comprehension have shown that listeners often anticipate upcoming referents as sentences unfold. For example, predicate terms such as "eat" or "inside" are used to rapidly narrow the domain of referential candidates to edible things/containers in the contextual environment. To a large degree, these outcomes can be accounted for by embodied theories of language understanding, e.g., where linguistic meaning is grounded in mental simulations of perception and action. In this talk, I will present a series of studies that evaluate whether anticipatory processes can be explained without appealing to aspects of linguistic or conceptual knowledge that are not naturally captured in embodied approaches. The outcomes reveal that this knowledge strongly constrains the use of perceptual and action-based information in referential predictions, even with predicate terms that denote concrete and perceptible physical events.
Wednesday, February 8, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Jeri Jaeger, Ph.D.
Department of Linguistics
University at Buffalo
"Putting the 'psycho' in
developmental psycholinguistics"
Wednesday, February 15, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Ann Pier Salverda, Ph.D.
Department of Brain and Cognitive Sciences
University of Rochester
"Eye movements reveal sensitivity to prosodically conditioned detail in spoken-word recognition"
In the first part of my talk, I will discuss data from a number of eye-tracking experiments that examined the role of fine-grained prosodically conditioned detail in the recognition of spoken words. In these experiments, participants followed spoken instructions to manipulate objects in a visual display. Eye movements to the objects provide a sensitive measure of the dynamics of the lexical competition process as speech unfolds, showing that prosodically conditioned detail has systematic effects on the lexical competition process.
In the second part of my talk, I will present some data that are concerned with the nature of language-mediated eye movements. I will discuss two experiments that examined how spoken language influences visual attention in the context of a visual world. The findings show that objects referred to by spoken language can capture visual attention, suggesting cross-talk between language and vision.
Wednesday, February 22, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Student Presentations (1)
Wednesday, March 1, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Student Presentations (2)
Wednesday, March 8, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Douglas Roland, Ph.D.
Department of Linguistics
University at Buffalo
Frequency of Basic English Grammatical Structures:
A Corpus Analysis
Many recent models of language comprehension have stressed the role of distributional frequencies in determining the relative accessibility or ease of processing associated with a particular lexical item or sentence structure. However, there exist relatively few comprehensive analyses of structural frequencies, and little consideration has been given to the appropriateness of using any particular set of corpus frequencies in modeling human language. We provide a comprehensive set of structural frequencies for a variety of written and spoken corpora, focusing on structures that have played a critical role in debates on normal psycholinguistics, aphasia, and child language acquisition, and compare our results with those from several recent papers to illustrate the implications and limitations of using corpus data in psycholinguistic research.
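To make the notion of a structural-frequency count concrete, here is a minimal sketch (not Roland's actual corpora, category scheme, or methodology) that tallies a few coarse phrase types in the small Penn Treebank sample bundled with NLTK; the label set and per-sentence normalization are illustrative assumptions only.

```python
# Illustrative sketch only: tallying a few coarse phrase types in NLTK's small
# Penn Treebank sample. This is not the corpus set or category scheme from the
# talk; it just shows the general shape of a structural-frequency count.
from collections import Counter

import nltk
from nltk.corpus import treebank

nltk.download("treebank", quiet=True)

LABELS_OF_INTEREST = {"NP", "VP", "PP", "SBAR"}  # assumed labels, for illustration

counts = Counter()
sentences = treebank.parsed_sents()
for tree in sentences:
    for subtree in tree.subtrees():
        # Strip function tags such as "-SBJ" before comparing labels.
        label = subtree.label().split("-")[0]
        if label in LABELS_OF_INTEREST:
            counts[label] += 1

n_sents = len(sentences)
for label, n in counts.most_common():
    print(f"{label}: {n} total, {n / n_sents:.2f} per sentence")
```

A real study of this kind would, of course, compare such counts across written and spoken corpora and relativize them to the constructions at issue in the psycholinguistic debates.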
Wednesday, March 29, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Barbara Landau, Ph.D.
Department of Cognitive Science
Johns Hopkins University
"Starting at the end: The importance of goals
in spatial language and spatial cognition"
A hallmark of human cognition is our capacity to talk about what we see. How is this accomplished? Given that language and spatial representations are likely to have quite different kinds of structures, the challenge is to understand how such apparently different systems of knowledge map onto each other, and how these mappings are learned. In my talk, I will discuss this problem with respect to the language of events, including manner of motion, change of possession, attachment/detachment, and change of state events. I will focus on evidence from normally developing children and children with Williams syndrome, a rare genetic deficit that gives rise to an unusual cognitive profile of profoundly impaired spatial representations together with spared language. The evidence shows that a fundamental property of event semantics, an asymmetry between source and goal expressions, is a pervasive fact about the linguistic description of events. Ancillary evidence suggests that this asymmetry is also a part of our non-linguistic representations, appearing in non-linguistic tasks among infants, children, and adults. As a whole, the results suggest a homology between spatial language and spatial representation, thereby providing a partial solution to the problem of mapping dissimilar domains onto each other.
Wednesday, April 5, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Josephine Anstey, Ph.D.
Department of Media Study
University at Buffalo
Interactive Fiction: Dreams and Realities
Science Fiction ("The Veldt", Neuromancer, the Holodeck) has promised us fully immersive interactive narrative for decades. In the 1980s and 90s, hype about artificial intelligence and virtual reality, the eagerly awaited marriage between video gaming and Hollywood, and hypertext all suggested the promise was about to be fulfilled. But killer interactive fiction has not emerged. Story has been a driver/colonizer of literary forms (poetry, drama, the novel) and mass media (print, radio, film, TV), but is it failing to conquer the interactive and procedural realm of the computer? This talk looks at what an interactive story form could or should be. It explores the roles of the participant, the author, and the computer within such a form, and is informed by my art/literary practice, which uses immersive virtual reality and intelligent agents to create dramatic experiences for a participant.
Wednesday, April 12, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Fabian Neuhaus, Ph.D.
Department of Philosophy, UB
National Center for Ontological Research
A Formal Theory of Family Resemblance
One of Wittgenstein's major insights is about the nature of meaning: According to him, it is often impossible to link the meanings of words to necessary and sufficient conditions. His prime example is the word 'game': soccer, chess, table tennis, and Pac-Man are very different. It seems to be futile to look for (non-trivial) necessary and sufficient conditions that allow us to categorize them as games. Hence, according to Wittgenstein, we should reject any theory that ties meaning to strict conditions. Meaning arises from the praxis of speakers; it is the result of 'language games'.
'Language games', 'family resemblance', and similar metaphors are not really helpful if one wants to build a formal semantics. My aim is to present a rigorous account of Wittgenstein's notion of family resemblance. By the very nature of formal semantics, my proposal will involve crystal-clear truth conditions for propositions like 'This is a game'; however, it will do so in a way that is consistent with Wittgenstein's insights.
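As a loose illustration only (this is not Neuhaus's formal account), one familiar way to state perfectly precise truth conditions without any single set of necessary and sufficient conditions is a threshold over a weighted cluster of features; the features, weights, and threshold below are invented for the example.

```python
# Minimal sketch (NOT the talk's formalization): a threshold over a cluster of
# weighted features gives exact truth conditions for "x is a game" even though
# no single feature is necessary and no small fixed subset is sufficient.
GAME_FEATURES = {                 # illustrative features and weights (assumed)
    "has_rules": 2.0,
    "has_winners_and_losers": 1.0,
    "played_for_amusement": 1.5,
    "requires_skill": 1.0,
    "involves_competition": 1.0,
}
THRESHOLD = 3.0                   # assumed cut-off

def is_game(features):
    """True iff the weighted features present reach the threshold."""
    return sum(w for f, w in GAME_FEATURES.items() if f in features) >= THRESHOLD

print(is_game({"has_rules", "played_for_amusement"}))      # True  (3.5)
print(is_game({"has_rules", "involves_competition"}))       # True  (3.0)
print(is_game({"played_for_amusement"}))                    # False (1.5)
```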
Wednesday, April 19, 2006
2:00 pm - 4:00 pm
280 Park Hall, North Campus
John Ohala, Ph.D.
Department of Linguistics
University of California at Berkeley
"The phonetics-phonology interface ... again."
In 1990 I argued against the notion that there is a phonetics-phonology interface (“There is no interface between phonetics and phonology: a personal view”, J. Phonetics 18.153-171). I return to this argument and give added evidence that phonology can do its best job of explaining sound patterns in language by incorporating phonetics. Among the examples treated are: (a) There are asymmetries in the direction of change and alternation between certain sounds, e.g., palatalized velars and apicals, and palatalized labials and apicals. The explanation lies in the acoustics of these sounds and their perception. (b) Vowel nasalization occasionally arises when the conditioning environment is not a nasal, e.g., near [h] and other voiceless fricatives. The explanation lies in the acoustic effects these sounds have on adjacent vowels. (c) What have been called “epenthetic” stops are found in a variety of contexts, e.g., nasal C ___ fricative: warm[p]th, and lateral ___ fricative: el[t]se. A somewhat rare instance involves the context labial nasal ___ apical nasal: Old English nemna ~ nempne “name”. Physiological phonetics can give a unified account of these cases. Phonology without phonetics (and without sociological and psychological causal factors) reduces to simple description, no matter how much arcane formalism is added to it.
September
9/6
Business Meeting
9/13
Gail M. Seigel, Department of Ophthalmology, University at Buffalo
The Retina: From Bench to Bedside
Part 1: Evidence for Cancer Stem Cells in Retinoblastoma (40 minutes)
Retinoblastoma is the most common intraocular tumor of childhood, occurring in both germline (40%) and sporadic (60%) forms. Retinoblastoma treatment presents a clinical challenge with the incidence of metastatic or secondary tumors that impact life span and quality of life. The cellular origin of retinoblastoma has remained elusive for the past 20 years. In Part 1 of my presentation, I will show evidence for the presence of cancer stem cells in retinoblastoma that may be responsible for chemo-resistance and tumor progression.
Part 2: Non-invasive, high-resolution retinal imaging and autofluorescence in Batten Disease patients and carriers (20 minutes)
Visual loss due to retinal degeneration is an early manifestation in neuronal ceroid lipofuscinosis (NCL, Batten Disease), a fatal neurodegenerative lysosomal storage disease. Delays in diagnosis are common, as retinal changes are subtle in early stages of NCL. We are collecting high-resolution infra-red and autofluorescence (AF) images from NCL patients, carriers, and normal controls in order to develop more reliable diagnostic measures. One detail, exclusive to the NCL subjects, is a fine fingerprint pattern with an orientation following Henle's nerve fiber layer. No such fingerprint patterns were seen in carriers or normal controls. We hypothesize that these retinal fingerprint patterns are consistent with a sparse density of nerve fiber layer bundles accompanying optic atrophy. With the use of non-invasive, high-resolution retinal imaging, we hope to hasten and improve the diagnosis of Batten Disease.
Host: Bill Rapaport
9/20
Jürgen Bohnemeyer, Department of Linguistics, University at Buffalo
The Macro-Event Property: The Segmentation of Causal Chains
Languages vary in the constraints they impose on the segmentation of complex event representations across units of syntax. Previous studies of this variation have used syntactic (Pawley 1987) or intonational units (Givón 1991) as criteria. The principal limitation of these approaches comes from the lack of a universally valid correlation between syntactic or intonation units and semantic/conceptual event representations. I propose that the 'macro-event property' (MEP) is a better typological criterion of event segmentation: a construction has the MEP if it packages event representations in such a way that temporal operators necessarily have scope over all subevents. This definition spells out a number of readily administered semantic tests. I introduce this approach drawing on research in the domain of complex motion events, to then move on to a study of the segmentation of causal chains in four languages: Ewe, Japanese, Lao, and Yukatek Maya. In both studies, an unpredicted amount of crosslinguistic variation emerges, driven by differences in lexicalization and the availability of constructions.
Host: Bill Rapaport
9/27
Chris Welty, IBM Watson Research Center (Hawthorne)
Ontology Quality and the Semantic Web
One of the guiding principles of the web and its machine-interpretable successor, the semantic web, is to "let a million flowers bloom." HTML was based on technology nearly two decades old at the time (hypertext), for which a research community concerned mainly with human-computer interaction had been investigating the "right way" to use hypertext for effective communication. The vast majority of early HTML pages completely ignored this, and yet the web thrived. Still, as the web became a serious medium for dissemination, institutions for whom effective communication was critical did begin to take this research seriously, and today's highly visible web pages are designed by people with experience and training in how to "do it right". The progress and evolution of the semantic web should follow the same path: the semantic web standards (RDF, OWL, and RIF) are based on decades-old technology from Knowledge Representation and Databases, and for about 15 years there has been a research community associated with this field that has studied the "right way" to use these systems. This field, which I will call "ontology engineering" for this talk, is concerned among other things with ontology quality and its impact. In this talk I will discuss research on characterizing ontology quality and measuring the impact of quality on knowledge-based systems.
Host: Stuart Shapiro
October
10/4
Neil Feit, Department of Philosophy, SUNY Fredonia
Individualism, Physicalism, & Belief about the Self
According to the traditional view in the philosophy of mind, when you believe or desire something, the content of your belief or desire is a proposition. Propositions have truth values (true or false) in an absolute sense, e.g., a proposition cannot be true for me but false for you. I argue against this traditional view. My arguments have to do with Individualism (roughly, the view that what you believe and desire is determined only by what is going on inside your head) and Physicalism (roughly, the view that the physical facts of the world determine all the facts, including the mental ones). A special kind of belief plays a central role in my arguments. This is belief about the self, which consists in the beliefs that we express using the first-person pronoun 'I'. I argue that we should replace the traditional view with a theory designed for this kind of belief.
Host: Bill Rapaport
10/11
Barry Smith, Department of Philosophy, University at Buffalo
"Why I Am No Longer A Philosopher "
Host: Bill Rapaport
10/18
Rolf Zwaan, Department of Psychology, Florida St. U.
Perceptual and Motor Representations in Language Comprehension:
Essential or Ornamental?
Current theorizing in various fields, including neuroscience, psychology, philosophy, computer science, and linguistics, suggests that cognition involves the reactivation of perceptual and motor representations acquired during interaction with the world. This reactivation is viewed as a mental simulation. Most of the evidence for the simulation view comes from studies on motor control, action observation, and imitation, but simulation theories have also been proposed with regard to language comprehension. These theories conceptualize language comprehension as the mental simulation of the referential situation. I will review some work from my laboratory, as well as from other labs, that has examined the role of perceptual and motor representations in language comprehension. For example, I will show how incidentally acquired visual information infiltrates reading comprehension. I will also review experiments that reveal interactions between linguistic content and motor responses. I will then ask whether this ensemble of findings provides sufficient evidence for a mental simulation view of language comprehension. I will consider how language comprehension is similar to motor control, action observation, and imitation, but also how it differs from these cognitive/motor skills. In doing so, I hope to shed light on the question posed by the title of this talk.
Host: Gail Mauner & Jean-Pierre Koenig
10/25
Steve Petersen, Department of Philosophy, Niagara U.
The Ethics of Robot Servitude
Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve (more or less particular) human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Disanalogies and unexpected difficulties from population ethics block this objection, however. The tentative conclusion is that we have no good reason to think robot servitude impermissible.
Host: Bill Rapaport
November
11/1
John Kearns, Department of Philosophy, University at Buffalo
Restoring the Balance in Logic
Historically, the study of logic, and investigations in logic, initially had an epistemological, or epistemic, focus. Logic was concerned to characterize, and to understand, arguments, deductions, and proofs. But traditional Aristotelian logic investigated these matters for a highly restricted language, a language inadequate for saying most of the things we say and want to say. It was inadequate for stating and proving mathematical results. Late in the nineteenth century, logic had a new beginning. More adequate logical languages were developed and investigated. But the focus of this renewed logic was no longer epistemic. Instead, the new logic was primarily concerned with the ontological, or the ontic. It tried to develop languages that are adequate to the things and structures we encounter, investigated the truth conditions of sentences in these languages, and developed methods to establish that certain logical laws are true in any circumstances, or world, at all, as well as methods for showing that certain sentences are logical consequences of others. A more recently developed field of logic, illocutionary logic, which is the logic of speech acts, or language acts, accommodates both the ontic and the epistemic concerns of logic by enriching (expanding) the languages of logic and extending the semantics to incorporate both ontic and epistemic elements. Systems of illocutionary logic provide the resources needed to characterize and understand arguments, deductions, and proofs. These systems also provide what is needed to explain certain phenomena about language and to resolve certain puzzles that have proved difficult for standard logic to handle. In this talk, a simple system of illocutionary logic will be sketched, and then used to clear up Moore's Paradox and to provide an account of conditional assertions.
Host: Bill Rapaport
11/8
Albert Goldfain, Department of Computer Science & Engineering, University at Buffalo
Reasoning about Effect-Equivalent Acts
Aspects of a Computational Theory of Early Mathematical Cognition
The primary focus of my research is a computational characterization of mathematical cognition and mathematical understanding. This involves an interdisciplinary investigation into the mechanisms underlying mathematical ability. Significant contributions in this emerging field have come from research in psychology, education, linguistics, neuroscience, and the philosophy of mathematics. The impact of computer science (and symbolic artificial intelligence in particular) has yet to be felt. Although computers have irrevocably changed the face of modern science and mathematics, very few sustained efforts have been undertaken to computationally model everyday human quantitative abilities. Such a computational characterization should serve two purposes: (1) It should provide a cognitively plausible theory of the human activities associated with mathematical understanding, and (2) it should serve as a suitable model on which to base the computational implementation of math-capable cognitive agents.
In this talk, a formalization is given for the following common-sense reasoning problem: How can a computational cognitive agent determine that two of its procedures do the "same thing"? I examine two related problems in particular: (1) If an agent knows that P is a procedure for doing act A, how might it recognize that P' is also a procedure for doing act A, and (2) if an agent knows that P is a procedure for doing act A, how might it generate another way of doing P that is not trivially similar to P? I will discuss the relevance of these problems to a theory of computational math cognition.
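As a toy illustration only (not the formalization presented in the talk), one blunt way an agent might test whether two procedures "do the same thing" is to run both from the same sample of starting states and compare the resulting states; the example procedures and state representation below are invented for the sketch.

```python
# Toy sketch (not the talk's formal theory): check effect-equivalence of two
# procedures by comparing the end states they produce from sampled start states.
import copy

def count_up(state):
    """Add 3 to the counter by repeated increments."""
    for _ in range(3):
        state["counter"] += 1

def jump_up(state):
    """Add 3 to the counter in one step."""
    state["counter"] += 3

def effect_equivalent(proc_a, proc_b, sample_states):
    """Return True if both procedures yield identical end states on every sample."""
    for start in sample_states:
        a_state, b_state = copy.deepcopy(start), copy.deepcopy(start)
        proc_a(a_state)
        proc_b(b_state)
        if a_state != b_state:
            return False
    return True

samples = [{"counter": n} for n in range(5)]
print(effect_equivalent(count_up, jump_up, samples))  # True: same effect, different procedures
```

Such output-comparison testing is only a heuristic, of course; a cognitive theory of the kind described in the talk would need a principled account of acts and their effects rather than brute sampling.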
Host: Bill Rapaport
11/15
Alan Lockwood, Department of Neurology, University at Buffalo
To be, or not to be, that is the question
A PET and electrophysiological study of an auditory oddball task
We studied responses to an auditory continuous performance task using positron emission tomography (PET) and a newly developed method for making statistically rigorous comparisons of current density reconstructions (CDRs) derived from event-related potentials. This combination yielded high-resolution spatial and temporal data. PET data showed activation of auditory, left posterior frontal, supplementary and primary motor cortex and the anterior cingulate (AC), with suppression of activity in occipital and posterior cingulate cortices. The CDRs for targets and non-targets in the pre-250 ms interval were indistinguishable, suggesting similar neural sites mediated the target versus non-target decision. No significant CDs were observed after 250 ms in response to non-targets. In response to targets, a prominent P300 wave was observed, as expected, in the post-250 ms interval. CDRs in this interval showed that cingulate cortices were the first to be reactivated, followed by reactivation or deactivation of the same sites observed in the early response to the stimuli. Right cingulate cortex remained active up to and including the final waveform recorded at 672 ms. This suggests that the cingulate plays an important role in post-decisional activity and may control neural activity at other sites involved in post-decisional cognitive processing, including active suppression of the right posterior cingulate and occipital cortices. Our data support a multi-step model mediated by phasic activity in various brain regions, culminating in context updating and a revision of the expectation that a target stimulus occurs with a given probability.
Host: Bill Rapaport
11/29
Ann Bisantz, Department of Industrial Engineering, University at Buffalo
Assessing the Impact of Computerization on Work Practice
A study of Information Technology in Emergency Departments
Hospital emergency departments (EDs) commonly use status boards as tools for managing their clinical work. These are typically large-format, manually updated “whiteboards” that use a grid layout, with bed locations/patients as rows and columns for important patient demographic and medical information. A typical ED could not function today without a whiteboard (or some facsimile of one). Interestingly, despite their central importance to the work of the ED and the safety of patients, they have hardly been studied. For a variety of reasons, some EDs have replaced their manual status boards with electronic, computer-supported versions, and many others are contemplating such a change. The motivations for this change are complex, and typically involve improving the accuracy of information, providing better information to management or other “back end” processes, and improving patient safety. However, insertion of technology into a complex workplace like the ED is never a straightforward substitution of one for the other. New technologies sometimes increase workload instead of decreasing it, and the change in tools also induces changes in work practices, in the relationships among workers, and in the relationship between the affected unit (the ED) and the rest of the organization.
This talk will describe preliminary analyses of a field study that is being conducted to document and analyze the impacts of the shift from manual to electronic whiteboards in the emergency departments of two hospitals.
Host: Bill Rapaport
December
12/6
Susan Udin, Department of Physiology and Biophysics, University at Buffalo
Early Visual Experience Shapes Connections in the Brain
For creatures with two eyes, the left eye views the world and the right eye shares much of the same view. Each eye's optic nerve sends messages about those two views into the brain, where those separate messages are combined into a unified view. This phenomenon requires proper wiring of the connections in the brain. Multiple mechanisms enable correct wiring during brain development, and one crucial mechanism is early visual experience. This factor is especially prominent in the frog Xenopus, and this seminar will focus on how Xenopus neurons change their connections during normal development, how they respond to altered visual input, and how their development can be changed by altering the neurochemical environment. The underlying circuitry will also be discussed.