Wednesday, January 14, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Stephen Goldinger, Ph.D.
Department of Psychology
Arizona State University
"Attributions of Memory: True and
False Recognition of Words, Pictures, and Faces"
Over the past several years, the profound effects of cognitive heuristics have generated renewed interest among researchers. Along with classic domains (e.g., decision making), heuristic processes apparently affect behaviors that are often considered more modular or automatic. One such domain is recognition memory: Although theories rarely specify processes beyond matching input stimuli to stored traces, some researchers (most notably Jacoby and Whittlesea) have suggested that many other factors can affect a person’s decision to answer “old” or “new.” In this presentation, I will briefly review our recent studies examining the fluency heuristic in recognition memory for words and faces. I will then present new data on two other heuristics, generation and resemblance, in memory for Asian and Caucasian faces.
Finally, I will describe three new experiments, testing memory-attribution theory using a subliminal priming technique. This technique decouples stimulation from both perception and memory, thereby allowing an unusually stringent test of the attribution framework. Taken together, the results – especially those involving false memories – underscore the flexible decision processes involved in recognition memory.
Wednesday, January 21, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Phillips Stevens Jr., Ph.D.
Department of Anthropology
University at Buffalo
"Cognitive Implications of Diabolical Witchcraft Beliefs"
The term "diabolical witch" is used here to distinguish this belief from the several other meanings of the loaded words "witch" and "witchcraft." This is the nearly universal complex of beliefs in an evil supernatural being that flies through the night, steals children, and engages in the most despicable acts imagined by people. Allegations of certain of these behaviors have been made in all cultures and throughout history, most recently in the satanism scares of the 1980s and early 1990s. They seem to represent fears deeply-rooted in human nature, like Jung's "archetypes" or Rodney Needham's "primordial characters."
Illustrated with slides.
Phillips Stevens, Jr., is Associate Professor of Anthropology. He has conducted fieldwork in West Africa, the Caribbean, and urban areas of North America. He has authored or edited several books and numerous articles in cultural anthropology and African studies, particularly in areas of religion, folklore, and cultural change. In 1993 he received a SUNY Chancellor's Award for Excellence in Teaching, and in 2000 the UB Student Association gave him a Milton Plesur Award. He is currently working on a major book on the anthropology of magic, sorcery and witchcraft.
Wednesday, February 11, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Matthew Dryer, Ph.D.
Department of Linguistics
University at Buffalo
"Why do languages have nouns and verbs"
The question to be addressed is why most if not all human languages distinguish two word classes (or parts of speech) that we can call 'nouns' and 'verbs'. An initial hypothesis is that the distinction corresponds to a basic ontological or conceptual distinction between things and events. I argue that the view that nouns denote things is seriously confused. Rather, nouns denote what I will call 'kinds'; it is noun phrases, not nouns, that denote things. However, I will also argue against an alternative hypothesis that the noun-verb distinction corresponds to a basic ontological or conceptual distinction between kinds and events. I will propose instead that the noun-verb distinction reflects the different frequencies with which different sorts of words are used in different syntactic functions. In particular, words that are used more frequently as arguments group together into nouns, while words that are used more frequently as predicates group together into verbs. The point is made clearer in languages with a weak noun-verb distinction, in which both nouns and verbs can freely be used as either predicates or arguments. The general idea is that the linguistic categories of noun and verb are due to different frequencies of usage and not to any ontological or conceptual categories.
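As a rough illustration of the frequency-based proposal, the toy sketch below (Python) tallies how often each word occurs as an argument versus as a predicate and groups words by their dominant function. The words and counts are invented for illustration; they are not data from the talk.

# word: (times used as an argument, times used as a predicate) -- invented counts
usage_counts = {
    "dog":   (180, 5),
    "storm": (150, 20),
    "run":   (15, 160),
    "sing":  (8, 170),
}

noun_like = [w for w, (arg, pred) in usage_counts.items() if arg > pred]
verb_like = [w for w, (arg, pred) in usage_counts.items() if pred > arg]
print("noun-like (mostly used as arguments):", noun_like)
print("verb-like (mostly used as predicates):", verb_like)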
The kind of explanation I offer challenges a popular view in linguistics that language is a "window into the human mind". Linguists often make claims about the human mind on the basis of the nature of language, assuming that we can make inferences from the nature of human language to the nature of the mind. But usage-based explanations of the sort proposed here provide an alternative kind of explanation for the nature of language, without making assumptions about the human mind.
Wednesday, February 18, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Peter Wiemer-Hastings, Ph.D.
School of Computer Science, Telecommunication,
and Information Systems
DePaul University
"From Turing to Tutoring:
Latent Semantic Analysis as Cognitive Model
and Budget Natural Language Understanding"
This talk will center on Latent Semantic Analysis (LSA), a vector-based technique for representing and comparing texts. Following a brief history of LSA and a description of the process by which its representations are built, LSA will be discussed as a model for human language learning and representation. Despite the fact that it ignores syntax altogether, LSA has neared or matched human performance on a variety of tasks. For single-sentence texts, however, LSA does not perform well, presumably due (at least in part) to its ignorance of syntax. Research by Dennis et al., Kanejiya et al., and myself has explored augmenting LSA with various types of structural knowledge. This work parallels psychological studies on the effects of structure on similarity judgments. The talk will conclude with descriptions of applications of LSA as an expectation-based natural language understanding mechanism.
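For readers unfamiliar with the technique, the sketch below (Python/NumPy) shows the core LSA pipeline in miniature: build a term-document count matrix, reduce it with a truncated singular value decomposition, and compare texts by cosine similarity in the reduced space. The corpus and the number of latent dimensions are toy values chosen only for illustration.

import numpy as np

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stocks fell sharply on the market",
]

# Vocabulary and raw term-document counts
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# Truncated SVD: keep k latent dimensions (real applications use large corpora
# and a few hundred dimensions; k = 2 is only for this toy example)
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two sentences about sitting animals should be closer to each other
# than either is to the finance sentence.
print(cosine(doc_vectors[0], doc_vectors[1]))
print(cosine(doc_vectors[0], doc_vectors[2]))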
Wednesday, February 25, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Jürgen Bohnemeyer, Ph.D.
Department of Linguistics
University at Buffalo
"When going means becoming gone:
Framing motion as state change in Yukatek Maya"
This presentation discusses the framing of motion as change of location in Yukatek Maya and crosslinguistically. Jackendoff (1983, 1990) advances a number of arguments to the effect that representations of motion events at Conceptual Structure cannot be reduced to state change functions - as suggested by Dowty (1979) and Miller & Johnson-Laird (1976), inter alia - but require primitive conceptual functions representing translational motion and path relations (i.e., relations of motion to, from, past, into, out of a ground, etc.). Jackendoff builds his case in particular on the encoding of 'route' paths, defined with respect to grounds in between source and goal, and on the use of path functions with state descriptions in metaphors of 'fictive motion' (Talmy 1996, 2000). However, evidence from motion event descriptions in Yukatek suggests that Jackendoff's rationale is not universally valid.
In Yukatek, path relations are not expressed in satellites or adjuncts. These semantic distinctions are derived instead from the event structure of verbs of location change with English glosses such as 'enter', 'exit', 'ascend', and 'descend'. That path relations are not lexicalized in these verbs either, and thus not encoded at all in Yukatek, is suggested by the fact that descriptions headed by such verbs are applicable to events in which the location change comes about without the figure moving, e.g., because the ground moves instead of the figure. Such uses of location change verbs were first documented by Kita (1999) for Japanese. Yukatek shows similar phenomena on a broader scale. The absence of path lexicalization has a number of secondary reflexes, including the necessity of breaking down multi-ground motion events into sequences of single-ground location changes, each encoded in a separate clause. Fictive motion metaphors are unavailable, and Yukatek likewise lacks temporal connectives with meanings such as 'after' and 'before' - and these have often been argued to draw on motion metaphors as well. Motion events involving route paths receive a highly underspecified treatment, abstracting away from the distinction among 'over', 'across', 'through', etc., and reducing encoding to a single verb meaning 'pass'. All of these sources of evidence suggest that motion is indeed framed linguistically as location change in Yukatek. Supporting typological evidence and implications for spatial reasoning will also be discussed.
References
Dowty, D. R. (1979). Word meaning and Montague Grammar. Dordrecht, Netherlands: Reidel.
Jackendoff, R. (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff, R. (1990). Semantic structures. Cambridge, MA: MIT Press.
Kita, S. (1999). Japanese ENTER/EXIT verbs without motion semantics. Studies in Language 23: 307-330.
Miller, G. A. & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, UK: Cambridge University Press.
Talmy, L. (1996). Fictive motion in language and "ception". In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.), Language and space (pp. 211-276). Cambridge, MA: MIT Press.
Talmy, L. (2000). Toward a cognitive semantics. Cambridge, MA: MIT Press.
Wednesday, March 3, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
James Magnuson, Ph.D.
Department of Psychology
Columbia University
"Interaction in language processing:
Pragmatic constraints on lexical access"
Everyday language use is rich and textured. Conventional psycholinguistic laboratory tasks abstract away from natural complexity in order to isolate information relevant at different levels of linguistic description. Such simplifications reduce language use to smaller, tractable problems, and allow fine-grained chronometric processing measures. I argue that this approach paradoxically overestimates the complexity and modularity of language processing, as natural contexts provide layers of constraints that reduce the burden on bottom-up and within-level processing.
I will address two primary issues. The first is how we can study language in naturalistic contexts without sacrificing fine-grained measures and precise stimulus control. I will describe an eye tracking measure that is closely time-locked to spoken instructions in naturalistic tasks and that can be transparently linked to computational models, and an artificial lexicon paradigm that provides precise control over lexical characteristics. I will discuss how we have used both techniques to address debates in adult and developmental word recognition.
The second issue is whether lexical access - a process typically assumed to be encapsulated from higher levels of linguistic representation - is constrained by pragmatic context. Subjects learned to recognize an artificial lexicon of names of novel objects ("nouns") and textures that could be applied to them ("adjectives"). Each word had phonological competitors in both form classes. We compared competition effects given visual displays that required adjective use or made adjectives infelicitous. Consistent with the hypothesis that language processing makes use of reliable contextual constraints, we found an immediate impact of pragmatic visual cues: similar-sounding words competed when they were from the same class, but not when they were from different classes. This result adds to growing evidence that language processing is highly interactive, and the approach provides a foundation for the development of integrated theories of language use in natural contexts.
Wednesday, March 10, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Chrysanne Di Marco, Ph.D.
Department of Computer Science
University of Waterloo, Canada
"Computational Models of Natural Language Pragmatics"
Current natural language processing (NLP) systems are, almost without exception, still able to deal only with restricted, simplified language. While researchers in natural language are now beginning to produce systems with real-world utility, NLP systems are still challenged by basic problems associated with analyzing syntax and determining semantic content. A major component of language, the pragmatics of human communication, remains understudied and under-represented in current computational systems. But, in the real world, the pragmatics of natural language---complex nuances of language involving exact choices of words, syntactic arrangement, and discourse structure---carries a good deal of the meaning of a text or utterance. If NLP systems are to be truly effective in everyday use, they must be able to handle far more of these complexities of real-world language.
In this talk, I will describe three stages of problems that we have addressed involving aspects of pragmatics in natural language systems: preserving style in machine translation, generating finely tailored documents, and classifying the rhetorical purpose of citations in scientific writing. Through this progression, various views of natural language pragmatics will be highlighted, together with the research issues raised in Computational Linguistics and Artificial Intelligence.
Wednesday, March 24, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Eric Dietrich, Ph.D.
Department of Philosophy
SUNY Binghamton
"Discrete Thoughts:
Why cognition must use discrete representations"
Advocates of dynamic systems have suggested that higher mental processes are based on continuous representations. In order to evaluate this claim, we first define the concept of representation, and rigorously distinguish between discrete representations and continuous representations. We also explore two important bases of representational content. Then, we present seven arguments that discrete representations are necessary for any system that must discriminate between two or more states. It follows that higher mental processes require discrete representations. We also argue that discrete representations are more influenced by conceptual role than continuous representations. We end by arguing that the presence of discrete representations in cognitive systems entails that computationalism (i.e., the view that the mind is a computational device) is true, and that cognitive science should embrace representational pluralism.
Wednesday, March 31, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Judy Kroll, Ph.D.
Department of Psychology
Program in Linguistics and Applied Language Studies
Pennsylvania State University
"Reading and Speaking Words in Two Languages:
A Problem in Representation and Control"
Research on bilingual word recognition and production suggests that even when a bilingual intends to read or speak in one language only, information in the other language is available. In the present work we examined the course and consequence of this unintended activation by comparing performance on production tasks that differed in the degree to which words in both languages were required to be prepared. The results suggest that in the absence of language-specific cues, words in both of the bilingual's languages compete for selection well into the process of lexicalizing concepts into spoken words. We consider the implications of these results for models of language production and for the development of second language proficiency.
Wednesday, April 7, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Leslie Ungerleider, Ph.D.
National Institutes of Health
"How the brain pays attention"
Not all objects in a visual scene can be analyzed simultaneously due to the limited processing capacity of the visual system. As a consequence, attention is used to selectively process relevant objects at the expense of irrelevant ones. Brain imaging studies using fMRI reveal how attention modulates the processing of relevant objects in human extrastriate visual cortex. This modulation appears to be generated via top-down control from a network of areas in parietal and frontal cortex.
Wednesday, April 14, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Student Poster Session
Abstracts to be posted
Wednesday, April 21, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Eve Clark, Ph.D.
Department of Linguistics
Stanford University
"Grounding and attention in word acquisition"
In communicative exchanges, participants begin by each paying attention to the other and to what is said. And they ground each piece of new information. That is, they add new information to common ground. With adults, this is done by asserting, displaying, presupposing, or exemplifying understanding as the exchange progresses. In this talk, I focus (a) on how participants in adult-child exchanges establish joint attention, and (b) on the evidence young children offer of grounding new information--in particular new words and information about those words. I will draw first on analyses of gesture, gaze, and talk from video-taped sessions of parent-child dyads in which the parent introduces a one- or two-year-old child to a set of unfamiliar objects; and second, on analyses of longitudinal data from the CHILDES archive where I focus on adult offers of new words and children's responses to such offers.
Distinguished Speaker Series 2004
PRESENTS
Thursday, April 22, 2004
3:30 pm - 5:00 pm
Screening Room, Center for the Arts
North Campus
Eve Clark, Ph.D.
Department of Linguistics
Stanford University
"Conceptual perspective and speaker choices"
Adult speakers choose among perspectives when they talk; they use different terms to pick out different perspectives (e.g., the dog, our pet, that animal). The perspectives adult speakers adopt affect how they both categorize and remember events. Yet studies of lexical acquisition in young children have often proposed a single-perspective view that assumes children can at first use only one term for talking about a referent object or event: a cat can only be called "cat", not "animal" or "Siamese" as well. But since children are exposed to multiple perspectives by the adults around them, it seems reasonable that they too should adopt alternative perspectives from an early age--the many-perspectives view. Moreover, adults offer children pragmatic directions about the meanings of new words and hence about new perspectives. Evidence for this many-perspectives account comes from a range of sources: children spontaneously use more than one term for the same object; they construct novel words to mark alternate perspectives; they shift perspective when asked; and they readily learn multiple labels for the same referent.
Eve V. Clark, Professor of Linguistics & Symbolic Systems at Stanford University, grew up and was educated in the UK and France. After completing her PhD in Linguistics with John Lyons at Edinburgh, she worked on the Language Universals Project at Stanford with Joseph Greenberg and, two years later, joined the Linguistics Department at Stanford University. She has taught there ever since, aside from several years 'off' in the UK and the Netherlands. She has been a Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford (1979-1980), and a Guggenheim Fellow (1983-1984); she is a Fellow of the American Association for the Advancement of Science and a member of the Royal Netherlands Academy of Sciences. Her research has focussed on first language acquisition, in particular on the acquisition of meaning, where she has done extensive observational and experimental research; she has also worked on the acquisition and use of word-formation, with detailed comparative studies of English and Hebrew in children and adults, and she has explored the pragmatics of word-coinage, applying the principles of conventionality and contrast to language use as well as to the process of acquisition. In her most recent work, she has been looking at the kinds of information adults offer children about unfamiliar words and their meanings, at the amount of negative evidence children may receive in the course of conversation, and at the relative contributions of gesture and gaze vs. language in adult exchanges with one- and two-year-olds. She has published numerous articles and chapters in linguistics and psycholinguistics. She is co-author of Psychology and Language (1977), and author of The Ontogenesis of Meaning (1979), Acquisition of Romance, with special reference to French (1985), The Lexicon in Acquisition (1993), and, most recently, First Language Acquisition (2003).
Wednesday, September 15, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Ingvar Johansson
Department of Philosophy
Umeå University, Sweden
"Concepts and Classifications in the Gene Ontology"
For the past couple of years, the Gene Ontology has been available on the web (http://www.geneontology.org/index.shtml). The Consortium behind it says that its goal is "to produce a structured, precisely defined, common, controlled vocabulary for describing the roles of genes and gene products in any organism." In my talk, I will show that this classificatory effort displays two features of general importance for cognitive science. First, it shows implicitly how hard it is, in practice, to keep a concept and its extension distinct. Second, and more importantly, it deviates from traditional Aristotelian-Linnaean non-evolutionary taxonomies of animals, plants, and dead matter. In the latter, subordinate concepts must be subsumed under only one superordinate concept at the next level up, but the Gene Ontology explicitly allows a concept to be multiply subsumed; it allows so-called "multiple inheritance." I will argue that this fact shows the need to analyze in more detail a seldom-noted distinction between "subsumption" and "specialization." From a purely linguistic-sentential point of view, this distinction corresponds to the fact that sentences of the form "P is f-ing" (example: "Paul is running") can have two different kinds of relations to sentences that are more precise and specific. When "P is f-ing" has been specified by means of an adverb, as in "P is f-ing fast," the relation between the sentences corresponds to that of subsumption, whereas the relations between "P is f-ing" and specifications such as "P is f-ing on Y" ("Paul is running on the road") and "P is f-ing at midnight" are claimed to correspond to that of specialization.
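To make the structural point concrete, the toy sketch below (Python) contrasts a strict single-parent taxonomy with a directed acyclic graph of the kind the Gene Ontology uses, in which a term may be subsumed under more than one parent. The term names are illustrative only, not actual GO identifiers.

# A strict Aristotelian-Linnaean taxonomy is a tree (one parent per term); the
# Gene Ontology is instead a directed acyclic graph in which a term may be
# subsumed under several parents ("multiple inheritance").
parents = {
    "hexose biosynthesis": ["hexose metabolism", "monosaccharide biosynthesis"],  # two parents
    "hexose metabolism": ["monosaccharide metabolism"],
    "monosaccharide biosynthesis": ["monosaccharide metabolism"],
    "monosaccharide metabolism": ["carbohydrate metabolism"],
    "carbohydrate metabolism": [],
}

def ancestors(term):
    """All terms that subsume `term`, following every parent link."""
    seen, stack = set(), list(parents.get(term, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, []))
    return seen

# "hexose biosynthesis" is subsumed under both of its parents and everything above them.
print(sorted(ancestors("hexose biosynthesis")))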
Wednesday, September 22, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Greg Carlson
Department of Linguistics
University of Rochester
"Meaning and Concepts"
Meanings of lexical items are often identified with concepts by psychologists and linguists alike. I will agree with this. I will also agree that meanings of entire utterances are not concepts (at least, not of the same type). This appears to create a fundamental conflict about the nature of the meanings expressed by language. I am going to argue that the conflict is merely apparent. Using evidence from linguistic theory, this talk aims at squaring the two conceptions of meaning by proposing an integrated system of semantic interpretation that takes meanings of lexical items to be concepts exactly of the sort (some) psychologists and linguists say they are, and maps them into meanings of the type (some) linguists, chiefly formal semanticists, say they are. The linguistic structures examined most closely are "weak indefinites," found, to my knowledge, in all languages, and object-incorporation structures, found in many languages.
Wednesday, September 29, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Michelle Gregory
Department of Linguistics
University at Buffalo
"Speech production in humans and machines:
Using models of human performance to
improve machine performance"
Wednesday, October 6, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Morten Christiansen
Department of Psychology
Cornell University
"The Role of Phonology in the
Acquisition and Processing of Syntax"
When learning their language, children face a difficult "chicken-and-egg" problem. Discovering the syntactic constraints governing their native language requires being able to assign individual words to lexical categories, such as nouns and verbs. Lexical categories, on the other hand, are only useful for acquisition insofar as they support syntactic constraints. In this talk, I consider how phonological cues in combination with distributional information may be used to solve this "bootstrapping" problem in language acquisition, and the possible consequences that such multiple-cue integration has for adult processing. I report on computational analyses of child-directed speech and connectionist simulations, quantifying the usefulness of phonological and distributional cues and showing that there are learning mechanisms that can integrate them efficiently. On a theoretical level, these results suggest that multiple-cue integration becomes a crucial part of the child's emerging language system, and thus should affect adult processing as well. I present results from on-line sentence processing experiments that corroborate this prediction by demonstrating the impact of phonological cues on adult language processing. I conclude that the integration of phonological cues with other types of information is integral to the computational architecture of our language system, both in acquisition and in adult processing.
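As a purely illustrative sketch of what multiple-cue integration might look like computationally (not the models or corpus statistics reported in the talk), the Python fragment below combines one phonological cue and one distributional cue by summing their log-odds for a noun-versus-verb decision; all cue names and probabilities are invented.

import math

# P(noun | cue), estimated separately for each cue (invented toy numbers)
phon_cue = {"ends_in_-tion": 0.95, "monosyllabic": 0.45}
dist_cue = {"follows_'the'": 0.90, "follows_'to'": 0.20}

def log_odds(p):
    return math.log(p / (1.0 - p))

def classify(phon, dist):
    """Integrate the two cues by summing log-odds; positive -> noun, negative -> verb."""
    score = log_odds(phon_cue[phon]) + log_odds(dist_cue[dist])
    return ("noun" if score > 0 else "verb"), round(score, 2)

# A weak phonological cue can be rescued by a strong distributional one, and vice versa.
print(classify("monosyllabic", "follows_'the'"))
print(classify("ends_in_-tion", "follows_'to'"))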
Wednesday, October 13, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Michael Webster
Department of Psychology
University of Nevada
"Adaptation and the phenomenology of perception"
To what extent do we have shared or unique perceptual experiences? I will discuss how the answer to this question is constrained by known processes of sensory adaptation. Adaptation continuously renormalizes visual coding according to the stimuli currently before us. These adjustments have a large influence on how the world looks, and thus should influence whether it looks the same or different to others. If two individuals are exposed to and thus adapted by different environments, then their perception will be normalized in different ways and their subjective experiences will differ. This will be illustrated through examples of the effects of adaptation on color perception and face recognition. When intrinsically different individuals are exposed to a common environment, their perception will instead be normalized in common ways and their subjective experiences will be similar. This will be illustrated through examples of the influence of adaptation on the perception of image blur. Adaptation may partly serve to highlight how the present scene differs from the history of scene properties we have adapted to, and thus much of what we notice about the world may be a visual aftereffect.
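The renormalization idea can be illustrated with a toy model (Python) in which each observer's norm drifts toward the average of recently encountered stimuli and appearance is coded relative to that norm. The update rule and the numbers are invented for illustration; this is not a model from the talk.

def adapt(history, rate=0.2, norm=0.0):
    """Exponentially weighted running norm over a stimulus history."""
    for s in history:
        norm += rate * (s - norm)
    return norm

def appearance(stimulus, norm):
    """Coded value of a stimulus relative to the observer's current norm."""
    return stimulus - norm

# Two observers adapted to different environments (e.g., mostly blurry vs. mostly sharp scenes)
observer_a = adapt([0.2, 0.3, 0.25, 0.2] * 10)  # adapted to low stimulus values
observer_b = adapt([0.7, 0.8, 0.75, 0.7] * 10)  # adapted to high stimulus values

# The physically identical stimulus is coded differently by the two observers...
print(appearance(0.5, observer_a), appearance(0.5, observer_b))

# ...but after adapting to a common environment their norms (and hence appearances) converge.
common = [0.5, 0.55, 0.45, 0.5] * 10
print(adapt(common, norm=observer_a), adapt(common, norm=observer_b))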
Wednesday, October 20, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Constance Clarke
Department of Psychology
Language Perception Laboratory
University at Buffalo
"Adapting to foreign-accented speech: Implications for
theories of spoken word recognition"
In contrast to written language, speech contains a great deal of variability. The acoustic signal corresponding to a particular word or phoneme can vary tremendously when different people say it. The sources of this variability include, but are not limited to, the size and shape of the speaker's vocal tract, the rate of speech, and the speaker's dialect or accent. Because there are few, if any, consistent acoustic markers that reliably signal a particular phoneme such as /d/ or /a/, it is not clear how listeners so easily recover the intended linguistic information. One way listeners may deal with variability in speech is by learning about the speech characteristics of the people they encounter. There is now good evidence that listeners are better able to understand the speech of someone whose voice they are familiar with. Listeners also seem to tune their perceptual criteria to the current talker's characteristics within just seconds of speech. Unfortunately, theoretical models of spoken word recognition are lagging behind the experimental findings. Current popular models assume constant, abstract representations of phonemes and words, and are unable to explain rapid perceptual learning. In this talk I will present evidence of rapid adaptation to speech characteristics, explore some fundamental problems with traditional models of spoken word recognition, and propose some new directions that may be fruitful in accommodating both the stable and the flexible aspects of human speech perception.
Wednesday, October 27, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
David Mark, Ph.D.
Dept. of Geography
National Center for Geographic Information and Analysis
University at Buffalo
and
Andrew F. Turk, Ph.D.
School of Information Technology
Murdoch University, Perth, Australia
"Ethnophysiography:
An Ethnoscience of the Landscape"
Recently, ethnophysiography has been defined as an ethnoscience of landscape. Ethnophysiography explores the meanings of terms used in various languages and cultures to refer to the landscape and its components. Ethnophysiography has objectives similar to ethnobiology, which studies folk names and categories for plants and animals, but differs in important ways. Ethnobiology often uses scientific taxonomy as a baseline for assessing folk categories for plants and animals; however, variation in landforms and waterbodies is not constrained by mind-independent natural kinds in any obvious way. Thus ethnophysiography could contribute to the understanding of categorization in general by examining categorization of an inorganic natural domain. Ethnophysiography also can provide a valuable basis for multilingual access to geographic databases. Examples of differing language-specific conceptualizations will be presented. These are mainly drawn from a comparison of arid-landscape terms in an Australian aboriginal language (Yindjibarndi) with terms and their definitions in English. We will conclude with a description of plans for extending the work to Native American languages in Arizona and New Mexico, and with a listing of open problems for future research.
Wednesday, November 17, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
Jay Atlas, Ph.D.
Department of Linguistics and Cognitive Science
Pomona College
"What Do Reflexive Pronouns Tell Us about Belief? --
a New Moore's Paradox de se,
Rationality, and Privileged Access"
Commonsense intuitions about folk-psychological concepts, linguistic intuitions about the semantics and pragmatics of 'believes' sentences, and philosophical views about mental states have resulted in strong philosophical theses about the nature of belief, conscious belief, and a Cartesian "transparency" of the mind. I shall look at logico-linguistic evidence for 'believes' sentences that refutes a basic principle of self-awareness of one's belief-states that is defended by philosopher of mind Sydney Shoemaker and many others. Then I give a philosophical critique that shows why Shoemaker's arguments for that view should never have been expected to succeed in the first place. And I suggest some implications for Richard Moran's use of Moore's Paradox sentences to defend the metaphysical "inner"/"outer" and the epistemological "objective"/"subjective" distinctions and to support the transcendence of the "inner."
Wednesday, December 8, 2004
2:00 pm - 4:00 pm
280 Park Hall, North Campus
William Badecker, Ph.D.
Department of Cognitive Science
Johns Hopkins University
"Do we process grammatical agreement by
tracking words or syntactic features?"