2007 Events

Spring Semester


Randall Dipert, Department of Philosophy and Center for Cognitive Science, University at Buffalo


Game Theory for Philosophers and Cognitive Scientists


Advanced results in the various branches of mathematics have not generally had much impact in philosophy.  (The story with cognitive science is much more complicated.) The notable exceptions are geometry, number theory, and the theory of computation (up to 1940).  One simply does not know how most other, sometimes deep, results would affect various philosophical questions, if at all.  One can guess that various results in complexity theory (e.g., NP-completeness), information theory, and combinatorics (especially graph theory) might have some philosophical impact.  A minor exception is the use of game theory in the theory David K. Lewis developed in his widely acclaimed book, Convention (1969).


       I want to argue that certain game-theoretic results of Robert Axelrod, Thomas Schelling, and Robert Aumann almost certainly have startling implications for philosophy and cognitive science, namely in the theory of rationality and in social and political philosophy.  Game theory may also indirectly impact philosophy by demonstrating the advantageousness of certain biological or cultural traits (e.g., in work by John Maynard Smith and popularized by R. Dawkins and others).  These must then be factored into a robust theory of human nature.  I will attempt to give evidence for this by using as a suggestive example results from the Iterated Prisoners’ Dilemma that obviously affect moral principles in war and international affairs.  One startling implication of my own computational-empirical research in simulations (original so far as I know) is that the widely maligned Preventive (“preemptive”) War strategy of the Bush Administration has some foundation demonstrable within the framework of the rationality of strategies over the long term.  Various other social, linguistic, and broadly cognitive characteristics may actually owe their apparently arbitrary nature to their as-yet-unanalyzed satisfaction of game-theoretic equilibria.
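The iterated-game results the talk draws on are easy to reproduce in miniature. The following sketch is illustrative only, not the speaker's actual simulation code; the payoff values are the conventional Prisoner's Dilemma ones, assumed here. It pits Axelrod's tit-for-tat strategy against unconditional defection:

```python
# Conventional Prisoner's Dilemma payoffs (T > R > P > S), an assumption here.
# "C" = cooperate, "D" = defect; values are (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    """Defect unconditionally."""
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Run an iterated game and return cumulative payoffs for both players."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploitation only in round 1: (9, 14)
```

Over repeated rounds, reciprocating strategies sustain cooperation against each other while limiting losses to exploiters, which is the kind of long-term rationality result the abstract appeals to.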

Host: Bill Rapaport


Back to Top




David Hunter, Department of Philosophy, Ryerson University


Dispositions and Psychological Explanation


It seems clear that belief makes a difference to behavior; that agents differ in what they believe only if they differ in their behavioral dispositions. But certain cases of first-person belief raise a serious puzzle for this view. They are cases where agents with the very same beliefs, or at least who agree on all the facts, nonetheless act differently. Some theorists have responded by proposing to individuate beliefs more finely than by agreement on the facts, concluding that agents who agree on everything might nonetheless differ in their beliefs and consequently in their behavioral dispositions. I will argue that this is the wrong lesson to draw from those puzzling cases, and will sketch what I think is a better lesson.


Host: Bill Rapaport


Back to Top



Peter Q. Pfordresher, Department of Psychology, University at Buffalo


Why can't Johnny sing?


Mechanisms for vocal imitation of pitch


Despite the seeming prevalence of "poor pitch" singing, basic questions regarding its cause, prevalence, and symptoms remain unanswered. I will discuss the results of recent research designed to form an objective description of "bad" singing and to test hypotheses regarding its origin. In the experiments I report, participants (typically non-musicians and non-singers) complete a variety of tasks that involve vocal production or perception of musical pitch. Contrary to the implications of some recent research, bad singing does not seem to result from "tone deafness" in the literal sense: bad singers are as good as good singers on basic perceptual tasks. In addition, it seems that bad singing cannot be attributed exclusively to control of the laryngeal muscles. Instead, our data suggest that inaccurate singing results from inappropriate mapping of pitch information onto motor gestures, potentially in neural systems reserved for the mental simulation of action.


Host: Bill Rapaport


Back to Top





J. David Smith, Department of Psychology and Center for Cognitive Science, University at Buffalo




and the Emergence of Cognitive Systems for Categorization


I will explore--from a utility perspective--the role of prototypes and exemplars in learning natural-kind or family-resemblance categories. The organization of the psychological spaces containing these categories determines the felicity of categorizing using different representations. Humans and nonhuman animals will receive more variable and less useful signals about category belongingness if, as exemplar theory supposes, they store individual exemplars separately and use these as the comparative standard for categorization. Humans and animals will receive more stable and useful signals about category belongingness if they average their exemplar experience, abstract the prototype--as prototype theory supposes--and use this as the comparative standard for categorization. Thus, category members and non-members--to be approached and avoided, perhaps--will be more discriminable when referred to a prototype and therefore categorized more accurately and adaptively. There are strong reasons for the cognitive systems that serve categorization to have come to include prototype abstraction among their capabilities.
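The utility argument can be illustrated with a toy simulation (my own sketch, not the speaker's; the one-dimensional Gaussian category structure and all numbers are assumptions): comparing a new item to an averaged prototype yields a less variable membership signal than comparing it to a single stored exemplar.

```python
import random
import statistics

random.seed(1)

# Hypothetical one-dimensional family-resemblance category: stored
# exemplars scatter around a central tendency (the prototype).
exemplars = [random.gauss(0.0, 1.0) for _ in range(50)]
prototype = statistics.mean(exemplars)  # abstraction by averaging experience

# New category members to be classified.
probes = [random.gauss(0.0, 1.0) for _ in range(200)]

# Signal A: each probe's distance to the abstracted prototype.
proto_signal = [abs(p - prototype) for p in probes]

# Signal B: each probe's distance to a single randomly retrieved exemplar.
exemplar_signal = [abs(p - random.choice(exemplars)) for p in probes]

# The averaged standard produces the more stable belongingness signal.
print("prototype-based signal sd:", statistics.stdev(proto_signal))
print("exemplar-based signal sd:", statistics.stdev(exemplar_signal))
```

Because the single-exemplar comparison adds the exemplar's own scatter to the probe's, its distance signal is noisier, which is the sense in which prototype abstraction is claimed to be more useful.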


Host: Bill Rapaport


Back to Top



Eric Dietrich, Department of Philosophy, Binghamton University


The Bishop and Priest


On The Epistemology and Psychology of True Contradictions


True contradictions are increasingly taken seriously by philosophers and logicians. Yet, that contradictions are always false remains deeply intuitive. This paper presents a psychological model explaining how certain contradictions can be seen to be true. The contradictions in question are called "limit contradictions," and have been systematically studied by Graham Priest. The specific limit contradictions we will look at are those associated with Bishop George Berkeley's idealism. The paper's goal is to weaken the intuition that contradictions are always false, while planting an intuition that some contradictions are true. If philosophers could psychologically understand how limit contradictions can be true, beyond just saying that the contradictory conclusions follow validly from true premises, then perhaps that they are true would become more tolerable.


Host: Bill Rapaport


Back to Top




Barbara Landau, Department of Cognitive Science, Johns Hopkins University


Starting at the End


The importance of goals in spatial language and spatial cognition


A hallmark of human cognition is our capacity to talk about what we see. How is this accomplished? Given that language and spatial representations are likely to have quite different kinds of structures, the challenge is to understand how such apparently different systems of knowledge map onto each other, and how these mappings are learned. In my talk, I will discuss this problem with respect to the language of events, including manner of motion, change of possession, attachment/ detachment, and change of state events. I will focus on evidence from normally developing children and children with Williams syndrome, a rare genetic deficit that gives rise to an unusual cognitive profile of profoundly impaired spatial representations together with spared language. The evidence shows that a fundamental property of event semantics, an asymmetry between source and goal expressions, is a pervasive fact about the linguistic description of events. Ancillary evidence suggests that this asymmetry is also a part of our non-linguistic representations, appearing in non-linguistic tasks among infants, children, and adults. As a whole, the results suggest a homology between spatial language and spatial representation, thereby providing a partial solution to the problem of mapping dissimilar domains onto each other.


Hosts: Gail Mauner, Dept. of Psychology, UB, and Jean Pierre Koenig, Dept. of Linguistics, UB


Back to Top



J. Leo van Hemmen, Physik Department, Technische Universität München




Host: Susan Udin


Back to Top



Steve Petersen, Department of Philosophy, Niagara University


Software Intelligence


These days we are happy to say at least figuratively that some bits of software are intelligent. Chess programs, spam filters, bots on the emerging semantic web, and villains in computer games are all getting "smarter". Though merely metaphorical now, we might wonder if it is possible, someday, for a piece of pure software to be /literally/ intelligent--intelligent in the same important sense normal humans are.


Much hangs on this question. According to a growing consensus, intelligence has to do with adaptability in the face of environmental goals--and this notion has motivated much of the "embodied" approach to artificial intelligence. The possibility of software intelligence complicates this consensus, though, and the accompanying embodied robotics program. If software intelligence is possible, for example, probably it will be significantly easier and cheaper to engineer than "real-world" robots. There are ethical implications to the thesis as well, since presumably anything with genuine intelligence carries at least some moral significance. A simulation involving virtual agents in a natural disaster could someday be just as horrific as engineering a natural disaster in the "real world".


I argue that even on the consensus view of intelligence, such "software intelligence" is indeed possible.


Host: Bill Rapaport


Back to Top





Jim Swan, English Department, University at Buffalo


The Long and Short of It

Speech Rhythms, Prosody,

and Cross-Linguistic Effects


The fundamental place of rhythm in speech is well understood. It has been shown, for instance, that people in conversation attune themselves to one another’s rhythm and intonation; that neonates, only a few days old, alert to the sound of their mother’s language and not to others; that language use, the production of meaning, is inescapably rhythmical.


This talk is the beginning of a study of prosody in poetry, specifically of the effect that translating or adapting work in another language has on a writer’s practice in her own language. Here I will be focusing on an important historical moment when English poetry finds its early-modern footing after medieval English and its strongly French-influenced pronunciation falls from use. English writers in the late 16th century grow up learning to read and write Latin poetry and prose. Prosody in classical Latin is markedly different from prosody in English, and when a group of poet-experimenters try their hand at writing poems in English using classical meters, the result is far-reaching and profound. My argument is that, through these experiments in the formal properties of another language, poets discovered the deep prosodic potential of their own language. In a longer version, I would extend the analysis forward to the beginnings of 20th-century modernism, when Ezra Pound experimented with translations of Old (i.e., pre-medieval) English and Homeric Greek poems.


In my analysis of lines and phrases, I will surely reveal my naiveté about matters phonological, and I look forward to comments and help from the colloquium.


Host: Bill Rapaport


Back to Top



Eduardo Mercado III, Department of Psychology and Center for Cognitive Science, University at Buffalo


Learning and Representation in Auditory Perception


Learning to discriminate and identify sounds can change how an organism’s brain responds to those sounds, and how easily differences between similar sounds are recognized. Recent efforts to rehabilitate children with language processing difficulties have attempted to take advantage of this phenomenon by using progressive auditory training programs to improve individuals’ perceptual sensitivity to differences in speech sounds. However, a current lack of understanding about how perception and learning interact makes it difficult to predict: (1) how effective such strategies might be for a particular individual, (2) to what extent developmental factors that constrain the effects of learning on perception can be circumvented, or (3) whether different percepts within or across modalities might be more or less plastic. In this talk, I will discuss recent comparative studies of auditory perceptual learning in rats and humans that shed some light on the mechanisms that constrain how learning experiences impact perceptual abilities. Studies with adult rats show that some auditory distinctions are extremely difficult or impossible for rats to learn to make unless they have prior experience with less challenging distinctions, suggesting that the fidelity or utility of auditory representations is experience-dependent. Electrophysiological measures of cortical responses from naïve and experienced rats suggest that auditory training increases the selectivity of cortical neurons for particular acoustic features experienced during training. Behavioral experiments with humans also indicate that different auditory training experiences can differentially impact perceptual sensitivities. Scalp recordings collected in parallel with this behavioral task suggest that behavioral improvements parallel changes in cortical response properties. Interestingly, the largest changes in cortical responses occurred for sounds that were already easily distinguished before training, suggesting that changes in cortical representations continue to occur even when behavior is stable. Additionally, the best neural predictor of an individual’s ability to distinguish two sounds was not the response evoked after repeated presentations of one sound followed by the presentation of the second sound (the ubiquitous oddball task), but instead the response evoked when only one of the sounds was repeatedly presented, again suggesting that representational selectivity constrains performance. Cortical correlates of auditory perceptual learning point to adaptive spatial isolation in cortical networks as a primary mechanism of perceptual differentiation.


Host: Bill Rapaport


Back to Top



Myrna Schwartz, Moss Rehabilitation Research Institute


Beyond double dissociation


The graceful degradation and recovery


of language production in aphasia



Cognitive neuropsychology has traditionally sought information about the modular mind/brain systems that support cognition, using evidence from double dissociations. A complementary question is whether brain systems, modular or not, display the property of graceful degradation, meaning the ability to approximate the desired response in the face of imperfect information or when damaged. This talk has three aims: (1) to explain the graceful degradation hypothesis in relation to connectionist models of language production; (2) to present data from patient studies that support its validity; and (3) to show that double dissociations and graceful degradation together place strong constraints on models of language processing.


Hosts: Gail Mauner, Dept. of Psychology, UB, and Jean Pierre Koenig, Dept. of Linguistics, UB


Back to Top



Eva Juarros-Daussà, Department of Romance Languages and Literatures, University at Buffalo



Lack of Recursion in the Lexicon

The Three-Argument Restriction


It is a universal property of argument structure that verbs cannot take more than three arguments (one external and two internal), as in the English verbs give or put, unless an additional argument is introduced by an extra lexical preposition (1a), an applicative or causative morpheme (1b), or the additional head of a serial verb construction (1c) (Dixon and Aikhenvald 2000, Peterson 2007). In any case, the valency of the resulting predicate is restricted to that of a ditransitive verb. I call this the Three-Argument Restriction (TAR, see 1d), which remains one of the mysteries of human language. Since there is no known processing reason not to lexically associate more than three participants with a predicate, I claim that the TAR is syntactic in nature, and is one of a family of architectural constraints that determine and limit possible attainable languages (in this case, possible argument structures). In this talk I examine the TAR and its challenges, including a case study involving Romance clitics. I further examine how this generalization could be derived within the framework of lexical syntax put forth by Hale and Keyser 2002. According to this proposal, deriving the TAR crucially involves denying the existence of a recursive function in the domain of argument structure, relegating such functions to the domain of sentences.


1a. Erika gave Alexandra a present *(through/on behalf of) Frances


1b. wa’-khe-yó:nye’        Meri  a:y-e-yo-h          gwa’yea’

    FACT-1sg.NOM.3ACC- make-PUNC Mary OPT-3.SG.NT.NOM-kill-PUNC rabbit


“I made Mary kill the rabbit”


(Baker 1996)


1c. Olú ti       omo   náà  subú


      Olu push child  the   fall


“Olu pushed the child down; Olu caused the child to fall by pushing him”


(Baker 1996)


1d. “A single predicate must have at most two internal arguments and one external”

Fall Semester


J. Leo van Hemmen, Physik Department, Technische Universität München


Neurobiology of Snake Infrared Vision

How It Can Be Precise Despite Bad Optics


Host: Susan Udin


Back to Top



Jürgen Bohnemeyer, Department of Linguistics, University at Buffalo


How to Hammer a Shirt Apart

(and Talk About It)


We examine the treatment of atypical instrument-theme configurations in events of caused separation in material integrity, i.e., “cutting” and “breaking” (C&B) events, across languages. We focus on four languages which offer an alternative between monomorphemic verb roots and various kinds of complex predicates in the C&B domain: serial verb constructions in Lao and Sranan, compound verb stems in Yucatec Maya, and prefix verbs, particle verbs, resultative constructions, and light verb constructions in German. We test the hypothesis that, in accordance with Grice’s Manner maxims, across languages, complex predicates are preferred over simplex predicates for reference to atypical instrument-theme configurations. We draw on descriptions of the “Cut and Break Clips” from five speakers per language. We treat the typicality of a scene as the degree to which it matches the prototype of any linguistic description, and rely primarily on inter-speaker variation as a measure of (a-)typicality in this sense. In line with our hypothesis, we found that the more inter-speaker variation a scene elicited in a given language, the more likely the speakers of that language were to prefer complex over simplex predicates. However, though highly significant for the other three languages, this correlation is not significant for German. In German, simplex verbs play no more than a marginal role in the C&B domain; we argue that this upsets the division of labor between simplex and complex expressions underlying the relation between Gricean stereotype and manner implicatures.


Host: Bill Rapaport


Back to Top



Mathias Brochhausen, Department of Philosophy, University at Buffalo & Institute for Formal Ontology and Medical Information Science, Universität des Saarlandes


Building a Cancer Ontology

Bringing Together Clinical Practice and Ontological Theory


The talk will focus on the central question of how practice and ontological theory have to interact in order to provide a coherent and realist account of a complex domain of reality in a way which will allow diverse communities to share and manage data.


We will describe a project to develop an ontology to manage biomedical data concerning cancer research and management throughout the European Union. This provides an excellent example of the techniques used in ontology construction, and also of some of the problems faced in using ontologies to support cross-linguistic integration of data. How do we categorise a domain containing aspects as diverse as molecular biology, patient management, and medical ethics without losing information, and yet in such a way as to provide a system that can be understood and used by practice-oriented professionals?




Host: Bill Rapaport


Back to Top






Doug Roland, Department of Linguistics, University at Buffalo


Discourse Context and English Object Relative Clauses



While object relative clauses (1) are generally more difficult to process than subject relative clauses (2), a variety of factors such as the pronominal status and givenness of the embedded NP have been shown to reduce or eliminate this difficulty. We argue that this reduction in difficulty is not the result of the pronominal status of the embedded NP per se, but is the result of changes in the relationship between the object relatives and the larger discourse context in which they appear.


In normal language use, object relatives tend to be used to ground the modified NP in the discourse context, while subject relatives are more likely to be used for other purposes. Thus, (2) would be likely to occur whether or not the banker had been mentioned already, while (1) would be likely to occur only if the banker had been the discourse topic. If discourse factors rather than pronominal status or frequency factors were responsible for previous results, the difficulty of object relative clauses with full NPs should be reduced when they are embedded in an appropriate discourse context. 


A participant-paced moving window paradigm was used to examine the influence of discourse context on the processing of relative clauses. Preceding context sentences either provided a discourse context where the embedded NP of the relative clause was the topic (e.g., 3), or was related to the target sentence, but did not topicalize the embedded NP (e.g., 4). We found that when an appropriate discourse context was provided, object relatives with full NPs were read as quickly as subject relatives in the regions containing the embedded NP and relative clause verb. However, we also found that, regardless of discourse context, the main clause verb was read more slowly for object relatives than subject relatives. 


While providing an appropriate discourse context did not completely eliminate processing difficulties associated with object relatives, given the reading time results at the main verb, it did eliminate previously observed processing differences between subject and object relatives in the relative clause itself. Thus, our results suggest that the pronominal object relative effect may be due, in part, to the better fit between the typical discourse use of object relatives and experimental contexts in which they appeared.


(1) The lady that the banker visited enjoyed the dinner very much.

(2) The lady that visited the banker enjoyed the dinner very much.

(3) The banker was very friendly. The lady that .

(4) There was a dinner party on Saturday night. The lady that .


Host: Bill Rapaport


Back to Top



David Temperley, Eastman School of Music, University of Rochester


Communicative Pressure and Music Cognition


Communicative pressure refers to the pressure on communicative systems to evolve in ways that facilitate the conveyance of crucial information. I begin by presenting some well-documented examples of communicative pressure in language (syntax and phonology). I then examine the relevance of this idea to music. First I explore what I call the syncopation-rubato trade-off: the fact that musical styles with a high degree of rubato (expressive tempo fluctuation) tend to have a low degree of syncopation (misalignment between accented notes and strong beats), and vice versa. I present some recent experimental work exploring the role of the syncopation-rubato trade-off in spontaneous music performance. I then consider another general phenomenon that seems to reflect communicative pressure: Uniform Information Density (UID). The UID principle, first proposed with regard to language (Levy & Jaeger 2006), states that producers of communication tend to maintain a relatively even flow of information so as to facilitate processing. I explore several applications of this idea to music, including the correlation between melodic interval size and note length and the effect of harmony on expressive performance. Finally, I consider how the syncopation-rubato trade-off might be explained from the point of view of Uniform Information Density. This talk is geared to a general cognitive-science audience and does not require extensive knowledge of music.


Host: Bill Rapaport


Back to Top



Werner Ceusters, Department of Psychiatry, University at Buffalo


Referent Tracking


Research topics and applications


Referent Tracking (RT) is a paradigm introduced in 2005 intended to provide a means of ensuring unambiguous reference to the particulars in reality that are mentioned in statements. Central to this paradigm is the use of globally unique singular identifiers, called IUIs (for Unique Instance Identifiers), that stand proxy for the entities in reality to which they refer. For an identifier (ID) to be an IUI, it must refer to one and only one particular, and this tight connection between the particular and its IUI must be asserted by an author in an RT system (RTS). One purpose of the RTS is to give agents who wish to make statements about entities in reality a means to retrieve IUIs for particulars to which identifiers have already been assigned, and to create IUIs in other cases. Another purpose is to provide an efficient way to store data about particulars in terms of their relations to other particulars and to the universals which they instantiate. The applicability of RT has, since its inception, been assessed in areas such as electronic healthcare record keeping, digital rights management, automated decision support, corporate memories in enterprises, intelligence, and outcome assessment. In a recently started project, it is used to bridge the gap between dimensional versus categorical approaches to psychiatric diagnoses.
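The core mechanism can be sketched in a few lines. This is my own minimal illustration, not the actual RTS design; the class, its method names, and the example particulars are all hypothetical. It shows the two operations the paragraph describes: retrieving an IUI already assigned to a particular, and minting a new one otherwise, plus storing relations among particulars and to universals.

```python
import uuid

class ReferentTrackingStore:
    """Minimal illustrative sketch of a referent-tracking store.

    Each particular gets exactly one globally unique identifier (IUI);
    statements are recorded against IUIs rather than ambiguous names.
    """

    def __init__(self):
        self._by_description = {}   # author's description -> assigned IUI
        self._assertions = []       # (subject IUI, relation, object)

    def get_or_assign_iui(self, description):
        """Reuse the IUI already asserted for this particular, if any;
        otherwise mint a fresh globally unique identifier."""
        if description not in self._by_description:
            self._by_description[description] = "IUI-" + uuid.uuid4().hex
        return self._by_description[description]

    def assert_relation(self, subject_iui, relation, obj):
        """Record a relation of a particular to another particular
        or to a universal it instantiates."""
        self._assertions.append((subject_iui, relation, obj))

# Hypothetical usage with made-up clinical particulars.
store = ReferentTrackingStore()
patient = store.get_or_assign_iui("patient admitted 2007-10-03, room 12")
tumor = store.get_or_assign_iui("tumor observed in that patient's left lung")
store.assert_relation(tumor, "instance_of", "universal: neoplasm")
store.assert_relation(tumor, "part_of", patient)

# Asking again for the same particular returns the same IUI,
# keeping reference unambiguous across statements.
print(store.get_or_assign_iui("patient admitted 2007-10-03, room 12") == patient)
```

The point of the sketch is only that unambiguous reference reduces to identifier reuse: every later statement about the same particular attaches to the same IUI.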

Of course, RT presents some challenges of its own. One specific problem is how to represent phenomena commonly expressed by statements such as: "no history of diabetes", "hypertension ruled out", "absence of metastases in the lung", and "abortion was prevented". Such statements seem at first sight to present a problem for RT, since there are here no entities on the side of the patient to which unique identifiers can be assigned. Another challenge is keeping track of the different kinds of changes, reflecting for example: (1) changes in the underlying reality, either in a specific patient's condition or the world in general; (2) changes in our understanding; (3) reassessments of what is considered to be relevant for inclusion in a referent tracking database, or (4) encoding mistakes introduced during data entry.



The presentation aims to give an overview of the state of the art in RT. One specific goal is to find among the audience researchers who are interested either in contributing to this research or in applying referent tracking in their own work, so that we can set up collaborative grant proposals.


Host: Bill Rapaport


Back to Top



James R. Sawusch, Department of Psychology, University at Buffalo


Signal Variability and Perceptual Constancy in Speech

How Listeners Accommodate Variation in Speaking Rate


Human speech is a variable signal because of differences between talkers (dialect, vocal-tract length, habits) and differences within a talker (variable speaking rate, influences of coarticulation). In spite of signal variability, listeners exhibit "phonetic constancy" in recovering the sequence of sounds (phonemes) and words intended by the talker.


The focus here will be on the perceptual mechanisms that listeners use to deal with variability in the signal produced by variation in a talker's speaking rate. A series of studies will be described that examine what type of information talkers embed in the speech signal and the nature of the computations in perception that enable listeners to normalize for variation in speaking rate.


A second series of studies will further examine listeners' normalization in a context of two voices. These studies focus on whether and how a listener separates sounds (voices) into different sources, and how the processes that influence sound-source separation interact with the listener's adjustment to the speaking rate of the talkers. The behavioral evidence reveals that the perceptual adjustment for speaking rate is an obligatory, autonomous process in perception and is readily influenced by information from voices other than the target talker.


Host: Bill Rapaport


Back to Top





Liz Stillwaggon, Philosophy Department, Univ. of South Carolina & Center for Inquiry, Amherst, NY


Are We Practicing What We Preach?

Methodological Continuity in Cognitive Science


Philosophers are well aware that our beliefs are often out of sync with how we arrive at them. In this talk, I explore one example of how this philosophical problem manifests in cognitive science. A little over a decade ago, the philosopher of mind Peter Godfrey-Smith introduced the related notions of weak continuity (WC) and methodological continuity (MC), i.e., that cognition is an activity of living systems only, and that cognition ought to be investigated only in the context of living systems, respectively. Although many of us might be willing to grant WC, we would be wary of advocating MC because doing so constitutes a fundamental rejection of the project of artificial intelligence. I examine some investigations into the nature of cognition from analytic philosophy, (soft) artificial life, and robotics to determine to what extent they respect (or breach) the principle of MC, and conclude by proposing a methodological guideline that is intended to do justice to both WC and MC.


Host: Bill Rapaport


Back to Top






James Collins & Jaekyung Lee, Graduate School of Education, University at Buffalo


Using Writing to Improve Reading Comprehension


This colloquium will discuss findings from the Writing Intensive Reading Comprehension (WIRC) study currently underway in UB's Graduate School of Education. In the WIRC research more than 2000 fourth and fifth grade students in ten low-performing Buffalo schools have taken part in two year-long experiments to test the efficacy of using focused, assisted writing practice to improve reading comprehension. Students enrolled in the WIRC classrooms write daily about the reading they are doing in language arts, and they receive assistance from teachers, peers and thinksheets to help them write about literary selections they are reading. As the name suggests, a thinksheet is a guide to help students think carefully and write about the reading they are doing. The presenters discuss the results of the large experiments which generally show that students in low-performing schools can significantly improve their reading comprehension and writing abilities through sustained, assisted practice with using writing to make sense of their reading. The presenters also discuss formative and qualitative studies designed to help the WIRC researchers understand factors which may facilitate or constrain successful applications of interventions using writing to improve reading comprehension. Findings support further development of sociocognitive theories of literacy development and collaborative means of literacy instruction and assessment.


Host: Bill Rapaport


Back to Top



David Mark, Department of Geography, University at Buffalo



Cross-cultural and Cross-linguistic Variation in Conceptualization of the Landscape and its Elements


Ethnophysiography studies how people conceptualize the natural landscape, especially landforms and water bodies. Are the concepts underlying terms for landscape features and places more or less the same across languages, or are there significant differences across languages and cultures? Ethnophysiography focuses on kinds of things in the landscape, and aims to document in detail the things in the world (extensions) that are referred to by each term. Ethnophysiography relies heavily on ethnographic methods for obtaining information through interviews, description, and community participation. Because landscape elements seldom fall into anything like 'natural kinds', there is more room for cross-cultural variation than in the cases of plants and animals. Thus the landscape and its elements provide an interesting venue for studies of categorization in general.


For the last several years, my colleagues and I have been conducting ethnophysiography research with the Yindjibarndi people of northwestern Australia and the Dine (Navajo) of New Mexico and Arizona, both of whom live in arid or semiarid landscapes. This presentation will elaborate on ethnophysiography in general, and then present some findings from these two case studies and related work. Almost none of the lexicalized landscape terms in these languages have exact semantic equivalents in the other languages. Yet in most cases the conceptual chunks that are lexicalized in each language seem coherent and reasonable concepts to have terms for. Implications for geographic ontology will also be mentioned.


Host: Bill Rapaport





Adele E. Goldberg, Program in Linguistics, Princeton University


Learning the general from the specific


The constructionist approach to grammar allows both broad generalizations and more limited patterns to be analyzed and accounted for fully. In particular, constructionist approaches are generally usage-based: facts about the actual use of linguistic expressions such as frequencies and individual patterns that are fully compositional are recorded alongside more traditional linguistic generalizations (Langacker 1988). Instances are represented at some level of abstraction due to selective encoding, and generalizations over instances are made as well.


In an impressive interdisciplinary convergence, a similar position has been developed within the field of categorization. Early accounts of categories adopted general, abstract summary representations; a subsequent wave of "exemplar-based" models of categorization then held sway in the field for a time. Most recently, categorization researchers have argued for an approach that combines exemplar-based knowledge with generalizations over that knowledge. Insights gained from research on general categorization can shed light on how learners go from the specific to the more general. My presentation will focus on several factors that promote and constrain generalizations: a) skewed input, b) degree of coverage, and c) statistical preemption.


Host: Bill Rapaport







Michael McGlone, Department of Philosophy, University at Buffalo


Conscious Experience and the Knowledge Argument


Physicalism, as I will understand it, is the view that all correct information is physical information, that is, information of the sort studied by the physical sciences. In this talk I will be concerned with what philosophers call "the knowledge argument" against physicalism. According to this argument, there is correct information regarding the nature of conscious experience that is not itself information of the sort studied by the physical sciences. Although the key idea behind this argument has been with us for a long time, I will focus on a fairly recent formulation due to Frank Jackson (1982). Having considered this form of the argument, I will turn to a recent response, that offered by Jackson himself (2004), and argue that this response is unsatisfactory. Finally, I will attempt to isolate what I take to be the deepest and most interesting issues raised by the argument.