2014 Events

Spring Semester

5 February 2014

Jürgen Bohnemeyer

Associate Professor, Department of Linguistics, University at Buffalo

Cognitive Science 2.0: Evidence from spatial cognition

ABSTRACT:

I argue that the cognitive sciences have been undergoing a slow paradigmatic shift. This shift deemphasizes the original foundational assumptions of innate knowledge and symbolic processing and embraces cultural transmission of knowledge, individual variation, and brain plasticity as areas of exploration rather than complications that are to be abstracted away from. I illustrate this dialectic with preliminary findings from a study of the use of spatial reference frames in 11 populations: speakers of six Mesoamerican languages, two indigenous languages spoken to the north and southeast of the Mesoamerican area, and three varieties of Spanish (Mexican and Nicaraguan Spanish, and the European Spanish spoken in Barcelona). A series of mixed-model linear regression analyses of the responses to a referential communication task indicates that language, alongside literacy, is an irreducible factor in predicting frame use, contra Li & Gleitman (2002). It also suggests that the use of Spanish as a second language makes an irreducible contribution to the use of relative frames (a subtype of egocentric frames) in the indigenous languages, pointing toward language as one vehicle of the cultural transmission of cognitive practices of spatial reference. We also ran a recall memory experiment with a larger number of members of the 11 populations. Here, all of the New World populations in fact responded predominantly geocentrically, including the Mexican and Nicaraguan Spanish speakers, even though these, as well as several of the indigenous populations, did not show a clear preference for either egocentric or geocentric coding during the linguistic task. We tentatively suggest that this finding may be in line with results from primate studies (Haun et al. 2006), which point toward a geocentric bias in non-human primates. It is conceivable that this geocentric bias is weakly innate in all primates - human and nonhuman - but can be overridden by a learned, culturally (including linguistically) transmitted egocentric bias. A confirmation of this hypothesis would mean that we have moved from the rationalist assumption that European-style spatial cognition is universal and innate - and that culture is therefore irrelevant to its study - to the empirically motivated conclusion that the spatial cognition of Europeans is in fact quite exotic.
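For readers who want a concrete handle on the kind of analysis described above, here is a minimal, purely illustrative sketch (not the study's code or data) of a mixed-model regression of frame use on population, literacy, and second-language Spanish, fit with statsmodels on invented data. All column names, population labels, and values are hypothetical.

```python
# Purely illustrative: a mixed-effects regression of frame use, loosely in the
# spirit of the analyses described above. Data, column names, and population
# labels are invented; this is not the study's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_speakers, trials = 60, 10

# Speaker-level predictors (hypothetical): population, literacy, L2 Spanish use.
population = rng.choice(["GroupA", "GroupB", "GroupC"], size=n_speakers)
literacy = rng.integers(0, 2, size=n_speakers)
l2_spanish = rng.integers(0, 2, size=n_speakers)

speaker = np.repeat(np.arange(n_speakers), trials)
df = pd.DataFrame({
    "speaker": speaker,
    "population": population[speaker],
    "literacy": literacy[speaker],
    "l2_spanish": l2_spanish[speaker],
    # Outcome: share of relative (egocentric) responses per trial block.
    "relative_use": rng.random(speaker.size),
})

# Fixed effects for population, literacy, and L2 Spanish; random intercepts by speaker.
model = smf.mixedlm("relative_use ~ population + literacy + l2_spanish",
                    df, groups=df["speaker"])
print(model.fit().summary())
```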

RECOMMENDED READINGS:

  1. Haun, D. B. M., Rapold, C., Call, J., Janzen, G., & Levinson, S. C. (2006). Cognitive cladistics and cultural override in hominid spatial cognition. PNAS, 103, 17568-17573.
  2. Li, P. & Gleitman, L. (2002). Turning the tables: Language and spatial reasoning. Cognition, 83(3), 265-294.

19 February 2014

Daphna Heller

Assistant Professor, Department of Linguistics, University of Toronto

Perspective-taking behavior as the probabilistic weighing of multiple domains

ABSTRACT:

It is usually assumed that definite descriptions such as "the candle" are used to refer to a candle which is uniquely identifiable relative to a set of entities defined by the situational context. Thus, the interpretation of a definite crucially depends on listeners' ability to correctly construct this situation-specific "referential domain". While there is considerable experimental evidence that listeners are indeed able to use various types of information to construct referential domains in real time, some evidence suggests that this is not the case with respect to information about the other's perspective. The starting point of this talk is two apparently contradictory studies (Keysar et al., 2000; Heller et al., 2008). Previous attempts at reconciliation have focused on the idea that these studies had different situational cues that led listeners to adopt a different perspective. Here I take a different approach: I propose that the apparently contradictory results point to the probabilistic nature of referential domains. Specifically, I present a Bayesian model where situational cues do not lead listeners to choose one domain or the other, but are rather used to determine how multiple domains are combined. I then demonstrate that the findings from Keysar et al. (2000) and Heller et al. (2008) can be replicated in a single experiment. This approach departs from the standard (implicit) assumption that reference resolution proceeds relative to a single domain.
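As a rough illustration of what it means to combine referential domains probabilistically (a toy sketch, not the Bayesian model from the talk), the snippet below mixes a shared-ground domain with the full set of visible objects under a cue-driven weight and then applies how well the description fits each object. The object names, fit values, and the weight are all assumptions.

```python
# Toy sketch (not the model from the talk): reference resolution as a
# probabilistic mixture of two referential domains. A cue-driven weight mixes
# a "shared ground" domain with the full set of visible objects, and the fit
# of the description is applied on top of that mixed prior.
def resolve(referents, shared, description_fit, w_shared=0.7):
    """referents: list of candidate objects
    shared: set of objects in common ground
    description_fit: dict of P(description | object), assumed values
    w_shared: cue-driven weight on the shared-ground domain (assumed)."""
    def domain_prior(domain):
        n = max(len(domain), 1)
        return {r: (1.0 / n if r in domain else 0.0) for r in referents}

    p_shared = domain_prior([r for r in referents if r in shared])
    p_all = domain_prior(referents)

    # Mix the two domain priors, then weight by description fit and normalize.
    scores = {r: (w_shared * p_shared[r] + (1 - w_shared) * p_all[r])
                 * description_fit.get(r, 0.0)
              for r in referents}
    z = sum(scores.values()) or 1.0
    return {r: s / z for r, s in scores.items()}

# Two candle-like objects; only the small one is in common ground.
objects = ["small_candle", "large_candle", "toy_truck"]
print(resolve(objects,
              shared={"small_candle", "toy_truck"},
              description_fit={"small_candle": 0.9, "large_candle": 0.9,
                               "toy_truck": 0.01}))
```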

RECOMMENDED READINGS:

  1. Heller, D., Grodner, D. & Tanenhaus, M. K. 2008. The role of perspective in identifying domains of reference. Cognition, 108, 831-836.
  2. Keysar, B., Barr, D. J., Balin, J. A. & Brauner, J. S. 2000. Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32-37.

26 March 2014

Shimon Edelman

Professor, Department of Psychology, Cornell University

The role of similarity in object and scene representation

ABSTRACT:

The concept of similarity has traditionally played a central explanatory role in cognitive science. With the advent of associative memory algorithms that respect the local similarity structure of the representation space, it has reasserted its importance also in computer vision. In this talk, I shall (i) review conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (ii) mention some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (iii) briefly state a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (iv) present new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing and to concomitant statistics, and finally (v) introduce a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation.
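For readers unfamiliar with similarity-based representation, here is a minimal sketch in the spirit of the Chorus of Prototypes idea (not the actual model): an object is encoded as its vector of similarities to a few stored prototypes, and comparisons are made in that similarity space. The feature vectors, the Gaussian similarity function, and the prototypes are illustrative assumptions.

```python
# Minimal sketch of a similarity-space representation in the spirit of the
# "Chorus of Prototypes" idea. Feature vectors and prototypes are made up.
import numpy as np

def similarity(x, p, sigma=1.0):
    # Gaussian similarity in raw feature space (an assumed choice).
    return np.exp(-np.sum((x - p) ** 2) / (2 * sigma ** 2))

def chorus_encode(x, prototypes):
    # The representation is the vector of similarities to each prototype.
    return np.array([similarity(x, p) for p in prototypes])

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(5, 20))                 # five stored reference shapes
novel_a = prototypes[2] + 0.1 * rng.normal(size=20)   # a view near prototype 2
novel_b = prototypes[4] + 0.1 * rng.normal(size=20)   # a view near prototype 4

code_a = chorus_encode(novel_a, prototypes)
code_b = chorus_encode(novel_b, prototypes)
print("a resembles prototype", int(np.argmax(code_a)))
print("b resembles prototype", int(np.argmax(code_b)))
```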

RECOMMENDED READING

9 April 2014

Joy Hanna

Assistant Professor, Department of Psychology, Daemen College

Language comprehension in face-to-face conversation - how what you know, where you look, and who you are can tell someone what you mean

ABSTRACT:

I will present some of my recent and ongoing work that investigates how people understand language in conversational settings. This work generally has examined how non-linguistic information - such as what both a speaker and listener mutually know (versus what is only known to the listener), where a speaker is looking, and what kinds of descriptions speakers and listeners have used before - affects linguistic interpretation. Taking advantage of head-mounted eyetracking methodology, my work has examined whether the conversationally-based factors listed above can enable a listener to identify a referent even before a speaker refers to it linguistically, or can bias the listener to pay attention to some objects as potential referents more than others. I'll present data and videos that demonstrate that the language processing system does indeed take into account both non-linguistic as well as linguistic sources of information, immediately and incrementally, to restrict the domain of interpretation during comprehension. I will focus largely on my most recent work that examines both the benefits and costs of using a speaker's eyegaze as a cue to referential identity. In particular, I will discuss several studies that indicate that while listeners can rapidly use where a speaker's eyes are pointing to figure out what is meant, they are less able to use a speaker's head orientation. I will also discuss the degree to which some of these conversationally-based cues may be automatically computed, and yet are still a form of perspective-taking.


16 April 2014

Celeste Kidd

Assistant Professor, Department of Brain and Cognitive Sciences, University of Rochester

Curiosity and decision-making in development

ABSTRACT:

Good decision-making requires the decision-maker to generate accurate expectations about what is likely to happen in the future. Adults' decisions, especially those pertaining to attention and learning, are guided by their substantial experience in the world. Very young children, however, possess far less data. In this talk, I will discuss work that explores the mechanisms that guide young children's early visual attention decisions and subsequent learning. I present eye-tracking experiments in both human and non-human primates, which combine behavioral methods and computational modeling to test competing theories of attentional choice. I present evidence that young learners rely on rational utility maximization both to build complex models of the world starting from very little knowledge and, more generally, to guide their behavior. I will also discuss recent results from related ongoing projects about learning and attention in macaque learners, as well as some data on other sorts of decision-making processes in children.
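As a purely illustrative sketch of the kind of idealized-learner modeling used in this literature (not the specific model from the talk), the snippet below scores each event in a sequence by its surprisal under a simple Dirichlet-multinomial learner; a model of attentional choice could then relate looking behavior to such a quantity. The event sequence and the pseudo-count alpha are assumptions.

```python
# Illustrative only: an idealized learner tracks event counts and scores each
# new event by its surprisal under a Dirichlet-multinomial predictive
# distribution. This is a toy, not the model presented in the talk.
import math
from collections import Counter

def surprisal(event, counts, n_event_types, alpha=1.0):
    """Negative log predictive probability of `event` given past counts.
    alpha is a symmetric Dirichlet pseudo-count (assumed value)."""
    total = sum(counts.values())
    p = (counts[event] + alpha) / (total + alpha * n_event_types)
    return -math.log2(p)

sequence = ["A", "A", "B", "A", "C", "A", "A"]   # hypothetical event stream
counts = Counter()
for event in sequence:
    print(event, round(surprisal(event, counts, n_event_types=3), 2), "bits")
    counts[event] += 1
```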

RECOMMENDED READING

  1. What's in a look? Developmental Science.

30 April 2014

Nicole Wicha

Assistant Professor, Department of Biology, University of Texas at San Antonio

When (and how) our brain accesses meaning during language comprehension

ABSTRACT:

My research focuses on understanding the temporal dynamics of brain processes underlying language comprehension. I will discuss a set of representative studies that focus broadly on how, and when, the brain accesses the meaning of words during comprehension. I will discuss work in monolinguals where I show that syntactic cues can be used to validate predictions that were made based on the meaning of a preceding sentence context. I will then discuss some of our work with bilinguals looking at accessing the meaning of words, in isolation or in a sentence context, when two languages are active. I propose that studying the timing of language comprehension in the brain, using techniques like event-related potentials (ERPs), can help constrain anatomical and theoretical models.

RECOMMENDED READINGS

  1. Ng, Shukhan and Nicole Y. Y. Wicha. 2013. Meaning first: a case for language-independent access to word meaning in the bilingual brain. Neuropsychologia, 51(5), 850-863.
  2. Wicha, Nicole Y. Y., Eva M. Moreno and Marta Kutas. 2004. Anticipating words and their gender: an event-related brain potential study of semantic integration, gender expectancy, and gender agreement in Spanish sentence reading. Journal of Cognitive Neuroscience, 16(7), 1272-1288.

Fall Semester

10 September 2014

Gregory Ward

Professor, Department of Linguistics, Northwestern University

A Pragmatic Analysis of a Focus-Indicating Modal: That would be would

ABSTRACT:

In this talk (representing collaborative work), I analyze and compare two copular constructions of English, both with a demonstrative pronoun occurring in subject position: epistemic would equatives and that-equatives (Birner, Kaplan, and Ward 2007; Hedberg 2000; Heller & Wolter 2008; Mikkelsen 2007; inter alia), as illustrated in (1) and (2), respectively:

(1) Cockney rhyming slang, professional jargon, and thieves' cant are all about in-group language and its functions - that would be pragmatics/sociolinguistics. [http://forums.atozteacherstuff.com/] 

(2) G: Who's that up there at the podium? 
     C: That's our guest speaker. 
[G.W. and C.L. in conversation]

Drawing upon a large corpus of naturally-occurring data, I show that the modal in an epistemic would equative serves to mark the FOCUS of the utterance, thus requiring that an OPEN PROPOSITION (in the sense of Prince 1986) be contextually salient, with the post-copular constituent serving as the instantiation of the variable of that open proposition (OP). The information structure of the epistemic would construction accounts for the humorous and/or ironic tone often associated with its use. The that-equative construction, on the other hand, is more constrained. It may also be used to instantiate an OP; however, for that-equatives, unlike epistemic would equatives, such a possibility is determined contextually rather than morpho-syntactically. As for the interpretation of the two constructions, I present the results of a series of empirical studies that show that use of an epistemic would equative conveys a high degree of speaker commitment to the truth of the proposition expressed. Indeed, far from being a marker of tentativeness as has been claimed (Palmer 1990, Perkins 1983), our results suggest that use of epistemic would conveys an even higher degree of speaker certainty than does use of a that-equative.

RECOMMENDED READINGS:

24 September 2014

Peter Q. Pfordresher

Professor, Department of Psychology, University at Buffalo

Sensorimotor translation in singing

ABSTRACT:

Singing is a complex activity that involves the coordination of multiple sensory and motor functions. In this talk I will focus on a particularly critical element: the ability to translate a target pitch (based on memory or a tune you just heard) into motor movements. I will review empirical evidence suggesting that this function may be the most critical element in distinguishing "accurate" versus "inaccurate" singers, as well as cross-sectional developmental data suggesting that this ability requires domain-specific practice. Next, I will describe a model of sensorimotor translation based on the mapping of mental images across modalities. Model simulations and recent behavioral data suggest that distortions of sensory-motor mapping may be the cause of most poor singing. The talk will close with a consideration of pedagogical implications that arise from this research.
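A toy sketch (not the model presented in the talk) of how a distorted sensorimotor mapping could produce systematically inaccurate singing even when memory for the target melody is intact: the gain and bias parameters below are illustrative assumptions only.

```python
# Toy sketch: if the translation from an imagined target pitch to a vocal
# motor command is compressed or shifted, sung imitations are systematically
# distorted even when pitch memory is fine. Parameter values are assumptions.
def sing(target_semitones, gain=1.0, bias=0.0):
    """Map an imagined pitch (semitones from a reference) to a produced pitch.
    gain < 1 models a compressed sensorimotor mapping; bias models a constant
    transposition."""
    return gain * target_semitones + bias

melody = [0, 2, 4, 5, 7]           # target pitches relative to the starting note
accurate = [sing(p) for p in melody]
compressed = [sing(p, gain=0.6) for p in melody]   # an "inaccurate" singer
print("target:    ", melody)
print("accurate:  ", accurate)
print("compressed:", [round(p, 1) for p in compressed])
```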

RECOMMENDED READINGS:

15 October 2014

Michael S. Vitevitch

Professor, Department of Psychology, University of Kansas

Using Network Science in Cognitive Science to examine the mental lexicon

ABSTRACT:

Network science is an emerging discipline drawing from sociology, computer science, physics and a number of other fields to examine complex systems in economic, biological, social, and technological domains. To examine these complex systems, nodes are used to represent individual entities, and connections are used to represent relationships between entities, forming a web-like structure, or network, of the entire system. The structure that emerges in these complex networks influences the dynamics of that system. This approach has also been used to examine complex cognitive systems and has increased our understanding of the brain, psychological disorders, and the mental processes involved in semantic memory and in human collective behavior. I will summarize recent work from my lab that has used this approach to examine the structure found in the mental lexicon. Using conventional psycholinguistic tasks, we further demonstrate that the structural characteristics of the phonological network influence various language-related processes, including word retrieval during the recognition and production of spoken words, recovery from instances of failed lexical retrieval, and the acquisition of word forms. This approach allows researchers to examine multiple levels of the language system, holding much promise for increasing our understanding of language-related processes and representations.
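To give a concrete sense of the approach (a minimal sketch, not the lab's actual analyses), the snippet below builds a toy phonological network with networkx, linking words that differ by a single segment, and reports each word's degree (a rough proxy for neighborhood density) plus the network's clustering coefficient. The word list, and the use of spellings in place of phonemic transcriptions, are simplifying assumptions.

```python
# Toy phonological network: words are nodes; two words are linked if they
# differ by one substitution, insertion, or deletion (spellings stand in for
# phonemic transcriptions here). Illustrative only.
import networkx as nx

def one_step_neighbors(a, b):
    """True if a and b differ by a single substitution, insertion, or deletion."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted((a, b), key=len)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

words = ["back", "bat", "bath", "cap", "cast", "cat", "mat", "mouth"]
G = nx.Graph()
G.add_nodes_from(words)
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        if one_step_neighbors(w1, w2):
            G.add_edge(w1, w2)

print("degree (rough neighborhood density):", dict(G.degree()))
print("average clustering coefficient:", nx.average_clustering(G))
```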

RECOMMENDED READINGS

  1. Vitevitch, Michael S. 2008. What can graph theory tell us about word learning and lexical retrieval? Journal of Speech, Language, and Hearing Research, 51, 408-422.
  2. Newman, Mark. 2008. The physics of networks. Physics Today.

22 October 2014

Dorit Bar-On

Professor, UNC-Chapel Hill Philosophy Department

Gricean Intentions, Expressive Communication, and Origins of Meaning

ABSTRACT:

The task of explaining language evolution is often presented by leading theorists in explicitly Gricean terms. I first offer a critical evaluation of this conceptualization of the explanatory task facing theories of language evolution. In particular, I take issue with the claim that, given a fundamental signaler-receiver asymmetry in animal communication, the main puzzle of language evolution is to explain how signalers could become genuine Gricean communicators. I then motivate, through examination of various animal studies, an alternative, non-Gricean conceptualization of the task, which focuses on the potential of non-Gricean, expressive communication to illuminate the origins of meaning. On the construal of expressive communication I advocate, animal expressers show to their designated audience, without intentionally telling - and the audience directly recognizes, without rationally inferring - the expressers' states of mind. Expressers thereby show - and their audience realizes - how things are in the world, and what to do about them. Recognizing that, like many extant nonhuman animals, our extinct nonhuman predecessors were already proficient - though non-Gricean - sharers of information would free us to focus on a more tractable problem concerning the emergence of meaning. This is the problem of explaining how meaningful linguistic expressive vehicles could come to transform and transcend the nonlinguistic expressive behaviors to which nonhuman animals are consigned.

RECOMMENDED READING:

5 November 2014

James R. Sawusch

Professor, Department of Psychology, University at Buffalo

Lexical Influences on Phonetic Processing: Levels of Processing and Information Flow

ABSTRACT:

In normal language listening, our knowledge of the language influences what we think we hear. The focus in this presentation will be on one aspect of this influence: how the mental lexicon influences phonetic (segmental) processes. The first part will examine the types of information that are embedded in our knowledge of the forms of the words of our language. This will be followed by computational analyses and empirical studies that examine how form-based information in words influences perception. In particular, the focus will be on whether these influences represent an interaction between bottom-up acoustic-phonetic processes and top-down influences of the lexicon. Two sources of information, lexical status ("cat" is a word and "cet" is not) and lexical neighborhood density (words like "back" have many neighbors and words like "mouth" have very few), will be compared to show that the lexicon influences perceptual processing both prior to lexical access and through post-lexical biases.
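As a hedged illustration of how lexical status could bias phonetic categorization (a Ganong-style toy, not the studies described above), the snippet below combines graded acoustic evidence for /g/ versus /k/ with a small bias toward whichever interpretation forms a real word. The mini-lexicon, bias strength, and evidence values are assumptions.

```python
# Toy lexical-bias sketch: acoustic evidence for /g/ vs /k/ is combined with a
# small bias toward the category that yields a real word ("gift" vs "kift",
# "kiss" vs "giss"). Numbers are illustrative assumptions only.
import math

LEXICON = {"gift", "kiss", "cat", "gap"}

def p_g(acoustic_evidence, frame, bias_strength=1.0):
    """P(listener reports /g/) for an ambiguous onset spliced into `frame`,
    e.g. frame='_ift' contrasts 'gift' (word) with 'kift' (nonword)."""
    g_word = frame.replace("_", "g") in LEXICON
    k_word = frame.replace("_", "k") in LEXICON
    lexical_bias = bias_strength * (int(g_word) - int(k_word))
    return 1 / (1 + math.exp(-(acoustic_evidence + lexical_bias)))

# Categorization functions shift toward the word-forming endpoint in each frame.
for frame in ["_ift", "_iss"]:
    print(frame, [round(p_g(x, frame), 2) for x in (-2, -1, 0, 1, 2)])
```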

RECOMMENDED READING

  1. Newman, R. S., Sawusch, J. R., & Luce, P. A. 1997. Lexical neighborhood effects in phonetic processing. Journal of Experimental Psychology: Human Perception and Performance, 23, 873-889.
  2. Clarke-Davidson, C. M., Luce, P. A. & Sawusch, J. R. 2008. Does perceptual learning in speech reflect changes in phonetic category representation or decision bias? Perception & Psychophysics, 70 (4), 604-618.


12 November 2014

Mark Frank

Professor, Department of Communication, University at Buffalo

Lies, damn lies, and CS colloquia

ABSTRACT:

This presentation will discuss the processes that are engaged when a person tells a lie. It will feature a discussion of the factors that may cause more or less of certain behaviors to be exhibited, and present some recent findings that support or refute some of the accepted wisdom on the subject. The presentation will include some video examples of the behaviors and conclude with some thoughts on how this information may be best applied to the world outside the laboratory.

RECOMMENDED READINGS