January 31
February 7
Speaker: Sarah Muldoon
Associate Professor, Department of Mathematics, University at Buffalo, SUNY
Personalized Brain Network Models (BNMs) are computational tools that simulate a specific individual’s brain activity based on measured structural brain connections. These models have been shown to be sensitive to individual differences in brain network structure and allow one to perform in silico experiments to make predictions about the effects of stimulation, disease progression, or drug treatment for a specific individual. I will describe how one builds such computational models from neuroimaging data and present work using personalized BNMs to explore individual differences in brain structure and function.
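For readers unfamiliar with this class of model, here is a minimal sketch of the basic ingredient: node dynamics coupled through a subject-specific structural connectivity matrix. The Kuramoto-style dynamics and the random connectome below are illustrative assumptions, not Muldoon's actual model.

```python
# Minimal sketch of a personalized brain network model. The node dynamics are
# assumed (simple Kuramoto-style phase oscillators); the structural
# connectivity matrix would normally come from a subject's diffusion MRI.
import numpy as np

def simulate_bnm(connectivity, coupling=0.5, dt=0.01, steps=5000, seed=0):
    """Simulate phase oscillators coupled on a weighted structural connectome."""
    rng = np.random.default_rng(seed)
    n = connectivity.shape[0]
    omega = rng.normal(loc=2 * np.pi, scale=0.5, size=n)   # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, size=n)               # initial phases
    history = np.empty((steps, n))
    for t in range(steps):
        phase_diff = theta[None, :] - theta[:, None]         # theta_j - theta_i
        theta = theta + dt * (omega + coupling * (connectivity * np.sin(phase_diff)).sum(axis=1))
        history[t] = theta
    return history

# Example: a random symmetric "connectome" standing in for subject data.
C = np.abs(np.random.default_rng(1).normal(size=(10, 10)))
C = (C + C.T) / 2
activity = simulate_bnm(C)
```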
February 28
Speaker: Robert Hawkins
Assistant Professor, Department of Psychology, University of Wisconsin - Madison
Why do we use language differently with different partners? In this talk, I will argue for a computational approach to sociolinguistics, one that formalizes the obstacles standing in the way of effective communication and explains how people construct shared meaning to achieve their communicative goals with different audiences. Specifically, I'll present a computational model of partner-specific coordination and convention via hierarchical Bayesian inference, in which feedback from a partner is used to update one's beliefs about what is meaningful to them. I test predictions of the model in two natural-language communication experiments in which participants are grouped into small communities for a referential communication task. Finally, I'll discuss ongoing work exploring broader implications across four areas: (1) code-switching and the relationship between language and social identity, (2) neural mechanisms of common ground in a hyperscanning study, (3) developmental trajectories of sociolinguistic competence, and (4) artificial agents that can flexibly construct meaning with human partners.
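As a rough illustration of the idea (not Hawkins's actual model), the sketch below maintains a partner-specific belief over word meanings alongside a community-level prior and nudges both when feedback arrives; the referents, weights, and update rule are all invented for the example.

```python
# Crude stand-in for hierarchical Bayesian inference over conventions: a
# listener keeps separate pseudo-counts per partner plus a shared community
# prior, and updates both levels after feedback about what a partner meant.
import numpy as np

referents = ["circle", "square", "triangle"]
community_prior = np.ones(len(referents))   # shared, community-level pseudo-counts
partner_counts = {}                         # partner-specific pseudo-counts

def belief(partner):
    counts = community_prior + partner_counts.get(partner, 0)
    return counts / counts.sum()

def observe(partner, referent_index, partner_weight=2.0, community_weight=0.5):
    """After feedback that this partner meant a given referent, update both levels."""
    update = np.zeros(len(referents))
    update[referent_index] = 1.0
    partner_counts[partner] = partner_counts.get(partner, np.zeros(len(referents))) + partner_weight * update
    community_prior[:] = community_prior + community_weight * update

observe("partner_A", referents.index("square"))
print(belief("partner_A"))   # shifted strongly toward "square" for this partner
print(belief("partner_B"))   # shifted only slightly, via the community level
```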
March 13
Speaker: Jessica Huber
Professor, Department of Communicative Disorders and Sciences, University at Buffalo, SUNY
In this talk, I will review what I have learned about how we control our respiratory (breathing) system to ensure successful communication exchanges. Prior to the 1990s, the respiratory system was viewed as a “pump” – simply producing pressure that was modulated by the rest of the speech system to make communication possible. I will review evidence that motor planning and execution of the respiratory system is adjusted to achieve specific communication goals (in a variety of tasks, environments, and contexts). I will also discuss how language and cognitive load can impact how we breathe and speak. I will apply these principles to people with communication disorders due to neurodegenerative diseases (like Parkinson disease) and to the impact of treatments on respiratory planning and execution.
March 27
Speaker: Valerie Langlois
Post-doctoral Researcher, Department of Psychology and Neuroscience, University of Colorado - Boulder
Understanding the processes involved in how language ambiguity is resolved has been a central question in psycholinguistics. One proposal is that domain-general processes like cognitive control play a role in resolving conflict between linguistic representations. However, there is a lack of consensus about which sorts of language processing challenges do or do not engage cognitive control. I will present new evidence showing that theta-band oscillations (4–8 Hz) can index cognitive control engagement. In “While Anna dressed the baby spit up…”, people must decide between a frequent but syntactically unsupported interpretation (Anna dressing the baby) and a syntactically licensed but improbable interpretation (Anna dressing herself). In these sentences, we find an increase in theta-band power. In contrast, sentences that lack conflict but were equally difficult to process did not elicit this effect. Lastly, I will show that linear classifiers can successfully decode offline interpretation from EEG activity recorded during sentence presentation.
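As background on the measure itself, a minimal sketch of extracting theta-band (4–8 Hz) power from a single EEG channel is shown below; the filter settings and synthetic signal are assumptions for illustration, not the study's analysis pipeline.

```python
# Theta-band power from one EEG channel via a band-pass filter and the
# Hilbert envelope. scipy is assumed to be available; parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_power(eeg, fs, low=4.0, high=8.0, order=4):
    """Return the instantaneous theta-band power of a 1-D EEG signal."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(filtered))
    return envelope ** 2

# Example with synthetic data: noise plus a 6 Hz component.
fs = 500
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
power = theta_power(signal, fs)
```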
April 24
Speaker: Kimele Persaud
Assistant Professor, Department of Psychology, Rutgers University - Newark
Theories dating back as early as the 1930s suggest that the retrieval of information from memory is a reconstructive process. That is, past experience provides a means to “fill in” partially stored information later. Yet it is not fully understood how we integrate our prior knowledge with incomplete episodic information during reconstructive memory, what this integration process looks like across developmental groups, and what happens when our prior knowledge is inconsistent with the to-be-reconstructed information. In this talk, I will discuss several lines of research that employ computational models to understand how the mind uses prior knowledge to compensate for noisy and incomplete memories, how this process changes across development, and how the integration is influenced by congruency between prior knowledge and newly acquired information.
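One common formalization of this idea treats the remembered value as a precision-weighted blend of a noisy episodic trace and prior knowledge; the short sketch below illustrates that computation with made-up numbers and is not the speaker's specific model.

```python
# Precision-weighted reconstruction: the posterior mean for a Gaussian prior
# combined with a Gaussian memory trace. Values below are purely illustrative.
def reconstruct(trace, trace_precision, prior_mean, prior_precision):
    """Blend a noisy episodic trace with prior knowledge, weighting by precision."""
    total = trace_precision + prior_precision
    return (trace_precision * trace + prior_precision * prior_mean) / total

# A noisier memory (lower trace_precision) is pulled more toward the prior.
print(reconstruct(trace=10.0, trace_precision=4.0, prior_mean=0.0, prior_precision=1.0))  # 8.0
print(reconstruct(trace=10.0, trace_precision=0.5, prior_mean=0.0, prior_precision=1.0))  # ~3.33
```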
May 1
Speaker: Lilia Rissman
Assistant Professor, Department of Psychology, Rochester Institute of Technology
Research on the interface between language and conceptual knowledge is often framed in a dichotomous way: as either emphasizing the role of universals in shaping this interface and downplaying variability, or the reverse, emphasizing the presence of variability and downplaying universals. Focusing on studies of event roles, verb semantics, and morphosyntax, I argue that we should adopt a perspective that centers both universality and variability. Otherwise, we risk conceptualizing the interplay between universality and variability as a zero-sum game, in which empirical contributions from a given side of the debate are always weighted more strongly than contributions from the other side.
September 2, 2 p.m.
TBA
September 18, 2 p.m.
Speaker: Elise Piazza
Assistant Professor, Department of Brain and Cognitive Sciences, University of Rochester
Communication is inherently social and requires an efficient exchange of complex cues between speakers and listeners. However, language processing is typically studied using individual listeners and simplistic stimuli. What are the interpersonal mechanisms that allow us to connect with and learn from others across the lifespan? My lab studies everyday interactions using behavioral, computational, and dual-brain neuroimaging techniques in real-life environments. To understand the real-time dynamics of communication at the biological level, I have used brain-to-brain coupling (child-caregiver, adult-adult) as a measure of interpersonal alignment to predict communicative success and learning outcomes. In one fNIRS study, we found that activation in the infant prefrontal cortex preceded and drove similar activation in the adult brain, a result that advances our understanding of children’s influence over the accommodative behaviors of caregivers. In ongoing work using dual-brain EEG during adult dialogue, we are exploring the causal relationship between representations of fine-grained linguistic features, speaker-listener coupling, and overall communication quality. Across several studies, we have developed new methods for quantifying the acoustic and semantic structure of naturalistic speech in different communicative modes (e.g., infant-directed speech, podcasting to diverse audiences) and measuring how this structure relates to overall impressions of a speaker (e.g., creativity, personality). This collection of findings provides a new understanding of how our brains and behaviors both shape and reflect different audiences during everyday communication.
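As a simple illustration of what a lead-lag analysis between two brains can look like (not the lab's actual pipeline), the sketch below computes a lagged correlation between two synthetic time series in which one signal precedes the other.

```python
# Lagged correlation between two signals, e.g., an infant channel and a
# caregiver channel; the data here are synthetic and the lag convention is assumed.
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x with y shifted by each lag (positive lag: x leads y)."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(0)
infant = rng.normal(size=200)
adult = np.roll(infant, 3) + 0.5 * rng.normal(size=200)   # adult trails infant by 3 samples
corrs = lagged_correlation(infant, adult, max_lag=5)
print(max(corrs, key=corrs.get))   # peak near lag 3: the infant signal leads
```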
October 2, 2 p.m.
Speaker: Indranil Goswami
Assistant Professor of Marketing, School of Management, University at Buffalo
People regularly encounter series of positive numbers in everyday life, such as price lists, expense streams, amounts saved, calories consumed, or time estimates for work tasks or household chores. Often, people must integrate this information in real time to make judgments and decisions. Such integration can take several forms, including forming an impression of a count, an average, or a sum. We refer to people’s perception of running totals as intuitive sums, and we show that people’s intuitive summation is systematically lower than the actual value—a phenomenon we call Undersum bias. The phenomenon does not occur for intuitive estimates of averages. Undersum bias is robust to whether participants have the numbers before them when estimating the sum, suggesting a limited role of errors due to memory retrieval. The bias is absent for sequences generated with numbers in the subitizing range, suggesting a plausible role of initial encoding. Consistent with this, the accuracy of memory recall is significantly correlated with intuitive-sum estimates in both magnitude and direction. Undersum bias has both practical and theoretical implications. We find that it is a novel cognitive antecedent of overconsumption behavior and highlight how individuals’ representations of their consumption are an understudied and theoretically relevant determinant of self-control failure.
October 16, 2 p.m.
Speaker: Ken Regan
Professor, Department of Computer Science and Engineering, University at Buffalo
My "Fidelity" model of human move-choice at chess resembles utility-based predictive models that gauge risk and forecast consumer behavior. The model puts a probability on every possible move in any chess position based on utility values for those moves given by strong chess programs and parameters denoting the skill of a player or players P. The parameters have a strong many-one correspondence to the Elo rating system for measuring skill at chess. Those probabilities generate both projections and internal confidence intervals for aggregate statistics such as the projected rate at which P will play the same move as the chess program and the total utility loss from inferior moves. The outputs are z-scores quantifying the unlikelihood of P's deviations from the projections and the Intrinsic Performance Rating (IPR) of P's moves in the games. They are used by the International Chess Federation (FIDE) and various national federations to help arbitrate allegations of players cheating with strong computer programs in human-only matches.
The model also promotes general research on human decision making in real competitive settings whose evaluation metrics are robust and long established. When and why does a player decide to stop thinking and make a move? How does reduced thinking time affect quality? One surprising new result is that moves on which P thought for a long time have vastly lower IPR than moves made almost instantly. Can broader psychological tendencies be mapped into chess and thereby be validated in a data-rich environment? How can we distinguish results indicating human psychological phenomena from artifacts of the model's construction and graininess? The talk will more broadly address modes of cross-validation, the "replication crisis", and the general conduct of science as employed in forecasting, and will then open to general Q&A.
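To give a flavor of this style of model (the details below are illustrative assumptions, not Regan's actual Fidelity model), one can turn engine utilities into move probabilities with a skill-like sensitivity parameter, project an engine-match rate, and convert an observed match count into a z-score.

```python
# Toy move-choice model: softmax over engine utilities with a sensitivity
# parameter standing in for skill; utilities and positions below are made up.
import math

def move_probabilities(utilities, sensitivity):
    """Softmax over engine utilities; higher sensitivity means stronger play."""
    weights = [math.exp(sensitivity * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def projections(positions, sensitivity):
    """Projected rate of matching the engine's top move, and its variance."""
    expected, variance = 0.0, 0.0
    for utilities in positions:
        probs = move_probabilities(utilities, sensitivity)
        p_top = probs[max(range(len(utilities)), key=utilities.__getitem__)]
        expected += p_top
        variance += p_top * (1 - p_top)
    return expected, variance

# Per-position engine utilities (in pawns) for the legal moves, purely illustrative.
positions = [[0.3, 0.1, -0.2], [0.0, -0.1, -0.5, -0.6], [1.2, 0.4]]
expected, variance = projections(positions, sensitivity=3.0)
observed_matches = 3
z = (observed_matches - expected) / math.sqrt(variance)
print(round(expected, 2), round(z, 2))
```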
October 30, 2 p.m.
Speaker: John Beverley
Assistant Professor, Department of Philosophy, University at Buffalo
Too much data has been collected, in too many different fields, using too many different methods, to possibly sort through manually. If our collection efforts are ever to be justified, we need a way to make these data interoperable and comprehensible with minimal cognitive effort. The field I work in - Applied Ontology - includes these tasks among its aims. Applied Ontology intersects metaphysics, epistemology, logic, artificial intelligence, and data science. Simplifying a bit, we use philosophical insights to help subject-matter experts - in domains ranging across physics, virology, immigration law, and psychology, among others - organize datasets to promote interoperability, identify novel predictions, and extract implicit information. In this talk, I will introduce you to the field of Applied Ontology: its history, its many present employment opportunities, and its future. Throughout, I will provide examples of interesting applications one encounters when working with experts in other fields and highlight reasons why the applied ontology toolkit is particularly well-suited to this work.
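As a toy illustration of what "extracting implicit information" means here (a deliberately simplified example, not a real applied-ontology workflow), stating only direct is-a links lets a reasoner, or a few lines of code, recover classifications that were never asserted.

```python
# Direct is-a assertions; the class names are invented for the example.
is_a = {
    "Influenza": "ViralInfectiousDisease",
    "ViralInfectiousDisease": "InfectiousDisease",
    "InfectiousDisease": "Disease",
}

def ancestors(term):
    """All classes a term falls under, including those never stated directly."""
    result = []
    while term in is_a:
        term = is_a[term]
        result.append(term)
    return result

print(ancestors("Influenza"))   # ['ViralInfectiousDisease', 'InfectiousDisease', 'Disease']
```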
December 4, 2 p.m.
Speaker: Lian Arzbecker
Postdoctoral Associate, Department of Communicative Disorders and Sciences, University at Buffalo
TBA