Regular colloquia are Wednesdays, 2:00 P.M. – 4:00 P.M., in 280 Park Hall (unless otherwise noted), North Campus, and are open to the public. To receive email announcements of each event, please subscribe to one of our mailing lists by clicking the link that best describes you: student, UB Faculty and Staff, or Non-UB Cognitive Scientist. You can also subscribe to our calendar.
Background readings for each lecture are available to UB faculty and students on UB Learns. To access, please log in to UB Learns and select "Center for Cognitive Science" → "Course Documents" → "Background Readings for (Semester/Year)." If you are affiliated with UB and do not have access to the UB Learns website, please contact Eduardo Mercado III, director of the Center for Cognitive Science.
Speaker: Zhanpeng Jin
Associate Professor, Department of Computer Science and Engineering, University at Buffalo, SUNY
Smart wearable devices have recently become one of the major technological trends and have been widely adopted by the general public. Wireless earphones, in particular, have seen skyrocketing growth due to their superb usability and convenience. We argue that earphones are more than a sound-listening accessory: they hold the potential for significant disruption in the mobile computing era as a new human-computer interaction platform. As a sensing platform, the ears are less susceptible to motion artifacts and lie in close proximity to a number of critical anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. As computer scientists have envisioned, next-generation earphones will run augmented reality via 3D sound, have Alexa and Siri whisper just-in-time information, track our motion and health, make authentication seamless, and much more. The leap from today’s earphones to tomorrow’s “earable computing” could mimic the transformation we have seen from feature phones to smartphones. In this talk, I will introduce our recent work on earphone-based biometric authentication, which leverages the unique physical and geometrical characteristics of the human ear canal, and on a headphone-based wearable sign language gesture recognition system that breaks down communication barriers between ASL signers and hearing people.
Speaker: Justin Couchman
Associate Professor, Department of Psychology, Albright College
Students often gauge their performance before and after an exam, usually in the form of rough grade estimates or general feelings. Are these estimates accurate? Should they form the basis for decisions about study time, test-taking strategies, revisions, subject mastery, or even general competence? In several studies in the US and other cultures, undergraduates took a real multiple-choice exam, described their general beliefs and feelings, tracked their performance for each question, and noted any revisions or possible revisions. Beliefs formed before or after the exams were often poor predictors of performance. In contrast, real-time metacognitive monitoring – measured by confidence ratings for each individual question – accurately predicted performance and was a much better decisional guide. Measuring metacognitive monitoring also allowed us to examine the process of revising an answer. Should a test-taker rely on their first choice or revise in the face of uncertainty? Experience seems to show that first instincts are correct. The decision-making literature, based on extensive analysis of revisions, calls this belief the first-instinct fallacy and recommends revising more. However, whereas revisions have been analyzed in great detail, previous studies did not analyze the efficacy of sticking with an original choice. We found that both revising and sticking resulted in significantly more correct than incorrect outcomes, with real-time metacognition predicting when each was most appropriate.
April 6 (Zoom only)
Speaker: Paulo Carvalho
Project Scientist, Human-Computer Interaction Institute, Carnegie Mellon University
As the adage goes, “practice makes perfect.” Despite our intuition that practice improves performance, the computational mechanisms of this common phenomenon are not well understood. Instead, current research has focused on describing which practice makes perfect: you should space it out, alternate topics, and be sure to test yourself as you go (but not too much!). Because we don’t fully understand how these prescriptions operate, considerable evidence from laboratory studies fails to generalize beyond constrained situations, fails to reach applied settings, or does not replicate.
I will present a new mechanistic framework to understand how practice improves learning. My proposal is that through memory and attentional processes, practice tunes encoding towards a subset of information in predictable ways. I will present empirical and computational evidence from my past and ongoing research demonstrating that we learn markedly different information from small changes in how we practice. The work I will discuss pushes the boundaries of existing theories towards broader generalizability, replicability, and applicability with key implications for how to design tailored practice. No one type of practice will improve all learning.
Speaker: Casey Roark
Postdoctoral Fellow, Department of Communication Science and Disorders, University of Pittsburgh
Everyday behaviors like interpreting a child’s squeal as thrilled or terrified or understanding diverse acoustic signals from different talkers to be the same word rely on categorization. Although categories are ubiquitous across modalities, the majority of research has focused on visual category learning. Using behavioral and computational modeling approaches, my research aims to understand the mechanisms supporting auditory category learning. In this talk, I will discuss my research in three areas: (1) the mechanisms of learning in naturalistic environments, (2) constraints on learning based on prior experience, and (3) the sources of individual variability in learning outcomes. Together, this research demonstrates that the context in which we learn, our prior experiences, and our goals and abilities influence category learning processes and outcomes. Understanding whether (and why) an individual will be successful during learning requires consideration of these factors.
Speaker: Sarah Vincent
Clinical Assistant Professor, Department of Philosophy, University at Buffalo, SUNY
Are humans the only beings capable of empathy? Does the answer to that question change depending on the type of empathy we are discussing? If other animals are capable of empathy, does that matter morally? In this presentation, I will engage these questions along with recent studies from cognitive science, ultimately making the case that evidence of empathy in a variety of other species has philosophical implications regarding their moral status. It will be helpful to differentiate some terminology as I pursue this goal. First, we will need to distinguish between three types of related abilities, following de Waal: (1) emotional contagion, or the ability to match the emotional state of another; (2) sympathetic concern, or the ability to be concerned about another’s emotional state such that you are motivated to try to ameliorate that state; and (3) empathic perspective-taking, or the ability to adopt the perspective of another such that you experience emotional arousal. The latter two are sometimes jointly called cognitive empathy. Second, we will need to distinguish between three types of moral status, following Rowlands: (1) moral patients are those who are objects of moral concern but who do not possess moral responsibilities; (2) moral subjects are those who are objects of moral concern and who can be motivated to act by morally laden emotions; and (3) moral agents are those who are objects of moral concern and who can be motivated to act by moral reflection. Putting all of this together, then, I defend the claim that those nonhuman animals whose behavior indicates the presence of cognitive empathy are – at least – moral subjects.
September 21, 2:00 P.M.
Speaker: Federica Bulgarelli
Assistant Professor, Department of Psychology, University at Buffalo, SUNY
How do learners contend with variability in the input? In the case of language learning, learners encounter input that varies along many dimensions. For example, words in the real world sound different every time they are produced, even when spoken by a single talker, and variability increases exponentially between different talkers, who may speak with different regional or non-native accents. I will present a set of studies investigating how learners process input from varying talkers, and how this in turn impacts learning. Using both experimental methods and corpus analyses, my research program explores how the process and outcome of learning are shaped by inevitable variability in linguistic experience, extending our knowledge about learning in naturally varying environments.
September 28, 2:00 P.M.
Speaker: Mishaela DiNino
Assistant Professor, Department of Communicative Disorders and Sciences, University at Buffalo, SUNY
Following a conversation in a noisy room requires adequate attention to a target talker. While speech perception in noise can be challenging for those with hearing loss, some people with normal hearing thresholds struggle to understand speech in noisy listening situations. This phenomenon has been termed “hidden” hearing loss because it cannot be detected with a traditional hearing test, and its causes are unclear. In this talk, I will explain why auditory attention is important for listening, especially in situations with background noise, and describe how impaired auditory attention might be one cause of hidden hearing loss.
October 19, 2:00 P.M.
Speaker: Ariel James
Assistant Professor, Department of Psychology, Macalester College
October 26, 2:00 P.M.
Speaker: Andrés Buxó-Lugo
Assistant Professor, Department of Psychology, University at Buffalo, SUNY
Interpreting a sentence can be characterized as a rational process in which comprehenders integrate linguistic input with top-down knowledge (e.g., plausibility). One type of evidence for this is that comprehenders sometimes reinterpret sentences to arrive at interpretations that conflict with the original language input (e.g., Ferreira, 2003; Gibson et al., 2013). Does this reflect a reinterpretation of only the message, or also of earlier stages of linguistic representation such as the syntactic parse? The present study relies both on comprehension questions as a measure of the eventual interpretation (as in past work) and on syntactic priming as an implicit measure of the eventual parse of a sentence. Plausible dative sentences yielded a classic syntactic priming effect. Implausible dative sentences, for which a plausible alternative version corresponded to the alternate dative structure, not only tended to be interpreted as the plausible alternative, but also showed no priming effect from the perceived syntactic structure. These results suggest that the plausibility of a message can impact not only the interpretation of a perceived sentence, but also its underlying syntactic representation.
November 9, 2:00 P.M.
Speaker: Michael-Paul Schallmo
Assistant Professor, Department of Psychiatry, University of Minnesota
December 7, 2:00 P.M.
Speaker: Bill Palmer
Associate Professor, School of Humanities, Creative Industries and Social Sciences, University of Newcastle