February 23
March 9
Speaker: Zhanpeng Jin
Associate Professor, Department of Computer Science and Engineering, University at Buffalo, SUNY
Smart wearable devices have recently become one of the major technological trends and have been widely adopted by the general public. Wireless earphones, in particular, have seen skyrocketing growth due to their superb usability and convenience. We argue that earphones are more than a sound-listening accessory: they hold the potential for significant disruptions in the mobile computing era as a new human-computer interaction platform. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of critical anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. As computer scientists have envisioned, next-generation earphones will run augmented reality via 3D sound, have Alexa and Siri whisper just-in-time information, track our motion and health, make authentication seamless, and much more. The leap from today’s earphones to tomorrow’s “earable computing” could mimic the transformation we have seen from feature phones to smartphones. In this talk, I will introduce our recent work on an earphone-based biometric authentication solution that leverages the unique physical and geometrical characteristics of the human ear canal, and on a headphone-based wearable sign language gesture recognition system that breaks down communication barriers between ASL signers and hearing people.
March 16
Speaker: Justin Couchman
Associate Professor, Department of Psychology, Albright College
Students often gauge their performance before and after an exam, usually in the form of rough grade estimates or general feelings. Are these estimates accurate? Should they form the basis for decisions about study time, test-taking strategies, revisions, subject mastery, or even general competence? In several studies in the US and other cultures, undergraduates took a real multiple-choice exam, described their general beliefs and feelings, tracked their performance for each question, and noted any revisions or possible revisions. Beliefs formed before or after the exams were often poor predictors of performance. In contrast, real-time metacognitive monitoring – measured by confidence ratings for each individual question – accurately predicted performance and was a much better guide for decisions. Measuring metacognitive monitoring also allowed us to examine the process of revising an answer. Should a test-taker rely on their first choice or revise in the face of uncertainty? Experience seems to show that first instincts are correct. The decision-making literature, based on extensive analysis of revisions, calls this belief the first-instinct fallacy and recommends revising more. However, whereas revisions have been analyzed in great detail, previous studies did not analyze the efficacy of sticking with an original choice. We found that both revising and sticking resulted in significantly more correct than incorrect outcomes, with real-time metacognition predicting when each was most appropriate.
April 6 (ZOOM ONLY)
Speaker: Paulo Carvalho
Project Scientist, Human-Computer Interaction Institute, Carnegie Mellon University
As the adage goes, “practice makes perfect.” Despite our intuition that practice improves performance, the computational mechanisms of this common phenomenon are not well understood. Instead, current research has focused on describing which kinds of practice make perfect: you will want to space it out, you should alternate topics, and be sure to test yourself as you go (but not too much!). Because we don’t fully understand how these prescriptions operate, considerable evidence from laboratory studies fails to generalize beyond constrained situations, fails to reach applied settings, or does not replicate.
I will present a new mechanistic framework for understanding how practice improves learning. My proposal is that, through memory and attentional processes, practice tunes encoding towards a subset of information in predictable ways. I will present empirical and computational evidence from my past and ongoing research demonstrating that we learn markedly different information from small changes in how we practice. The work I will discuss pushes the boundaries of existing theories towards broader generalizability, replicability, and applicability, with key implications for how to design tailored practice. No one type of practice will improve all learning.
April 20
Speaker: Casey Roark
Postdoctoral Fellow, Department of Communication Science and Disorders, University of Pittsburgh
Everyday behaviors, like interpreting a child’s squeal as thrilled or terrified or understanding diverse acoustic signals from different talkers as the same word, rely on categorization. Although categories are ubiquitous across modalities, the majority of research has focused on visual category learning. Using behavioral and computational modeling approaches, my research aims to understand the mechanisms supporting auditory category learning. In this talk, I will discuss my research in three areas: (1) the mechanisms of learning in naturalistic environments, (2) constraints on learning based on prior experience, and (3) the sources of individual variability in learning outcomes. Together, this research demonstrates that the context in which we learn, our prior experiences, and our goals and abilities all influence category learning processes and outcomes. Understanding whether (and why) an individual will be successful during learning requires consideration of these factors.
May 4
Speaker: Sarah Vincent
Clinical Assistant Professor, Department of Philosophy, University at Buffalo, SUNY
Are humans the only beings capable of empathy? Does the answer to that question change depending on the type of empathy we are discussing? If other animals are capable of empathy, does that matter morally? In this presentation, I will engage with these questions alongside recent studies from cognitive science, ultimately making the case that evidence of empathy in a variety of other species has philosophical implications regarding their moral status. It will be helpful to differentiate some terminology as I pursue this goal. First, we will need to distinguish between three types of related abilities, following de Waal [2008]: (1) emotional contagion, or the ability to match the emotional state of another; (2) sympathetic concern, or the ability to be concerned about another’s emotional state such that you are motivated to try to ameliorate that state; and (3) empathic perspective-taking, or the ability to adopt the perspective of another such that you experience emotional arousal. The latter two are sometimes jointly called cognitive empathy. Second, we will need to distinguish between three types of moral status, following Rowlands [2012]: (1) moral patients are those who are objects of moral concern but who do not possess moral responsibilities; (2) moral subjects are those who are objects of moral concern and who can be motivated to act by morally laden emotions; and (3) moral agents are those who are objects of moral concern and who can be motivated to act by moral reflection. Putting all of this together, then, I defend the claim that those nonhuman animals whose behavior indicates the presence of cognitive empathy are – at least – moral subjects.
September 21st, 2pm
Speaker: Federica Bulgarelli
Assistant Professor, Department of Psychology, University at Buffalo, SUNY
How do learners contend with variability in the input? In the case of language learning, learners encounter input that varies along many dimensions. For example, words in the real world sound different every time they are produced, even when spoken by a single talker, and variability increases exponentially between different talkers, who may speak with different regional or non-native accents. I will present a set of studies investigating how learners process input from varying talkers, and how this in turn impacts learning. Using both experimental methods and corpus analyses, my research program explores how the process and outcome of learning are shaped by inevitable variability in linguistic experience, extending our knowledge about learning in naturally varying environments.
September 28th, 2pm
Speaker: Mishaela DiNino
Assistant Professor, Department of Communicative Disorders and Sciences, University at Buffalo, SUNY
Following a conversation in a noisy room requires adequate attention to a target talker. While speech perception in noise can be challenging for those with hearing loss, some people with normal hearing thresholds struggle to understand speech in noisy listening situations. This phenomenon has been termed “hidden” hearing loss because it cannot be detected with a traditional hearing test, and its causes are unclear. In this talk, I will explain why auditory attention is important for listening, especially in situations with background noise, and describe how impaired auditory attention might be one cause of hidden hearing loss.
October 19th, 2pm
Speaker: Ariel James
Assistant Professor, Department of Psychology, Macalester College
A listener's eye movements can reveal how and when they integrate linguistic and visual information in real time. Eye movements are a dynamic and sensitive measure that can be easily manipulated by factors of interest, but these same properties make it challenging to capture stability in the measure. In this talk, I will describe some of my work with language-mediated eye movements, including a project connecting anticipatory eye movements with measures of language experience and current work exploring webcam-based methods for tracking eye movements.
October 26th, 2pm
Speaker: Andrés Buxó-Lugo
Assistant Professor, Department of Psychology, University at Buffalo, SUNY
Interpreting a sentence can be characterized as a rational process in which comprehenders integrate linguistic input with top-down knowledge (e.g., plausibility). One type of evidence for this is that comprehenders sometimes reinterpret sentences to arrive at interpretations that conflict with the original language input (e.g., Ferreira, 2003; Gibson et al., 2013). Does this reflect a reinterpretation of only the message, or also of earlier stages of linguistic representation, such as the syntactic parse? The present study relies both on comprehension questions as a measure of the eventual interpretation (as in past work) and on syntactic priming as an implicit measure of the eventual parse of a sentence. Plausible dative sentences yielded a classic syntactic priming effect. Implausible dative sentences, for which a plausible alternative version corresponded to the alternate dative structure, not only tended to be interpreted as the plausible alternative but also showed no priming effect from the perceived syntactic structure. These results suggest that the plausibility of a message can impact not only the interpretation of a perceived sentence but also its underlying syntactic representation.
November 9th, 2pm
Speaker: Michael-Paul Schallmo
Assistant Professor, Department of Psychiatry, University of Minnesota
Both psychotic disorders and hallucinogenic drugs can have profound effects on visual perception, but the neural basis of these phenomena remains unknown. Because of the vivid hallucinations they induce, some have theorized that psychedelic drugs such as psilocybin (i.e., magic mushrooms) produce a psychotic-like state in the brain. However, due to prohibition and stigma over the last 50 years, relatively little is known about the precise effects of psilocybin on visual functioning. One aspect of visual perception that may be particularly affected in both psychotic and psychedelic states is context processing, or the influence of surrounding objects on the appearance of a target (e.g., camouflage). We and others have shown that people with psychosis spectrum disorders (i.e., schizophrenia and bipolar disorder) report weaker effects of surrounding context during perception of a visual illusion known as surround suppression. In this talk, I will describe what is known from previous and ongoing work in my laboratory about how perception of this context illusion differs among people with psychosis. In addition, I will present results from our placebo-controlled pilot study (n = 6) examining the effects of psilocybin on visual context perception, and I will draw comparisons between these two altered states of visual consciousness. Although both can induce visual hallucinations, our findings suggest that psychosis and psilocybin may have profoundly different effects on neural and perceptual functioning during visual context perception.
December 7th, 2pm
Speaker: Bill Palmer
Associate Professor, School of Humanities, Creative Industries and Social Sciences, University of Newcastle
Representing physical space in language and in the mind is fundamental to humans. However, the way languages represent physical space varies widely, and a substantial body of research shows cross-linguistic diversity in strategies for expressing spatial relations (Levinson 2003; Levinson & Wilkins 2006; Majid et al. 2004 inter alia). Yet individual languages are not single data points. Considerable variation exists in spatial language and non-linguistic behaviour within language communities. Moreover, individual speakers may display significant variation in spatial strategy depending on a range of factors such as task, scale, interlocutor, or the topography of the locus of speaking. Languages make a range of spatial frame strategies available to speakers, and individuals vary in which strategies they employ, to what extent, and in which contexts. A diverse array of factors interact to drive this variation, from terrain to group-level cultural practices and associations, from individual demographic diversity to innate cognitive biases.
Recent quantitative and qualitative studies have shown patterns of spatial strategy use correlating with demographic factors such as occupation, age, gender, education, literacy, bilingualism and more (e.g. Adamou 2017; Bohnemeyer 2011; Cerqueglini 2022; Danziger 1999; Dunn et al. 2021; Edmonds-Wathen 2022; Lawton 2001; Lin 2022; Meakins et al. 2016; Shapero 2017). In some cases these demographic factors appear to be proxies for the nature and intensity of engagement with the environment. For example, in Dhivehi (Maldives), all speakers in a recent study make some use of both geocentric and egocentric strategies, but fishermen and sailors make much greater use of geocentric strategies than indoor workers, who employ more egocentric strategies. But community-level factors also play a role. Even indoor workers on islands where fishing is the dominant subsistence mode make more use of geocentric strategies than indoor workers on islands where indoor work and small-scale farming dominate (Lum 2018; Palmer et al. 2017). The physical environment itself (urban v rural, inland v coastal, mountain v riverine, regular v irregular built environment, etc.) also plays a role in shaping representations of space, both between and within language communities (Palmer 2015; e.g. Ameka & Essegbey 2006; Dasen & Mishra 2010; Lawton 2001; Pederson 2006). A history of language contact appears to play a role alongside the environment (e.g. Heegård & Liljegren 2018). However, no single factor is deterministic.
Recent discussions of spatial language and cognition increasingly recognize that spatial representations are shaped by an interplay of environmental, social, cultural, and linguistic factors that is much more complex than previously recognized (Bohnemeyer et al. 2014, 2015; Dasen & Mishra 2010; Palmer et al. 2017). The theory of sociotopography (Lum et al. 2022; Palmer 2022; Palmer et al. 2017, 2018, f.c.) attempts to model the factors at play in the construction of representations of space. In this model, spatial behaviour, both linguistic and nonlinguistic, results from the complex multidirectional interaction of perceptually salient topography, the affordances of that topography, sociocultural practices and associations assigned to aspects of the landscape, the nature of each person’s individual engagement with the world, the linguistic repertoire available, and practices of language use.