February 15
Speaker: Matthew Bolton
Assistant Professor, Industrial and Systems Engineering, University at Buffalo
Abstract:
Breakdowns in complex systems frequently occur as a result of system elements interacting in ways unanticipated by designers. In systems with human operators, human behavior (both normative and erroneous) is often associated with such failures. These problems are difficult to predict because they are emergent features of the complex interactions that occur between the human operator and the other elements of the system. Model checking is a technology that uses automated proof techniques to discover failures in formal models of computer hardware and software by considering all of the possible interactions. In this talk I will describe novel methods and tools I have developed to allow systems engineers to use task analytic models of human behavior, taxonomies of erroneous human behavior, formal system modeling, and model checking to predict when normative and potentially unanticipated erroneous human-system interactions can contribute to system failure. I will demonstrate how these techniques can be used in the analysis and design of a realistic medical system: a patient-controlled analgesia pump.
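At its core, a model checker exhaustively enumerates every reachable state of a formal model and reports any state that violates a stated property. The following minimal Python sketch illustrates that idea on an invented toy model of a patient-controlled analgesia pump operated by button presses; the dose limits, action set, and safety property are illustrative assumptions, not the speaker's actual formalism or tools:

from collections import deque

MAX_DOSE = 5   # hypothetical hardware limit on the programmable dose
MAX_SAFE = 3   # hypothetical threshold above which infusion is unsafe

def operator_actions(state):
    """All modeled operator actions, normative and erroneous alike."""
    dose, infusing = state
    yield (min(dose + 1, MAX_DOSE), infusing)  # press "up"
    yield (max(dose - 1, 0), infusing)         # press "down"
    yield (dose, True)                         # press "start" (erroneous at high doses)
    yield (dose, False)                        # press "stop"

def model_check(initial=(0, False)):
    """Breadth-first search over every reachable state; return a state that
    violates the safety property (never infuse above MAX_SAFE), if any."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        dose, infusing = state = frontier.popleft()
        if infusing and dose > MAX_SAFE:
            return state                       # counterexample found
        for nxt in operator_actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                                # property holds in every reachable state

print(model_check())  # -> (4, True): an unsafe human-system interaction is reachable

Real model checkers apply the same exhaustive exploration to vastly larger state spaces, with task analytic models supplying the operator's possible behaviors.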
Recommended Readings:
March 1
Speaker: Brendan Johns
Assistant Professor, Communicative Disorders and Sciences, University at Buffalo
Abstract:
The collection of large text sources has revolutionized the field of natural language processing, and has led to the development of many different models that are capable of extracting sophisticated semantic representations of words based on the statistical redundancies contained within natural language. However, these models are trained on text bases that are random collections of language written by many different authors, designed to represent one's average experience with language. This talk will focus on two main issues: 1) how variable the usage of language is across individuals, and 2) how this variability can be used to optimize models of natural language processing. It will be shown that by optimizing models of natural language based on the same lexical variability that humans experience, it is possible to attain benchmark fits to a wide variety of lexical tasks.
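As a concrete illustration of the kind of model the abstract refers to, here is a minimal count-based sketch in Python: words that occur in similar contexts receive similar vectors. The toy corpus, window size, and raw-count weighting are invented for illustration; the models discussed in the talk are trained on far larger text bases with more sophisticated learning mechanisms.

import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Build count-based semantic vectors from the statistical redundancies
    (co-occurrence patterns) in a corpus."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = ["the doctor treated the patient",
          "the nurse treated the patient",
          "the dog chased the cat"]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["doctor"], vecs["nurse"]))  # similar contexts -> 1.0
print(cosine(vecs["doctor"], vecs["cat"]))    # dissimilar contexts -> ~0.63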
Recommended Readings:
April 5
Speaker: John K. Pate
Assistant Professor, Linguistics, University at Buffalo
Abstract:
Children learn the syntax of their native language without explicit instruction. What learning biases do they rely on to accomplish this amazing feat? In this talk, I will discuss how probabilistic models from the field of natural language processing can be used to make proposed learning biases explicit, and also to compare different sets of learning biases in terms of their practical utility for learning syntax from strings of words. I will conclude by considering how to leverage recent advances in deep learning to build models that capture long-distance dependencies.
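One standard way to make a learning bias explicit in a probabilistic model is to encode it as a prior over the grammar's parameters. Below is a minimal sketch with an invented rule inventory and invented counts; real induction systems estimate such counts from strings of words with algorithms such as EM.

from collections import Counter

def posterior_rule_probs(rule_counts, all_rules, alpha):
    """Posterior-mean rule probabilities under a symmetric Dirichlet(alpha) prior.
    A small alpha encodes a bias toward sparse grammars: probability mass
    concentrates on the few rules actually observed."""
    total = sum(rule_counts.values()) + alpha * len(all_rules)
    return {r: (rule_counts.get(r, 0) + alpha) / total for r in all_rules}

rules = ["S->NP VP", "S->VP", "S->NP NP", "S->VP NP"]
observed = Counter({"S->NP VP": 8, "S->VP": 2})
print(posterior_rule_probs(observed, rules, alpha=0.1))   # strong sparsity bias
print(posterior_rule_probs(observed, rules, alpha=10.0))  # weak bias: pulled toward uniform

Comparing how well different priors support learning from the same data is one way to compare proposed biases in terms of their practical utility.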
Recommended Readings:
Pate, John K. and Mark Johnson (2016) Grammar induction from (lots of) words alone. In Proceedings of COLING.
April 12
Speaker: Nazbanou Nozari
Assistant Professor, Neurology and Cognitive Science, Johns Hopkins University
Abstract:
A large body of research has explored the nature of representations in the language production system. In comparison, little is known about how language production is monitored and regulated. I will argue that the production system is monitored and controlled in a fashion similar to other cognitive and motor systems, and support this claim with behavioral data from individuals with aphasia and typically developing children, as well as electrophysiological data from neurotypical adult speakers. At the same time, I will present evidence showing that monitoring and the ensuing control are based directly on the information within specific parts of a cognitive system. Thus, to the degree that such information is specific to a domain or sub-domain, so is the cognitive control that regulates that part of the system. In summary, I will endorse a monitoring-control model of language production that is domain-general in its computational principles and neural correlates, but operates specifically to meet the needs of the language production system.
Recommended Readings:
April 26
Speaker: Wayne Wu
Associate Professor and Associate Director of the Center for the Neural Basis of Cognition, Carnegie Mellon University
Abstract:
This talk discusses the nature of attention and its interaction with cognition. A theory of attention that conforms with experimental practice and coheres with multiple levels of analysis is presented. The theory is then applied to clarify empirical debates about the nature of consciousness (inattentional blindness and overflow) and the relation between cognition and perception.
Recommended Readings:
May 3
Speaker: Emily Morgan
Postdoctoral Researcher, Psychology, Tufts University
Abstract:
The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. In this talk I ask how and why language achieves a balance between these two systems - generative and item-specific - from both the synchronic and diachronic perspectives. Specifically, I focus on the case of binomial expressions of the form "X and Y", whose word order preferences (e.g. bread and butter/#butter and bread) are potentially determined by both generative and item-specific knowledge. I show that ordering preferences for these expressions indeed arise in part from violable generative constraints on the phonological, semantic, and lexical properties of the constituent words, but that expressions also have their own idiosyncratic preferences. I argue that both the way these preferences manifest diachronically and the way they are processed synchronically are constrained by the fact that speakers have finite experience with any given expression: in other words, the ability to learn and transmit idiosyncratic preferences for an expression is constrained by how frequently it is used. The finiteness of the input leads to a rational solution in which processing of these expressions relies gradiently upon both generative and item-specific knowledge as a function of expression frequency, with lower frequency items primarily recruiting generative knowledge and higher frequency items relying more upon item-specific knowledge. This gradient processing in turn combines with the bottleneck effect of cultural transmission to perpetuate across generations a frequency-dependent balance of compositionality and idiosyncrasy in the language, in which higher frequency expressions are gradiently more idiosyncratic. I provide evidence for this gradient, frequency-dependent trade-off of generativity and item-specificity in both language processing and language structure using behavioral experiments, corpus data, and computational modeling.
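The frequency-dependent trade-off described above can be schematized as a mixture whose weights depend on a speaker's experience with the expression. This is only a sketch of the idea, not the actual model from the talk; the weighting function, the constant k, and the scores are invented for illustration.

def order_preference(generative_score, item_specific_score, frequency, k=10.0):
    """Schematic trade-off: preference for one ordering of "X and Y" mixes
    generative constraints with item-specific knowledge, weighted by the
    speaker's experience (frequency) with the expression."""
    weight = frequency / (frequency + k)  # more experience -> more item-specific
    return weight * item_specific_score + (1 - weight) * generative_score

# Hypothetical scores in [0, 1] for preferring the attested order:
print(order_preference(0.7, 0.95, frequency=500))  # frequent idiom: ~0.95 (item-specific)
print(order_preference(0.7, 0.95, frequency=1))    # rare expression: ~0.72 (generative)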
Recommended Readings:
Morgan, Emily and Roger Levy (2016) Abstract knowledge versus direct experience in processing of binomial expressions. Cognition, 157:384-402.
September 6
Speaker: Sarah Brown-Schmidt
Associate Professor, Department of Psychology & Human Development, Vanderbilt University
Abstract:
In conversation, the discourse history, including past referents and how they were described, shapes future referential form and language use. While it is widely known that conversational partners form representations of the discourse history, the veracity and similarity of these representations among partners in conversation have not been widely explored. Through the study of referential form in conversation, combined with explicit measures of recognition memory for past referents, I show that partners are likely to walk away from a conversation with distinct memories for the contents, and in some cases the context, of the conversation. In general, speakers tend to remember what was said better than listeners do. Studies of conversational language use in persons with anterograde amnesia offer insights into the biological memory systems involved. The findings have implications for how common ground is formed in conversation, and suggest that there are limits on the degree to which interlocutors can achieve coordinated representations of the discourse history. More generally, this work demonstrates that the successful exchange of meaning in conversation involves imperfect, asymmetric representations of the jointly experienced past.
Recommended Readings:
September 20
Speaker: Ann Bisantz
Professor and Chair of Industrial and Systems Engineering, University at Buffalo
Abstract:
Health information systems have been advocated as a solution to the problems of errors and adverse events in health care. In emergency departments, electronic patient tracking systems are being implemented to replace the manual status boards (“whiteboards”) that are commonly used for managing clinical work. Manual status boards traditionally contain medical and logistical information about patients, as well as higher-level information regarding hospital state and team coordination (assignments of providers to patients; status of on-call providers). While electronic versions of the status boards may mimic the look and layout of manual boards, support automated record keeping and reporting, and allow information on the status board to be accessed at different locations in the hospital, they also impose new constraints on use, miss a critical opportunity to best support the work of healthcare providers, and introduce new failure modes with unanticipated consequences. Such new technologies are often designed without an in-depth understanding of the work they need to support, or are designed with a focus on administrative functions (e.g., record keeping; billing) rather than patient care functions. Without a careful understanding of how new technologies will be used in practice, or of the barriers to their use as expected, new technology can lead to unanticipated, undesirable consequences. This talk describes results from field studies, cognitive engineering analyses, and human-in-the-loop studies that can better inform the design of health IT for emergency medicine.
Recommended Readings:
September 27
Speaker: Lauren Calandruccio
Associate Professor, Department of Psychological Sciences, Case Western Reserve University
Abstract:
Speech understanding, though effortless for most listeners in many listening environments, can at times be quite daunting. People who have hearing loss, people who are non-native speakers of the language, and children tend to exhibit greater difficulty with speech understanding in noise than young, normal-hearing adults. Even people with normal hearing can struggle to understand speech in noise when the listening environment becomes very noisy and/or reverberant. The research in our lab is focused on creating complex listening tasks by using a speech-on-speech recognition paradigm in which listeners are asked to attend to the speech of one talker while other talkers are competing in the background. We are most interested in understanding why certain stimuli are more effective at masking the target speech than other stimuli, and how speech-on-speech recognition fits into the already well-established models of informational masking, which suggest that masker uncertainty and target-masker similarity increase informational masking. A main focus of our work has been on the contributions of linguistic meaning to speech-on-speech recognition. Background information on this topic will be presented, as well as data that explore the importance of different linguistic features of the competing speech that potentially contribute to informational masking.
Recommended Readings:
October 4
Speaker: Victor Kuperman
Associate Professor, Department of Linguistics and Languages, McMaster University
Abstract:
This project investigates the claim that information about the quality and stability of contexts is encoded in a word's mental representation in long-term lexical memory. In one experiment, we evaluated the influence of the average concreteness, valence (positivity), and arousal of the contexts in which a word occurs on response times in the lexical decision task, the word's age of acquisition, and word recognition memory performance. Using large corpora and norming mega-studies, we quantified the semantics of contexts for thousands of words and demonstrated that contextual factors were predictive of lexical representation and processing above and beyond the influence of the concreteness, valence, and arousal of the word itself. Our findings indicate that lexical representations are influenced not only by how diverse a word's contexts are, but also by the embodied experiences those contexts elicit. Another experiment presented readers with correctly spelled words that had more or less frequent homophonous misspelled counterparts in a corpus of unedited English (e.g., commit vs comit). In an eye-tracking sentence reading study, we demonstrated that correctly spelled words are identified with greater effort if they have more frequent orthographic alternatives. We explain these findings from the standpoint of theories of learning: the presence of an alternative spelling weakens the association between the meaning and the correct spelling. Our minds store the history of word learning, and are influenced both by the contexts in which a word occurs and by successful and unsuccessful cases of word identification.
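The contextual quantification described in the first experiment can be sketched as averaging normed ratings over each word's corpus contexts. A minimal Python illustration follows, with an invented toy corpus and invented valence norms; the actual study used large corpora and rating mega-studies.

from collections import defaultdict

def contextual_norms(sentences, norms, window=3):
    """For each word, average the norm ratings (e.g., valence) of the words
    it co-occurs with, quantifying the semantics of its typical contexts."""
    sums, counts = defaultdict(float), defaultdict(int)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j and words[j] in norms:
                    sums[w] += norms[words[j]]
                    counts[w] += 1
    return {w: sums[w] / counts[w] for w in sums}

# Invented valence norms on a 1-9 scale and a toy corpus:
norms = {"happy": 8.2, "party": 7.5, "funeral": 2.1, "grief": 1.8}
corpus = ["the happy crowd left the party", "quiet grief at the funeral"]
print(contextual_norms(corpus, norms))  # e.g., "crowd" inherits its positive contexts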
Recommended Readings:
October 25
Speaker: Eduardo Mercado III
Professor of Psychology, University at Buffalo
Abstract:
Whales are the only mammals that sing continuously for ten hours or more, and they do so loudly even when no listeners are nearby. They also change the songs they sing each year, and may have been doing so for millions of years. Why? Fifty years of field research have led most scientists to conclude that humpback whales sing for the same reason that birds do – to advertise their sexual fitness. But if humpback whale songs are nothing more than sonic peacock tails, then why are whales constantly changing their songs, and why are singers “spreading their tails” when no listeners are around? When the evidence is reconsidered in the light of modern advances in neuroscience and ocean acoustics, it points toward the surprising conclusion that whales are actually not singing, but are instead engaging in an activity more commonly associated with dolphins – echolocation. In this talk, I will argue that by incessantly streaming sounds while listening closely to the echoes generated by those sounds, whales actively retune their brains in ways that enable them to monitor the movements of other whales located miles away. If so, “singing” whales may possess the most plastic brains on the planet.
Recommended Readings:
November 29
Speaker: Tim Pruitt
Ph.D. candidate in Cognitive Psychology, Department of Psychology, University at Buffalo
Abstract:
Auditory imagery is a vehicle for the inner voice and inner ear, which is critical for internal thought, language, and learning. Previous anecdotal and empirical observations have documented that covert motor activity oftentimes accompanies auditory imagery. Such audio-motor associations within the linguistic domain have been well studied (Gathercole & Baddeley, 1993; Smith, Reisberg, & Wilson, 1994), but comparatively little work has explored subvocalization related to nonverbal information, such as music. Neuroimaging (Zatorre & Halpern, 2005) and behavioral research (Brodsky, Rubinstein, Ginsborg, & Henik, 2008) have shown that such covert motor processes are present during musical imagery. Thus, this presentation highlights my collaborators' and my efforts to further examine subvocalization related to musical imagery using surface electromyography (sEMG). Pruitt, Pfordresher, and Halpern (in prep) observed that the sternohyoid (musculus sternohyoideus) and upper lip (orbicularis oris) muscles were more active during the encoding of pitch in preparation for singing than during a similarly structured visual imagery task. Our subsequent experiments attempt to better characterize laryngeal and orofacial motor activity during overt singing, so as to clarify the role of these subvocalizations. Are they merely by-products of auditory imagery, or do they carry veridical properties? How does subvocalization of music relate to singing abilities? Relatedly, is the imitation of pitch affected by subvocalization during imagery? Answering such questions contributes to the understanding of sensorimotor translation in singing, which relies heavily on the coupling of audio-motor associations (Pfordresher, Halpern, & Greenspon, 2015).
Recommended Readings: