February 27
Speaker: Ross Maddox
Assistant Professor, Departments of Biomedical Engineering and Neuroscience, University of Rochester
Both the visual and auditory systems provide spatial information. Because many objects can be both seen and heard, the brain combines information from each sensory modality to form a single estimate of an object’s location. When there is conflict between the cues from a single object, those cues are weighted by their reliability, which usually results in a visually driven bias of auditory localization, namely the “ventriloquist effect.” Numerous studies of crossmodal integration have described the phenomenon behaviorally, modeled it, and explored its neural underpinnings. The overwhelming majority of these studies have used absolute localization tasks, for which localization bias is the principal outcome, rather than discrimination tasks (i.e., relative localization), which measure spatial acuity. Studies from our lab suggest this represents a missed opportunity. We have used spatial discrimination to show effects of eye gaze on auditory spatial coding that are not revealed by absolute localization tasks. We have also used discrimination in tasks where auditory and visual stimuli are collocated, so no localization bias is expected. Finally, our preliminary results suggest that spatial discrimination tasks reveal abnormal auditory spatial processing in people with hemianopia (one-sided blindness due to a visual cortical lesion), where studies of auditory localization have suggested normal processing. Taken together, these findings demonstrate the benefits of diversity in experimental design.
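The abstract does not spell out the model, but the reliability weighting it refers to is standardly expressed as maximum-likelihood cue combination; the notation below is generic, not the speaker's:

\hat{x}_{AV} = w_A \hat{x}_A + w_V \hat{x}_V, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_A^2 + 1/\sigma_V^2}

where \hat{x}_A and \hat{x}_V are the unimodal location estimates and \sigma_A^2 and \sigma_V^2 their variances. Because visual estimates of azimuth are typically less variable than auditory ones, w_V > w_A, and the combined estimate is pulled toward the visual location, which is the ventriloquist effect.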
March 13
Speaker: Richard Samuels
Professor, Department of Philosophy, Ohio State University
In this talk, we sketch a new puzzle arising from an apparent tension between number concept acquisition and the meanings of number expressions, which we call ‘The Acquisition Puzzle’. It runs roughly as follows. It is widely accepted among developmental psychologists that children acquire cardinality concepts, i.e. the sorts of concepts employed when counting, well before acquiring basic arithmetic concepts, i.e. the sorts of concepts employed when doing basic arithmetic. On the other hand, it is widely accepted by semanticists that the meanings of number expressions employed in cardinality statements at least implicitly reference numbers, and that numbers are the sorts of things referenced by numerals in basic arithmetic statements. This seems to suggest that acquiring cardinality concepts requires prior possession of arithmetic concepts, just as e.g. acquiring the concept BACHELOR seemingly requires prior possession of the concept MALE. Thus, the Puzzle is this: How can children acquire cardinality concepts before basic arithmetic concepts if possessing the former presupposes prior possession of the latter? The purpose of the talk is to motivate the assumptions underlying The Acquisition Puzzle, as well as to consider certain seemingly plausible ways out. These include adopting (i) a non-referential semantics for numerals, (ii) a referential semantics for numerals according to which numbers are derived (via type-shifting) from the meanings of cardinals, (iii) nativism with respect to the Dedekind-Peano Axioms or the Approximate Number System, or (iv) a certain structuralist-inspired account on which children acquire both cardinality and basic arithmetic concepts via abstraction on the basis of “intransitive counting” (reciting the numerals in their usual order). Ultimately, though we would like option (iv) to work, none of these is without significant challenges.
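As a schematic rendering of the semantic assumption at issue (not the speaker's own formalism), a cardinality statement such as “There are three cats” is commonly analyzed as implicitly referring to a number:

|\{x : \mathrm{cat}(x)\}| = 3

so that the numeral in the cardinality statement picks out the same object, the number 3, that it picks out in an arithmetic statement such as 3 + 2 = 5. If that analysis is right, grasping the cardinality statement would seem to presuppose an arithmetic-style concept of 3, and that is what generates the puzzle.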
April 3
Speaker: Tom Covey
Assistant Professor, Department of Neurology, University at Buffalo
Working memory is a core cognitive ability that enables the short-term maintenance and manipulation of stimulus information for the purpose of completing task-specific goals. There is evidence that working memory performance can be improved with targeted training, and it is possible that this training may promote benefits in other cognitive domains. To date, studies that have examined the efficacy of working memory training for producing improved performance on untrained tasks (i.e., transfer effects) have yielded mixed findings. Questions also remain regarding the specific neural processes associated with working memory training gains, and regarding the mechanisms that can account for transfer of these gains to other tasks. Here, I will present findings indicating that (1) working memory training enhances neural activity associated with executive control processes, (2) cognitive training that targets selective attention (not engaging memory) can also induce changes in neural activity, but via processes that are distinct from those targeted by working memory training, and (3) the enhancement of neural activity associated with executive control and improvements in cognitive outcomes are observed in patients with Multiple Sclerosis (a neurodegenerative disorder) who undergo working memory training. Neurophysiological indices of brain activity obtained during cognitive tasks (EEG, event-related potentials) will be a primary focus of this talk. The implications of these findings and future research directions will be discussed.
SUGGESTED READING
Covey, Thomas J., Janet L. Shucard, and David W. Shucard (2019). Working memory training and perceptual discrimination training impact overlapping and distinct neurocognitive processes: Evidence from event-related potentials and transfer of training gains. Cognition, 182, 50-72.
April 24
Speaker: Christopher Kanan
Assistant Professor, Carlson Center for Imaging Science at the Rochester Institute of Technology
Deep learning has been tremendously successful, especially for solving problems in natural language understanding and computer vision. This is a great achievement, but artificial intelligence (AI) still has a long way to go toward achieving the versatility of humans. After training on large amounts of data, conventional deep neural networks are frozen in time. Unlike humans, they cannot easily continue to learn more information. If this is attempted, then conventional models will suffer from catastrophic forgetting, resulting in the loss of previously acquired skills. In this talk, I review hippocampal replay during sleep in mammals and its hypothesized role in enabling generalization in cortex. I then describe two deep neural network models, developed by my lab, that are inspired by the dual-memory architecture of the mammalian brain. These models use replay mechanisms to avoid catastrophic forgetting, and they achieve state-of-the-art results on both image and audio classification datasets.
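The abstract does not describe the models' internals, so the sketch below is only a generic illustration of the replay idea it mentions: a small reservoir of examples from earlier tasks is interleaved with each new batch so that previously learned skills keep contributing to the gradient and are not overwritten. The PyTorch framing and all names here are assumptions for illustration, not the speaker's implementation.

import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-size reservoir of (input, label) pairs from earlier training."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps an approximately uniform sample of all
        # examples seen so far, regardless of how long training runs.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, k):
        k = min(k, len(self.data))
        xs, ys = zip(*random.sample(self.data, k))
        return torch.stack(xs), torch.stack(ys)


def train_step(model, optimizer, x_new, y_new, buffer, replay_batch=32):
    """One update that mixes new-task data with replayed old examples."""
    x, y = x_new, y_new
    if buffer.data:
        x_old, y_old = buffer.sample(replay_batch)
        x = torch.cat([x_new, x_old])
        y = torch.cat([y_new, y_old])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Store the new examples so later tasks can replay them.
    for xi, yi in zip(x_new, y_new):
        buffer.add(xi.detach(), yi.detach())
    return loss.item()

Raw-example replay of this kind is only one option; the brain-inspired dual-memory models described in the talk use more structured replay mechanisms that this sketch does not attempt to reproduce.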
SUGGESTED READING
German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, & Stefan Wermter (2019). "Continual Lifelong Learning with Neural Networks: A Review." Neural Networks, 113, 54–71.
May 8
Speaker: Robert Goldstone
Professor, Department of Psychological and Brain Sciences, Indiana University
By one account, formal thought in mathematics and science requires developing deep construals that run counter to perception. This approach draws an opposition between superficial perception and principled understanding. In this talk, I advocate the converse strategy of grounding scientific and mathematical reasoning in perception and action. Relatively sophisticated reasoning is typically achieved not by ignoring perception, but rather by adapting perception and action routines so as to conform with and support formally sanctioned responses. Perception and action are more sophisticated than usually thought, particularly because they can be adapted to do the (cognitive) Right Thing. The first case study for this thesis concerns arithmetic and algebraic reasoning, where we find that mathematical proficiency involves executing spatially explicit transformations to notational elements. People learn to attend to mathematical operations in the order in which they should be executed, and the extent to which students employ their perceptual attention in this manner is positively correlated with their mathematical experience. People also produce mathematical notations that they are good at reading. Based on observations like these, we have begun to design, implement, and assess virtual, interactive sandboxes for students to explore algebra. The second case study involves students learning about science by exploring simulations. We have developed a computational model of the process by which human learners discover patterns in natural phenomena. Our approach to modeling how people learn about a system by interacting with it follows three core design principles: 1) perceptual grounding, 2) experimental intervention, and 3) cognitively plausible heuristics for determining relations between simulation elements. In contrast to the majority of existing models of scientific discovery in which inputs are presented as symbolic, often numerically quantified, structured representations, our model takes as input perceptually grounded, spatio-temporal movies of simulated natural phenomena.
SUGGESTED READING:
Goldstone, R. L., Marghetis, T., Weitnauer, E., Ottmar, E. R., & Landy, D. (2017). Adapting perception, action, and technology for mathematical reasoning. Current Directions in Psychological Science, 1-8.
May 9, 3:30pm
Student Center auditorium
Distinguished Speaker: Robert Goldstone
Professor, Department of Psychological and Brain Sciences, Indiana University
We have developed internet-enabled experimental platforms to explore group patterns that emerge when people attempt to solve simple problems while taking advantage of the developing solutions of other people in their social network. I will describe imitation and innovation in low-dimensional, spatial environments, and then extend this approach to high-dimensional and more abstract solution spaces. Our human experiments and computer simulations show that there is a systematic relation between the difficulty of a problem search space and the optimal social network for transmitting solutions. People tend to imitate high-scoring solutions, prevalent solutions, solutions that become increasingly prevalent, and solutions similar to one’s own solution. In a real-world extension of this work, we study how parents in the United States name their babies. Using a historical database of the names given to children over the last century in the United States, we find that naming choices are influenced both by the frequency of a name in the general population and, increasingly, by its “momentum” in the recent past. More broadly, we consider collective patterns of diversity, problem space coverage, and group performance that arise when people interact – patterns that group members often do not understand or even perceive.
Mini-Symposium - “Learning about sounds: Interdisciplinary approaches”
June 5, 2019, 7:30 pm, Park Hall 280
"Humans learn to use and appreciate a wide range of complex sounds throughout their lives, including sounds produced by artifacts (musical instruments) and other animals. The mechanisms that enable this kind of flexible production and perception of sound remain poorly understood. Technological, experimental, and theoretical advances are revealing new insights on the nature of auditory learning and plasticity. This mini-symposium highlights recent research from multiple perspectives on learning processes relevant to understanding vocal and auditory behavior."
Speakers
James Mantell - Assistant Professor, Department of Psychology, St. Mary’s College of Maryland
Jennifer Schneider - Associate Professor, Department of Social Sciences, LCC International University
Matthew Wisniewski - Assistant Professor, Department of Psychological Sciences, Kansas State University
Cynthia Henderson - Department of Psychology, Stanford University
Event sponsored by:
William K. and Katherine W. Estes Fund
Department of Psychology Dr. Donahue Tremaine Memorial Lecture Fund
Summer 2019 Short Course
Methods for Analyzing Sound and Modeling Auditory Plasticity (MASMAP)
-- June 5-7, 2019 --
Description
Perceptual processes have been a central focus of computational models of neural and cognitive mechanisms for much of the past century. Often, these models serve to prove the feasibility of proposed mechanisms rather than to simulate actual situations faced by organisms. For instance, models of speech processing may assume that listeners are working with pristine representations of received words rather than natural speech in noisy conditions. One way to increase the ecological validity of perceptual models is to represent actual inputs in biologically plausible ways rather than using idealized inputs. This requires transforming recorded signals into representations that reflect the sensory and perceptual sensitivities of receivers prior to analyzing the patterns within those representations. Providing psychology researchers with the computational skills necessary to implement biologically based signal transformations in combination with simulations of perceptual processing can move the field closer to realistic theories of perception and cognition.
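The course description stays at this general level; purely as an assumed illustration (not the course materials) of moving from a raw recording to a receiver-oriented representation before analysis, the sketch below computes a log-compressed time-frequency representation with NumPy and SciPy. A cochlear or gammatone filterbank model would be a more biologically faithful substitute for the plain spectrogram used here, and the function and parameter names are invented for the example.

import numpy as np
from scipy.signal import spectrogram


def auditory_like_representation(waveform, fs, n_fft=1024, hop=256):
    """Rough receiver-oriented view of a recorded signal: a short-time power
    spectrum with compressive (log) amplitude scaling, standing in for the
    filterbank and compression stages a cochlear model would apply."""
    f, t, sxx = spectrogram(waveform, fs=fs, nperseg=n_fft,
                            noverlap=n_fft - hop, mode='psd')
    sxx = np.maximum(sxx, 1e-12)          # avoid log(0)
    return f, t, 10.0 * np.log10(sxx)     # dB-scaled time-frequency image


# Example: transform one second of a synthetic rising sweep before analysis.
fs = 22050
t = np.arange(fs) / fs
sweep = np.sin(2 * np.pi * (500 + 1500 * t) * t)
freqs, times, rep = auditory_like_representation(sweep, fs)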
Applications for financial support to attend MASMAP should be sent, along with a CV and a letter of support, to cperazio@buffalo.edu.
September 18
Speaker: Kelly Radziwon
Research Assistant Professor, Department of Communicative Disorders and Sciences, University at Buffalo
Hyperacusis is an auditory perceptual disorder in which everyday sounds are perceived as uncomfortably or excruciatingly loud, resulting in abnormally low loudness discomfort levels (LDLs). Although approximately 6 percent of the general population suffers from debilitating hyperacusis, relatively little is known about the biological bases of this loudness intolerance problem, largely due to the lack of reliable and valid animal behavioral models. Recently, our lab has developed two behavioral paradigms that can reliably detect hyperacusis in rats. One model uses operant conditioning techniques to accurately assess loudness growth in animals, while the other uses a modified place preference apparatus to assess sound avoidance behavior in rats. These animal models provide the basis for further studies examining the physiological and biological correlates of this disorder.
RECOMMENDED READING:
Kelly Radziwon, David Holfoth, Julia Lindner, Zoe Kaier-Green, Rachael Bowler, Maxwell Urban, Richard Salvi (2017) "Salicylate-induced hyperacusis in rats: Dose- and frequency-dependent effects", Hearing Research, Volume 350, 133-138.
October 9
Speaker: Christopher Heffner
Postdoctoral Fellow, Department of Speech, Language, and Hearing Sciences, University of Connecticut
Speech understanding requires constant adjustment. This can be seen most radically when encountering new speech sound tokens in a second language, where categorizing non-native sounds requires learning. Yet adjustment to phonetic categories can be observed even within a native language, where tolerating variation between talkers in speech rate or in accent requires adaptation. Though non-native and native speech perception share similarities, it is unclear whether they rely on common cognitive resources. In the present study, we explore behavioral individual differences in each of these processes. We focus first on adaptation to speech rate, finding that individual differences in rate adaptation are challenging to characterize. Next, we examine accent adaptation and both explicit and incidental learning of phonetic categories. In the end, we find interrelationships in performance on three of our measures of phonetic plasticity: explicit learning, accent adaptation, and rate adaptation. Better learners of non-native tokens in explicit tasks were also better adapters. This suggests that learning and adaptation may depend on shared behavioral plasticity. MRI data analysis indicates that these shared behavioral measures may be supported by areas of the brain largely not associated with language.
RECOMMENDED READING:
Heffner, C.C., Idsardi, W.J. & Newman, R.S. (2019) "Constraints on learning disjunctive, unidimensional auditory and phonetic categories" Attention, Perception, & Psychophysics, 81, 958.
October 23
Speaker: Minghui Zheng
Assistant Professor, Department of Mechanical and Aerospace Engineering, University at Buffalo
Robots are usually programmed for repetitive tasks in a structured environment, or for particular tasks in a less structured environment with a considerable amount of hand-crafted tuning. Whenever a new robot with different dynamics or a new type of task is presented, the well-designed planning and control algorithms usually have to be re-derived by researchers, with modifications to both hardware and software, to guarantee the required performance, and the actions the robot takes have to be re-programmed and debugged by engineers. It remains very challenging to directly program a robot to automatically perform new tasks and/or adjust to new dynamics with guaranteed accuracy and robustness to uncertainties. Toward overcoming this limitation, this talk discusses our recent work on a learning framework that reasons about error using physics, enabling a robot to learn effectively and efficiently from the experiences gathered by other robots. With this learning framework, the robot will learn three skills that are fundamentally critical in performing complex tasks: (1) how to sense and respond to environmental uncertainties, (2) how to generate a dynamically feasible trajectory, and (3) how to collaborate with humans. This learning framework stands at the intersection of data-driven and model-based methodologies and is expected to offer promising data utilization efficiency and learning reliability.
October 30
Speaker: Yunjeong Chang
Assistant Professor, Department of Learning and Instruction, University at Buffalo
Increasing interest in student-centered learning has become evident across levels of education (e.g., K-12, higher education, and professional education). Rooted in the epistemological view that individuals learn from experience and construct their own learning, socio-constructivism extends the social aspects of student-centered learning, emphasizing others’ roles in helping individuals reach the point of independent functioning. The use of technology increasingly assumes that learners take individual and group responsibility for what, how, where, and when they learn. Some researchers have questioned the validity and wisdom of student-centered learning practices. While the reported benefits of individual and group-based experiential learning (e.g., problem-based, case-based, and project-based learning) have been established, few studies have empirically investigated their relative benefits across students of mixed abilities and backgrounds. In this talk, I will share findings from my ongoing studies on equitably engaging cognitively and culturally diverse learners in student-centered collaborative learning environments. Situated in postsecondary teaching and learning environments and drawing on sociocultural and social-cognitive perspectives, I will present how educators can design and develop equitably engaging student-centered online, hybrid, and face-to-face learning environments informed by cognitive learning theories.
RECOMMENDED READINGS
Chang, Y. & Brickman, P. (2018). When Group Work Doesn’t Work: Insights from Students. CBE—Life Sciences Education, 17(3), ar52.
Chang, Y., Cintron, L., Cohoon, J. P., & Tychonievich, L. (2018, February). Diversity-focused Online Professional Development for Community College Computing Faculty: Participant Motivations and Perceptions. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (pp. 783-788). ACM.
November 6
Speaker: Malini Suchak
Assistant Professor, Department of Animal Behavior, Ecology, and Conservation, Canisius College
Although observations of cooperation in wild chimpanzees are relatively widespread, many of the mechanisms responsible for initiating and maintaining cooperative relationships are unknown. Indeed, some researchers have speculated that rather than truly cooperating, wild chimpanzees may be acting individually in pursuit of a common goal. Captivity provides an ideal setting for investigating these questions; however, many captive studies have failed to find cooperation, or have found it only in severely constrained settings in which cooperative partnerships are “engineered.” We argue that these constraints limit the natural conflict resolution mechanisms chimpanzees might use to mitigate competition in favor of cooperation; further, limited partnerships provide less opportunity to learn about cooperative relationships and to choose specific partners. We conducted a series of group-wide cooperation experiments with two groups of captive chimpanzees examining the roles of partner choice, competition, and costly outcomes in initiating and maintaining cooperation. We found that under these complex circumstances the chimpanzees quite readily establish cooperation, develop varying expertise, are sensitive to outcomes and risk when investing in cooperative acts, and use a variety of natural conflict resolution mechanisms to mitigate competition. Chimpanzees appear to quite readily exhibit a number of interesting mechanisms to maintain cooperation, but only if given the chance to express their behavioral repertoire to the fullest possible extent.
November 13
Speaker: Marie Coppola
Associate Professor, Department of Psychological Sciences, University of Connecticut
In the last few decades many studies have demonstrated striking parallels between signed and spoken languages in terms of their grammatical structure, acquisition, processing, and neural underpinnings. This talk focuses on two domains that have not been previously studied in both modalities: prosodic processing and number cognition. The Closure Positive Shift (CPS) is an established ERP component observed in response to processing prosodic phrase boundaries, which signal the internal structure of the message. In spoken languages, the CPS has been elicited by sound-based correlates of prosodic phrasing, such as slower speech before a boundary, or shorter or longer pauses between phrases. Is sound necessary for this type of linguistic processing? Using novel, carefully controlled stimuli in American Sign Language (ASL) and spoken English, we observed a CPS at comparable phrase boundaries in both Deaf native ASL signers and hearing native English speakers. These results challenge the claim that the CPS depends on sound, and suggest similarity in prosodic processing across signed and spoken languages.
The second part of the talk addresses number cognition. Deaf children who learn ASL from their Deaf parents from birth show the same trajectory of number concept development as do hearing children who learn spoken English from their hearing parents from birth (Early Language groups). However, on average, deaf children who begin acquiring ASL or spoken English at some point after birth (Later Language groups) perform more poorly on number concept tasks. These findings may in part explain the significant “math gap” long observed between deaf children and their hearing peers.
Thus, the timing of language exposure is important for the development of age-appropriate number concepts, but the modality of language is not. These results, taken together, contribute novel developmental and neurolinguistic evidence to the growing body of knowledge regarding language modality, sensitive periods, and neural plasticity.