February 7
Speaker: Gualtiero Piccinini
Professor, Philosophy Department, University of Missouri – St. Louis
I discuss the connection between three theses that are sometimes conflated. Functionalism is the view that the mind is the functional organization of the brain. The Computational Theory of Mind (CTM) is the view that the whole mind—not only cognition but consciousness as well—has a computational explanation. When combined with the empirical discovery that the brain is the organ of the mind, CTM entails that the functional organization of the brain is computational. Computational functionalism is the conjunction of the two: the mind is the computational organization of the brain. Contrary to a common assumption, functionalism entails neither CTM nor computational functionalism. This finding makes room for an underexplored possibility: that consciousness be (at least partly) due to the functional organization of the brain without being computational in nature. This is a non-computational version of functionalism.
RECOMMENDED READING:
Piccinini, Gualtiero, Computation and the Function of Consciousness, Ms.
February 14
Speaker: Ehsan Hoque
Assistant Professor of Computer Science at the University of Rochester
Many people fear automation. They may see it as a potential job killer. They may also be concerned about what can be automated. Could we train a computer to teach us human skills? Should we? Artificial intelligence, when designed properly, can help people improve important social and cognitive skills. My research group has shown how automated systems can help people develop skills that improve performance in job interviews, public speaking, negotiations, working as part of a team, producing vowels during music training, end-of-life communication between oncologists and cancer patients, and even routine social interactions for people with Asperger's syndrome. In this talk, I will offer insights gained from our exploration of several questions: How are humans able to improve important social and cognitive skills with a computer? What aspect of the feedback helps the most? How can we design experiments to ensure that the skills generalize?
RECOMMENDED READING:
Mohammed (Ehsan) Hoque and Rosalind W. Picard (2014) "Rich Nonverbal Sensing Technology for Automated Social Skills Training", IEEE Computer, vol. 47, pp. 28-35.
February 21
Speaker: Jesse Snedeker
Professor, Department of Psychology, Harvard University
In ordinary conversation, we produce about 3 words per second. As these words fly by, we must identify them, incorporate them into our syntactic analysis of the sentence, and make inferences about the speaker's intentions. My work explores how these processes develop, using eye-tracking and EEG. We find that, by three or four years of age, children's language comprehension has the central features of the adult system: the child interprets the utterance incrementally as it is spoken, constructing abstract syntactic representations and using multiple sources of information to disambiguate words and structures. Children differ from adults in critical ways: they have difficulty overcoming interference and revising their misanalyses, and they are less adept at using top-down constraints. We find that high-functioning children with autism are strikingly similar to their age-matched peers but continue to have difficulties with interference until a later age. Our most recent work uses a naturalistic paradigm in which event-related potentials are recorded as children listen to a storybook.
RECOMMENDED READING:
Snedeker, J., & Huang, Y. T. (2016). Sentence Processing. In E. Bavin & L. Naigles (Eds.), The Handbook of Child Language, 2nd Edition (pp. 409-437). Cambridge University Press.
March 7
Speaker: Alison Hendricks
Assistant Professor in the Department of Communicative Disorders and Sciences at the University at Buffalo
Language acquisition is commonly considered a universal process in which all children fully acquire their native languages. Steven Pinker joked, "In general, language acquisition is a stubbornly robust process; from what we can tell there is virtually no way to prevent it from happening short of raising a child in a barrel" (Pinker, 1996, p. 29). In reality, not all children fully acquire their native language; children with Developmental Language Disorder (DLD) are a case in point. At the same time, the time course of typical language acquisition differs based on the language the child hears. Children who hear variable input, in which features are overtly marked or not depending on sociolinguistic and linguistic factors, may require additional time or input to acquire variable grammatical structures. However, little is known about how language ability and language input interact during acquisition. In this study, 89 first- and second-grade students, including children with DLD and typically developing (TD) peers, were tested on their production of three morphosyntactic features: regular past tense marking, third person singular –s, and regular plural marking. Children were grouped by whether they spoke Mainstream American English (MAE) or a nonmainstream American English dialect (NMAE). Comparing across dialect groups, TD-MAE students were more accurate on past tense and third person singular items than TD-NMAE peers. Comparing within dialect groups, the DLD-NMAE students were less accurate than TD-NMAE peers. The results underscore the importance of understanding how research on language variation and differences in language learning ability can broaden our understanding of language acquisition.
RECOMMENDED READING:
Miller, K. (2013). Variable Input: What Sarah reveals about non-agreeing don’t and theories of root infinitives. Language Acquisition, 20(4), 305–324.
April 4
Speaker: Ehsan T. Esfahani
Assistant Professor at the Department of Mechanical and Aerospace Engineering, University at Buffalo
We live in the ubiquitous presence of robotic systems that assist humans in their day-to-day activities. The current interaction between humans and autonomous agents, however, is still static, which is why their cooperation has thus far failed to meet expectations. Although robots (or autonomous agents in general) are used to elevate human performance, too much assistance from automation can lead to undesirable outcomes such as degraded situational awareness, over-reliance on the automation, and potential loss of skills. Therefore, an adaptive automation strategy is desired that adjusts the level of assistance provided to the operator so that it is commensurate with the operator's workload and skills. In this talk, Brain-Computer Interfaces (BCIs) are introduced as a novel modality for improving human-machine cooperation. The integration of this modality into different cyber-human applications, including robot-assisted surgery, robotic rehabilitation, and control of unmanned aerial vehicles, will be demonstrated.
RECOMMENDED READING:
Khurshid A. Guru, Ehsan T. Esfahani, Syed J. Raza, Rohit Bhat, Katy Wang, Yana Hammond, Gregory Wilding, James O. Peabody, and Ashirwad J. Chowriappa (2015) "Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff", BJU Int 115: 166–174
Amirhossein H. Memar and Ehsan T. Esfahani (2018) "Physiological Measures for Human Performance Analysis in Human-Robot Teamwork: Case of Tele-exploration", IEEE Access, vol. 6, pp. 3694-3705.
April 11
Speaker: Robert Van Valin
Professor, Linguistics Department at the University at Buffalo
The investigation of the interface of morphosyntax and prosody with discourse has become a major topic of interest in linguistics over the past few decades. It has been variously characterized as ‘information packaging’ or ‘common ground management’, labels which refer to the fact that information structure concerns how speakers structure their utterances in order to maximize their communicative impact based on their assessment of what their interlocutors know, believe and are willing to accept as given in the context. In this talk I will introduce the major information-structural concepts (e.g. topic, focus) and illustrate them with examples from English and other languages, paying special attention to the ways of expressing contrast in them.
RECOMMENDED READING:
Van Valin & LaPolla (1997), Syntax: structure, meaning and function, pp. 199-214.
September 19
Speaker: James Magnuson
Professor and Associate Director of the Connecticut Institute for the Brain and Cognitive Sciences, Psychological Sciences, University of Connecticut
One of the great unsolved challenges in the cognitive and neural sciences is understanding how human listeners achieve phonetic constancy (seemingly effortless perception of a speaker's intended consonants and vowels under typical conditions) despite a lack of invariant cues to speech sounds. Models (mathematical, neural network, or Bayesian) of human speech recognition have been essential tools in the development of theories over the last forty years. However, they have been little help in understanding phonetic constancy because most do not operate on real speech (they instead focus on mapping from a sequence of consonants and vowels to words in memory). The few models that work on real speech borrow elements from automatic speech recognition (ASR), but do not achieve high accuracy and are arguably too complex to provide much theoretical insight. Over the last two decades, however, advances in deep learning have revolutionized ASR, using neural networks that emerged from the same framework as those used in cognitive models. These models do not offer much guidance for human speech recognition because of their complexity. Our team asked whether we could borrow minimal elements from deep learning to construct a simple cognitive neural network that could work on real speech. The result is DeepListener, a neural network model trained on 1000 words produced by 10 talkers. It learns to map spectral slice inputs to sparse "pseudo-semantic" vectors via recurrent hidden units. The element we have borrowed from deep learning is to use "long short-term memory" (LSTM) nodes in the hidden layer. LSTM nodes have internal "gates" that allow nodes to become differentially sensitive to variable time scales. DeepListener achieves high accuracy and moderate generalization, and exhibits human-like over-time phonological competition. Analyses of hidden units – based on approaches used in human electrocorticography – reveal that the model learns a distributed phonological code to map speech to semantics. I will discuss the implications for cognitive and neural theories of human speech learning and processing.
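For concreteness, the sketch below illustrates the general shape of the architecture the abstract describes: spectral-slice inputs passed through an LSTM hidden layer and mapped, at every time step, to a sparse "pseudo-semantic" output vector. It is not the speaker's DeepListener code; the layer sizes, the 300-dimensional targets, and the sigmoid readout are illustrative assumptions.

```python
# A minimal sketch (assumed layer sizes and output dimensionality, not the
# speaker's DeepListener implementation) of an LSTM network mapping spectral
# slices to sparse pseudo-semantic vectors.
import torch
import torch.nn as nn

class SpectralToSemantics(nn.Module):
    def __init__(self, n_freq_bins=256, n_hidden=512, n_semantic=300):
        super().__init__()
        # LSTM units have internal gates that let them become differentially
        # sensitive to variable time scales in the input.
        self.lstm = nn.LSTM(input_size=n_freq_bins, hidden_size=n_hidden,
                            batch_first=True)
        self.readout = nn.Linear(n_hidden, n_semantic)

    def forward(self, spectral_slices):
        # spectral_slices: (batch, time, n_freq_bins), one spectral slice per step
        hidden, _ = self.lstm(spectral_slices)
        # Sigmoid readout approximates a sparse binary pseudo-semantic target;
        # because it is produced at every time step, lexical competition can be
        # read out as the word unfolds over time.
        return torch.sigmoid(self.readout(hidden))

model = SpectralToSemantics()
dummy_speech = torch.randn(8, 100, 256)      # 8 utterances, 100 spectral slices each
semantics_over_time = model(dummy_speech)    # shape: (8, 100, 300)
```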
October 10
Speaker: Matthew Paul
Assistant Professor, Psychology, University at Buffalo
Several neurodevelopmental disorders impact social function. The underlying neurobiology, however, is not yet understood. The neuropeptide vasopressin has long been known to regulate adult social behavior, but until recently its role in development has been largely overlooked. In this presentation, I will discuss experiments on the behavioral development of Brattleboro rats, which lack vasopressin due to a mutation in the vasopressin gene. These experiments address the impact of this lifelong disruption in vasopressin on the social behaviors of juvenile and adolescent rats (social play, ultrasonic vocalizations, huddling), how vasopressin influences these complex behaviors (e.g., motivation, arousal), and the neural pathways through which vasopressin regulates social development.
RECOMMENDED READING:
Schatz et al. (2018) "Investigation of social, affective, and locomotor behavior of adolescent Brattleboro rats reveals a link between vasopressin's actions on arousal and social behavior", Hormones and Behavior, 106, pp. 1-9.
October 24
Speaker: Fredrik (Frits) van Brenk
Postdoctoral Research Associate, Communicative Disorders and Sciences, University at Buffalo
The production of speech requires a complex organization, interaction, and execution of motoric, sensory, cognitive, and linguistic processes. An important topic of study over the years has been how unimpaired speakers and speakers with motor speech disorders are able to maintain control over the speech organs under the influence of subject- and task-specific constraints on speech production. Two relatively new techniques for analyzing speech movement stability (and its inverse, variability) are the spatiotemporal index and functional data analysis. They enable the analysis of temporal and spatial variability of time-varying speech properties extracted from the acoustic speech signal, potentially providing researchers and clinicians with new avenues for characterizing and quantifying speech impairment. In this talk I will evaluate the suitability of estimators of acoustic variability to distinguish speakers with dysarthria from healthy control participants, discuss the effects of varying linguistic, cognitive, and motor demands on variability, and discuss to what extent acoustic variability estimators relate to established clinical outcome measures and quantifiable details of treatment history. Based on the results of these analyses, I will consider the relevance of these estimators for speech motor control and their potential for clinical use.
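For readers unfamiliar with the measure, here is a minimal sketch of the spatiotemporal index as it is commonly computed in the speech motor control literature, not the speaker's own analysis pipeline: each repetition of an utterance is amplitude-normalized and linearly time-normalized, and the index is the sum of the across-repetition standard deviations at a fixed number of relative time points. The signals, record lengths, and the choice of 50 sample points are illustrative assumptions.

```python
# Illustrative computation of a spatiotemporal index (STI) over repeated
# productions of the same utterance; larger values indicate more variability.
import numpy as np

def spatiotemporal_index(records, n_points=50):
    """records: list of 1-D arrays, one time-varying signal per repetition."""
    normalized = []
    for r in records:
        r = (r - r.mean()) / r.std()                 # amplitude normalization (z-score)
        t_old = np.linspace(0.0, 1.0, len(r))
        t_new = np.linspace(0.0, 1.0, n_points)      # linear time normalization
        normalized.append(np.interp(t_new, t_old, r))
    normalized = np.stack(normalized)                # (n_repetitions, n_points)
    return normalized.std(axis=0, ddof=1).sum()      # sum of cross-repetition SDs

# Example: ten noisy repetitions of the same underlying contour,
# each with a slightly different duration.
rng = np.random.default_rng(0)
reps = []
for _ in range(10):
    n = int(rng.integers(150, 200))
    reps.append(np.sin(np.linspace(0, 2 * np.pi, n)) + rng.normal(0, 0.1, n))
print(spatiotemporal_index(reps))
```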
RECOMMENDED READING:
van Brenk, F., & Lowit, A. (2012). The relationship between acoustic indices of speech motor control variability and other measures of speech performance in dysarthria. Journal of Medical Speech Language Pathology, 20(4), 24-29.
October 31
Speaker: Huei-Yen (Winnie) Chen
Assistant Professor, Industrial and Systems Engineering, University at Buffalo
The majority of motor vehicle crashes are attributed to driver errors, such as distraction, drowsiness, excessive speed, and false assumptions about others' actions. Certain driver populations are at an elevated risk for vehicular crashes. For example, older drivers are affected by age-related declines in perceptual, cognitive, and motor abilities, while younger and in particular novice drivers tend to lack sufficient skills to recognize or anticipate road hazards. Understanding individual differences among drivers, such as age, driving experience, as well as personality and other social-psychological factors, can help identify ways of supporting safer driving. The talk will begin with an overview of my past research on understanding automobile drivers: modeling visual information sampling behaviour using self-paced visual occlusion, and investigating individual differences in susceptibility to involuntary and voluntary driver distractions. The second part of the talk will present some of my current research on feedback to drivers for mitigating unsafe driving behaviours: (a) a naturalistic driving study comparing the use of financial incentives with post-drive feedback to improve speed limit compliance, and (b) a simulator study that explored gamification in driver feedback for mitigating unsafe visual-manual distraction. The analysis of data collected using a variety of research methods – in particular, online surveys, driving simulator studies, and on-road and naturalistic driving studies – has necessitated the use of advanced analytic techniques, including structural equation modeling and time series analysis. Findings from these studies have implications for designing safer in-vehicle systems and effective feedback mechanisms tailored towards individual drivers.
RECOMMENDED READING:
Jeanne Y. Xie, Huei-Yen Winnie Chen, Birsen Donmez (2016) "Gaming to Safety: Exploring Feedback Gamification for Mitigating Driver Distraction", Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting.
November 14
Speaker: Irina Mikhalevich
Assistant Professor, Philosophy, Rochester Institute of Technology
Simplicity is typically regarded as a scientific virtue, despite being surprisingly difficult to define and even more difficult to defend qua virtue. Nevertheless, scientists often appeal to simplicity to justify inferential and methodological choices. For example, a common heuristic in comparative cognition advises choosing the 'simplest' (or least cognitively sophisticated) explanation of animal behavior whenever multiple explanatory hypotheses are equally empirically adequate. In this talk, I suggest that this 'simplicity heuristic' adversely affects a relatively new tool in experimental comparative cognition: namely, cognitive models. It does so, I argue, by directing intellectual resources into the development and refinement of putatively simple cognitive models at the expense of putatively more complex ones, which in turn directs experimenters to develop tests to rule out these 'simple' models. The result is a state of affairs wherein putatively simple models appear more successful than their more 'complex' rivals not because they are in fact epistemically superior, but because they have been the lucky recipients of the lion's share of intellectual resources. I suggest that moving toward a more quantitative science of animal minds is likely to improve the explanatory and predictive power of animal cognition research, but only if these models do not fall prey to existing biases such as the simplicity heuristic.
RECOMMENDED READING:
Mikhalevich, Irina (2015) "Experiment and Animal Minds: Why the Choice of the Null Hypothesis Matters" Philosophy of Science, 82.
November 28
Speaker: Julia Bruno
Presidential Scholar in Society and Neuroscience, Columbia University
The songs of zebra finches and other birds can be described as sequences of vocal gestures, and current research indicates that developmental song learning is a process of generating and linking together song elements, or “syllables.” However, zebra finch song also sounds highly rhythmic, and song is often associated with courtship dancing. To test the biological significance of song rhythm, we trained juvenile zebra finches to alter their songs by incorporating a new syllable that either fit or deviated slightly from the prior rhythm. Contrary to a purely sequential account of song learning, birds more readily acquired the new sequence when it fit within a pre-established rhythm. This result suggests that temporal patterns are learned, in addition to and perhaps separately from constituent syllable sequences. We further investigated how song rhythms and tempos may be learned and reused. We found that tutored song rhythm (defined as ratios of syllable onset-to-onset intervals) was copied, but not tempo (the intervals themselves). Nevertheless, we found a strong tendency toward conservation of birds’ own tempo across development, including in cases of incomplete sequence imitation. In other words, birds appear to learn new syllables within the ‘framework’ of their existing song tempo. Together, these findings raise the possibility that the developing songbird brain encodes song time structure independently of acoustic content.