1 February 2012
Princeton Neuroscience Institute and Department of Psychology, Princeton University
Hierarchical reinforcement learning
ABSTRACT:
Research on human and animal behavior has long emphasized its hierarchical structure, according to which tasks are composed of subtask sequences, which are themselves built of simple actions. The hierarchical structure of behavior has also been of enduring interest within neuroscience, where it has been widely considered to reflect prefrontal cortical functions. In recent work, we have been reexamining behavioral hierarchy and its neural substrates from the point of view of recent developments in computational reinforcement learning. Specifically, we've been considering a set of approaches known collectively as hierarchical reinforcement learning, which extend the reinforcement learning paradigm by allowing the learning agent to aggregate actions into reusable subroutines or skills. A close look at the components of hierarchical reinforcement learning suggests how they might map onto neural structures, in particular regions within the dorsolateral and orbital prefrontal cortex. It also suggests specific ways in which hierarchical reinforcement learning might provide a complement to existing psychological models of hierarchically structured behavior. A particularly important question that hierarchical reinforcement learning brings to the fore is that of how learning identifies new action routines that are likely to provide useful building blocks in solving a wide range of future problems. Here and at many other points, hierarchical reinforcement learning offers an appealing framework for investigating the computational and neural underpinnings of hierarchically structured behavior. In addition to introducing the theoretical framework, I'll describe a first set of neuroimaging and behavioral studies, in which we have begun to test specific predictions.
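To make the idea of aggregating actions into reusable subroutines concrete, the minimal Python sketch below shows the standard "options" formulation of hierarchical reinforcement learning: tabular Q-learning in which the agent chooses among temporally extended options rather than primitive actions. It is illustrative only and does not reproduce the speaker's models; the environment interface (step, reset, is_terminal) and all parameter values are assumptions.

import random

class Option:
    """A reusable subroutine: an internal policy plus a termination test."""
    def __init__(self, policy, is_done):
        self.policy = policy      # maps state -> primitive action
        self.is_done = is_done    # maps state -> True when the subroutine should terminate

def run_option(env, state, option, gamma=0.99):
    """Execute an option until it terminates; return the discounted reward collected,
    the resulting state, and the number of elapsed steps (an SMDP-style transition)."""
    total, discount, steps = 0.0, 1.0, 0
    while not option.is_done(state):
        action = option.policy(state)
        state, reward = env.step(state, action)   # hypothetical environment interface
        total += discount * reward
        discount *= gamma
        steps += 1
    return total, state, steps

def q_learning_over_options(env, options, states, episodes=100, alpha=0.1,
                            gamma=0.99, epsilon=0.1):
    """Tabular Q-learning in which the agent selects among temporally extended options."""
    Q = {(s, i): 0.0 for s in states for i in range(len(options))}
    for _ in range(episodes):
        s = env.reset()
        while not env.is_terminal(s):
            if random.random() < epsilon:                       # epsilon-greedy over options
                i = random.randrange(len(options))
            else:
                i = max(range(len(options)), key=lambda j: Q[(s, j)])
            r, s2, k = run_option(env, s, options[i], gamma)
            future = 0.0 if env.is_terminal(s2) else max(Q[(s2, j)] for j in range(len(options)))
            # SMDP backup: the successor value is discounted by gamma**k, the option's duration
            Q[(s, i)] += alpha * (r + gamma ** k * future - Q[(s, i)])
            s = s2
    return Q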
RECOMMENDED READING:
15 February 2012
University of Maryland
Department of Linguistics
Discontinuities in Syntactic Development
ABSTRACT:
When children acquire a given piece of linguistic knowledge, they must also learn to tie this knowledge to deployment systems that enable robust speech production and comprehension in real time. This talk will explore the degree to which these acquisition processes are interleaved and the degree to which development in each system impacts development in the other. Through two case studies, we will examine the ways that an immature deployment system impacts both understanding and learning. This work helps to clarify the independent contributions of statistical-distributional analysis and constraints on structural representations in the acquisition of syntax.
RECOMMENDED READING:
22 February 2012
Business Meeting (PARK 208)
29 February – 1 March
Distinguished Speaker
Center for Cognitive Science
and Department of Psychology Donald Tremaine Fund
Susan Carey
Department of Psychology
Harvard University
UB Center for Cognitive Science Colloquium
Wednesday, 29 February 2012, 2:00 P.M.
280 Park Hall
The Origin of Concepts: A Case Study of Natural Number
ABSTRACT:
A theory of conceptual development must specify the innate representational primitives, must characterize the ways in which the initial state differs from the adult state, and must characterize the processes through which one is transformed into the other. I will defend three claims: 1) With respect to the initial state, the innate stock of primitives is not limited to sensory, perceptual, or sensory-motor representations; rather, there are also innate conceptual representations. 2) With respect to developmental change, conceptual development consists of episodes of qualitative change, resulting in systems of representation that are more powerful than, and sometimes incommensurable with, those from which they are built. 3) With respect to a learning mechanism that achieves conceptual discontinuity, I offer Quinian bootstrapping.
RECOMMENDED READING:
UB Center for Cognitive Science Distinguished Speaker Lecture
Thursday, 1 March 2012, 2:00 P.M.
Student Union Theater
(Room 106 if entering from ground floor; Room 201 if entering from 2nd floor)
Infants' Rich Representations of the Social World
ABSTRACT:
In this talk I present a case study of infants' representations of people, intentional agency, social relations, and morality to illustrate the methods and arguments in favor of claims that human infants are endowed with rich innate representational resources.
7 March 2012
School of Electrical and Computer Engineering
Purdue University
Mediating Cross-Modal Perception, Motor Control, Language, and Reasoning with Common and Deep Semantic Representations
ABSTRACT:
The components of human intelligence are tightly intertwined. Multiple perceptual modalities, like vision and audition, and multiple motor modalities, like manipulation and locomotion, inform, influence, and mediate each other through multiple thought modalities, like language and reasoning. Yet most computer intelligence research is compartmentalized into disjoint fields like computer vision, robotics, natural language, and AI. Understanding human intelligence and emulating it computationally will require common semantic representations across all these modalities. I will present our concerted effort to develop just that: common representations that allow rich and deep semantic interaction between computer vision, robotics, and natural language. I will give an overview of three projects that exemplify this effort: sentential description of video, assembly imitation, and learning to play board games.
In the first project, short video clips depicting people performing actions are processed to produce descriptions of the observed actions as common English verbs, descriptions of the participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, the spatial relations between those participants as prepositional phrases, and characteristics of the action as adverbial modifiers. In the second, one robot builds an assembly out of Lincoln Logs while a second robot observes the assembled structure and describes it in English to a third robot, which must replicate that structure from the English description. In the third, two robots play a board game, driven by an English description of the rules, while a third robot observes the play and must infer the rules from visual observation and use the learned rules to drive robotic play.
Joint work with Andrei Barbu, Daniel Barrett, Alexander Bridge, Ryan Buffington, Zachary Burchill, Tommy Chang, Dan Coroian, Sven Dickinson, Sanja Fidler, Seongwoon Ko, Alex Levinshtein, Aaron Michaux, Sam Mussman, Siddharth Narayanaswamy, Dhaval Salvi, Lara Schmidt, Jiangnan Shangguan, Brian Thomas, Jarrell Waggoner, Song Wang, Jinliang Wei, Yifan Yin, Haonan Yu, and Zhiqi Zhang.
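As a purely illustrative sketch of the kind of mapping the first project describes, the toy Python code below assembles a detected action, its participants, their properties, and a spatial relation into a simple English sentence. The data structures, the example detections, and the naive rendering are invented for exposition and bear no relation to the actual system's grammar or vision front end.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Participant:
    noun: str                                             # head noun for an object track
    adjectives: List[str] = field(default_factory=list)   # detected object properties
    def phrase(self) -> str:
        return "the " + " ".join(self.adjectives + [self.noun])

@dataclass
class EventDescription:
    verb: str                            # inflected English verb for the detected action
    agent: Participant
    patient: Optional[Participant] = None
    adverb: Optional[str] = None         # manner of the action
    preposition: Optional[str] = None    # spatial relation between participants
    location: Optional[Participant] = None

def render(event: EventDescription) -> str:
    """Assemble the detected components into a simple English sentence."""
    parts = [event.agent.phrase(), event.verb]
    if event.patient:
        parts.append(event.patient.phrase())
    if event.adverb:
        parts.append(event.adverb)
    if event.preposition and event.location:
        parts.extend([event.preposition, event.location.phrase()])
    return " ".join(parts).capitalize() + "."

# Example output from a hypothetical vision front end:
event = EventDescription(verb="carries",
                         agent=Participant("person", ["tall"]),
                         patient=Participant("chair", ["red"]),
                         adverb="quickly",
                         preposition="towards",
                         location=Participant("table"))
print(render(event))   # The tall person carries the red chair quickly towards the table.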
RECOMMENDED READING:
14 March 2012
Spring break
21 March 2012
Department of Philosophy, University of Pennsylvania
The Center for Neuroscience and Society, the Center for Cognitive Neuroscience, and the Institute for Research in Cognitive Science
Concepts: A Pragmatist Theory
ABSTRACT:
In this talk I devise a theory of the nature of concepts that is descended from the "conceptual atomist" position (associated with Jerry Fodor) but which adds a much needed psychological dimension to the atomist's view. I discuss the relation between this account of concepts and positions on concepts in psychology, such as the prototype theory, and I relate my view of concepts to work in cognitive science in the connectionist tradition.
RECOMMENDED READING:
28 March 2012
UB Department of Computer Science and Engineering
Affiliated Faculty, Department of Philosophy
Affiliated Faculty, Department of Linguistics
and Center for Cognitive Science
Toward Deep Understanding of Short Intelligence Messages
ABSTRACT:
We have been developing a software system to analyze short English messages presumably written by human informants or human intelligence gatherers, and represent the information in the messages in a graph-based knowledge representation formalism. The graphs from multiple messages will later be combined, and used to provide information to intelligence analysts. Our system uses: GATE, the General Architecture for Text Engineering, for natural language processing tools such as tokenizing, part-of-speech tagging, named-entity recognition, and anaphora resolution; the Stanford Dependency Parser; a graphical editor for human intra-message reference resolution; Cyc to provide additional ontological information; and a syntax-semantics mapper of our design to convert a graph of syntactic information into a propositional graph of semantic/conceptual information based on the FrameNet system of deep lexical semantics and represented using the SNePS knowledge representation and reasoning system. In this technology-oriented talk, I will describe the various parts of this system, and show how an English message gets transformed into a SNePS propositional graph representing the information in the message.
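As a toy illustration of the final step described above (not the actual GATE, Stanford parser, FrameNet, or SNePS machinery), the Python sketch below converts a small dependency analysis into a propositional graph whose nodes are frame instances with labeled roles. The example sentence, the dependency triples, and the mini frame lexicon are all invented for exposition.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Dependency triples for "Ali delivered the package to Omar":
# (head, relation, dependent), with words standing in for token indices.
DEPENDENCIES: List[Tuple[str, str, str]] = [
    ("delivered", "nsubj", "Ali"),
    ("delivered", "dobj", "package"),
    ("delivered", "nmod:to", "Omar"),
]

# Invented mini frame lexicon: verb -> (frame name, syntactic relation -> frame role).
FRAME_LEXICON: Dict[str, Tuple[str, Dict[str, str]]] = {
    "delivered": ("Delivery", {"nsubj": "Deliverer", "dobj": "Theme", "nmod:to": "Recipient"}),
}

@dataclass
class Proposition:
    frame: str
    roles: Dict[str, str] = field(default_factory=dict)

def map_to_propositions(deps: List[Tuple[str, str, str]]) -> List[Proposition]:
    """Group dependents under their head verb and relabel syntactic relations as frame roles."""
    props: Dict[str, Proposition] = {}
    for head, rel, dep in deps:
        if head not in FRAME_LEXICON:
            continue
        frame, role_map = FRAME_LEXICON[head]
        prop = props.setdefault(head, Proposition(frame))
        if rel in role_map:
            prop.roles[role_map[rel]] = dep
    return list(props.values())

for p in map_to_propositions(DEPENDENCIES):
    print(p.frame, p.roles)
# Delivery {'Deliverer': 'Ali', 'Theme': 'package', 'Recipient': 'Omar'}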
RECOMMENDED READING:
4 April 2012
Department of Communicative Sciences and Disorders
NYU Steinhardt School of Culture, Education, and Human Development
Semantic feature processing in stroke-aphasia: Preliminary findings
ABSTRACT:
There is growing evidence that the cognitive and neuroanatomical representations of object concepts emerge from their underlying semantic feature structure. Much of this evidence has been obtained from work with neuropsychological patient populations, such as those presenting with degenerative disorders like semantic dementia, or with chronic brain damage caused by disease processes like herpes simplex encephalitis. Less is known about the nature of semantic feature processing, and impairment thereof, in patients with language impairment (i.e., aphasia) consequent to stroke. This is a surprising gap in the literature, considering that many of the behavioral treatments for the word-retrieval deficits observed in these patients are predicated on semantic feature cueing. In this talk, we will discuss an assessment designed to examine responsiveness to different types of semantic feature cues, and their relationship with different categories of object concepts, in patients with stroke-aphasia. Preliminary findings will also be explored.
RECOMMENDED READING:
11 April 2012
Department of Modern Languages and Cultures
Rochester Institute of Technology
Length-based phrase-ordering in Japanese and its interaction with canonicality
ABSTRACT:
Speakers change the order of words and phrases within a sentence for a variety of reasons. One example is word-order change based on the length of phrases in a sentence. English speakers tend to exhibit a "short-before-long" tendency, i.e., they postpone a long phrase until after a short phrase (e.g., Hawkins, 1994; Stallings, MacDonald, and O'Seaghdha, 1998). In contrast, speakers of Japanese and Korean show a "long-before-short" tendency (e.g., Hawkins, 1994; Yamashita and Chang, 2001; Kondo and Yamashita, 2011). Several proposals have been made to account for both phenomena, yet more data that explore them in depth are necessary. In this talk I will highlight some of the hypotheses about the long-before-short tendency in Japanese and present my analysis of a corpus of spontaneous speech. The study, which employed the probability of filler occurrence as a measure of production load, suggests that the long-before-short tendency in Japanese may not be accounted for by production-load efficiency, and furthermore that the tendency interacts with the degree of canonicality of each sentence structure.
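As a minimal sketch of the kind of corpus measure mentioned above, the Python snippet below computes the probability of filler occurrence per word, broken down by phrase-order condition. The records, field names, and numbers are invented for illustration and are not the speaker's corpus or coding scheme.

from collections import defaultdict

utterances = [
    {"order": "long-before-short", "fillers": 1, "words": 12},
    {"order": "long-before-short", "fillers": 0, "words": 9},
    {"order": "short-before-long", "fillers": 2, "words": 11},
    {"order": "short-before-long", "fillers": 1, "words": 10},
]

def filler_rate_by_order(data):
    """Probability of filler occurrence per word, grouped by phrase-order condition."""
    fillers, words = defaultdict(int), defaultdict(int)
    for u in data:
        fillers[u["order"]] += u["fillers"]
        words[u["order"]] += u["words"]
    return {order: fillers[order] / words[order] for order in words}

print(filler_rate_by_order(utterances))
# approximately {'long-before-short': 0.048, 'short-before-long': 0.143}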
RECOMMENDED READING:
ADDITIONAL BACKGROUND READING:
18 April 2012
Department of Psychology
Vanderbilt University
Predicting the neural and behavioral dynamics of perceptual decisions
ABSTRACT:
How do humans and non-human primates make perceptual decisions about an object's category, identity, or importance? In our work, we formally contrast competing hypotheses about perceptual decision-making mechanisms using computational models that are tested on how well or how poorly they predict behavioral and neural dynamics. Our starting point is a well-known class of models that assume that perceptual decisions are made by a noisy accumulation of perceptual evidence to a response boundary. Our efforts have focused on developing models of the perceptual evidence that drives this accumulation process and testing whether and how these mechanisms are instantiated in the brain. After introducing the general framework and briefly reviewing past work, I will focus on recent projects that associate perceptual evidence with one class of neurons and the accumulation process with another class of neurons recorded from monkeys trained to make perceptual decisions by a saccadic eye movement. I will highlight novel approaches we have taken to relate cognitive-level explanations and neural-level explanations, using neural data to constrain cognitive theories and using cognitive theories to explain neural dynamics.
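The Python sketch below is a minimal simulation of the standard accumulator idea described above: noisy evidence is summed over time until one accumulator reaches a response boundary, which jointly determines the choice and the response time. It is illustrative only, not the speaker's model; the race formulation and all parameter values are assumptions.

import numpy as np

def simulate_race(drifts=(0.30, 0.20), noise=1.0, boundary=30.0, dt=1.0,
                  max_steps=5000, rng=None):
    """Race between accumulators, one per response option.

    drifts  : mean evidence per step favoring each option
    noise   : standard deviation of the moment-to-moment evidence
    boundary: threshold an accumulator must reach to trigger its response
    Returns (choice index, response time in steps), or (None, max_steps) on timeout.
    """
    rng = rng or np.random.default_rng()
    x = np.zeros(len(drifts))
    for t in range(1, max_steps + 1):
        x += np.array(drifts) * dt + rng.normal(0.0, noise, size=len(drifts)) * np.sqrt(dt)
        if (x >= boundary).any():
            return int(np.argmax(x)), t
    return None, max_steps

# Estimate accuracy and mean response time over many simulated trials.
rng = np.random.default_rng(0)
trials = [simulate_race(rng=rng) for _ in range(2000)]
choices, rts = zip(*trials)
accuracy = np.mean([c == 0 for c in choices])   # option 0 has the larger drift rate
print(f"accuracy={accuracy:.2f}, mean RT={np.mean(rts):.1f} steps")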
RECOMMENDED READING:
5 September 2012
Research Scientist, Department of Brain and Cognitive Sciences, MIT
A novel framework for a neural architecture of language
ABSTRACT:
In this talk I will argue that high-level linguistic processing is accomplished by the joint engagement of two functionally and computationally distinct brain systems: i) the classic "language regions" on the lateral surfaces of the left frontal and temporal lobes, which appear to be functionally specialized for linguistic processing, showing little or no response to arithmetic, general working memory, cognitive control, or music (e.g., Fedorenko et al., 2011; Monti et al., 2009), and ii) the fronto-parietal "multiple demand" network, a set of regions that are engaged across a wide range of cognitive demands (e.g., Duncan, 2001, 2010). Most past neuroimaging work on language processing has not explicitly distinguished between these two systems, especially in the frontal lobes, where subsets of each system reside side by side within the region referred to as "Broca's area". Using methods that surpass traditional approaches in sensitivity and functional resolution, i.e., identifying key brain regions functionally in each individual brain (Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, in press; Saxe et al., 2006), we are beginning to characterize the important roles played by both domain-specific and domain-general brain regions in linguistic processing.
3 October 2012
Assistant Professor, Department of Linguistics, UMass Amherst
Linguistic structure in memory search
ABSTRACT:
It has long been recognized that our short-term memory for structured linguistic material is unlike our memory for unstructured material, and there is widespread, firm support for the 'psychological reality' of grammatical structures. However, the exact functional role that this structure plays in language behavior remains one of the major topics of research in contemporary psycholinguistics. In this talk, I investigate the role of linguistic structure in memory access during online parsing. A number of models suggest that memory access in parsing proceeds by applying domain-general working memory principles to domain-specific linguistic representations. In contrast to this view, I will attempt to motivate the Structured Access Hypothesis, according to which linguistic structure is deployed in a direct and specialized way to guide memory access in comprehension. I present two test cases in support of this hypothesis. One test case comes from a comparison of English agreement and reflexive processing, which shows that cues to memory access are not directly chosen on the basis of their functional utility but rather reflect the underlying grammatical function of the dependency they express. In addition, I will present time-course data on the processing of the Mandarin long-distance reflexive 'ziji', which suggest that memory search is bounded by clausal structure.
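The toy Python contrast below illustrates the two views of memory access discussed above: an unconstrained cue-based retrieval that scores every item in memory by how many feature cues it matches, versus a structured-access variant that first restricts the candidate set by a structural condition (a stand-in "local" flag) before matching the remaining cues. The example sentence, the feature encoding, and the matching scheme are invented for exposition and do not implement any published model.

from typing import Dict, List

# Encoded noun phrases for a sentence like "The actress said that the boy hurt himself/herself":
memory: List[Dict] = [
    {"word": "actress", "local": False, "animate": True, "feminine": True},   # matrix subject: structurally inaccessible to the reflexive
    {"word": "boy",     "local": True,  "animate": True, "feminine": False},  # embedded (local) subject: the grammatical antecedent
]

def cue_based_retrieval(items: List[Dict], cues: Dict) -> Dict:
    """Pick the item matching the most feature cues, regardless of structure."""
    return max(items, key=lambda item: sum(item.get(k) == v for k, v in cues.items()))

def structured_access(items: List[Dict], cues: Dict) -> Dict:
    """First restrict the candidate set by the structural condition, then match cues."""
    accessible = [item for item in items if item["local"]]
    return cue_based_retrieval(accessible or items, cues)

feature_cues = {"animate": True, "feminine": True}        # cues projected by "herself"
print(cue_based_retrieval(memory, feature_cues)["word"])  # 'actress': lured by matching features
print(structured_access(memory, feature_cues)["word"])    # 'boy': search bounded by clause structure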
RECOMMENDED READING:
17 October 2012
Assistant Professor, Department of Cognitive, Linguistic and Psychological Sciences, Brown University
Parallelism and Anaphora
ABSTRACT:
The term 'anaphora' refers to a class of linguistic devices, including pronouns and other underspecified forms, for which interpretation is context-dependent. Hankamer & Sag (1984) proposed that anaphors could be divided into two types: 'surface' anaphors, which require access to a linguistic antecedent for interpretation, and 'deep' anaphors, which can be interpreted via reference to a discourse model. While the processing claims of that proposal have not been strongly supported by psycholinguistic research, the model has nonetheless remained influential within the linguistics literature. In the current talk I consider complications associated with the analysis of verb phrase ellipsis, which behaves like a surface anaphor in some contexts (requiring a parallel antecedent) and a deep anaphor in others (non-parallelism is tolerated). I present results from judgment and reading-time studies which show that sensitivity to parallelism is dependent on information structure, and I propose that such effects follow from the violation of general (i.e., not ellipsis-specific) discourse constraints, which become amplified in ellipsis contexts.
7 November 2012
Assistant Professor, Department of Philosophy, University at Buffalo
What David Hume should, can, and does say about denial
ABSTRACT:
David Hume fancied himself the Newton of the mind, aiming to reinvent the study of human mental life in the same way that Newton had revolutionized physics. And it was his view that the novel account of belief he proposed in his "Treatise of Human Nature" was one of that work's central philosophical contributions. From the earliest responses to the "Treatise" forward, however, there was deep pessimism about the prospects for his account. It is easy to understand the source of this pessimism: the constraints he employed in theorizing stem from his Newtonian ambitions. Constraints such as his copy principle and his decision to rely only on variations in "force and vivacity" for differentiating types of mental states severely limit his available explanatory resources. However, it is one thing to regard an account as untenable, and quite another to understand where it fails. In this paper, I focus on one long-standing objection to Hume's account -- the objection that Hume cannot offer an account of "negative belief" or "denial" -- as presented by Hume's contemporary Thomas Reid, as well as more recently by Barry Stroud, and argue that this objection to Hume does not succeed. I defend an interpretation of Hume on which his limited resources are nevertheless sufficient to meet this challenge, undermining some of the long-standing pessimism surrounding Hume's account of the human understanding.
RECOMMENDED READING:
28 November 2012
Professor, Department of Computer Science and Engineering, University at Buffalo
Multilingual Text Mining: Lost in Translation, Found in Native Language Mining
ABSTRACT:
There has been a meteoric rise in the amount of multilingual content on the web. This is primarily due to social media sites such as Facebook and Twitter, as well as blogs, discussion forums, and reader responses to articles on traditional news sites. It is interesting to observe the explosive growth in languages such as Arabic. The availability of this content warrants a discussion of how such information can be effectively utilized. This talk will begin with motivations for multilingual text mining, including commercial and societal applications, digital humanities applications, and lastly government applications, where the value proposition (benefits, costs, and value) is different, but equally compelling. There are several issues to be addressed, beginning with the need for processing native language, as opposed to using machine-translated text. In tasks such as sentiment or behavior analysis, it can be argued that a lot is lost in translation, since these tasks depend on subtle nuances in language usage. On the other hand, processing native language is challenging, since it requires a multitude of linguistic resources such as lexicons, grammars, translation dictionaries, and annotated data. This is especially true for "resource-poor languages" such as Urdu and Somali. The availability of content such as multilingual Wikipedia provides an opportunity to automatically generate needed resources and explore alternate techniques for language processing. The rise of multilingual social media also leads to interesting developments such as code-mixing and code-switching, giving birth to "new" languages such as Hinglish, Urdish, and Spanglish! This trend presents new challenges to automatic natural language processing. A case study involving the detection of extremist messages in foreign media will be presented. One goal of the study is to adapt methods for discovering and documenting descriptions of violent acts in traditional media to social and digital media. Unlike traditional content analysis, we are less concerned with finding events and more concerned with finding how an event is being framed (e.g., direct incitement to violent retaliation or euphemistic terms for abuse of power) and who is framing it that way (e.g., military groups vs. affiliated non-militant groups). In summary, the availability of multilingual data provides new opportunities in a variety of applications. It also poses a need for effective mining techniques.
RECOMMENDED READING: