2010 Events

Spring Semester

27 January 2010

Micheal Dent

Comparative Bioacoustics Laboratory
Department of Psychology, and Center for Cognitive Science
University at Buffalo

An Avian Cocktail Party:
Masking and Unmasking in Birds

ABSTRACT:

Although laboratory experiments on hearing in animals generally begin with studies of absolute sensitivity in quiet environments, the reality of an animal's life is that it is rarely communicating in that type of sterile situation. There are auditory studies that more closely approximate ecologically-relevant conditions of an animal's life, such as discrimination, localization, and masking experiments, but even those can have limitations. Studies on the cocktail-party problem in birds have utilized various techniques to describe how well animals can effectively communicate in noisy environments and whether spatial separation of signals and noise aids in hearing out those signals. In humans, spatially separating a signal from a noise source significantly increases the audibility of that signal. Various species of birds also show this "release from masking" under natural field conditions as well as in controlled lab studies, using orienting responses as well as conditioned responses, and using simple pure tones embedded in broadband noises as well as calls and songs embedded in birdsong choruses. As a whole, these experiments suggest that the cocktail-party effect is a basic auditory process used by many animals to aid in signal detection under difficult and complex listening situations and that excellent localization skills are not necessary for the task.

RECOMMENDED READING:

Dent, Micheal L.; McClaine, Elizabeth M.; Best, Virginia; Ozmeral, Erol; Narayan, Rajiv; Gallun, Frederick J.; Sen, Kamal; & Shinn-Cunningham, Barbara G. (2009), "Spatial Unmasking of Birdsong in Zebra Finches (Taeniopygia guttata) and Budgerigars (Melopsittacus undulatus)", Journal of Comparative Psychology 123(4): 357–367.


3 February 2010

Business Meeting for Faculty Members of the Center


10 February 2010

Eon-Suk Ko

Department of Linguistics and Center for Cognitive Science
University at Buffalo

Children's Acquisition of Vowel Duration
Conditioned by Consonantal Voicing

ABSTRACT:

The finding that English vowels are longer before voiced than voiceless consonants by a ratio of about 3:2 has been known for a long time. In this talk, I address the question of when and how young children learning American English develop their knowledge of this phonetic pattern. In the first part, I present an acoustic analysis of corpus data that was conducted to find out how early children begin to produce different vowel durations as a function of post-vocalic voicing (Ko 2007). The age range covered by the data was from 0;11 to 4;0. It was found that children control the vowel duration conditioned by voicing before the age of 2, and that there is no developmental trend in the acquisition of the vowel duration conditioned by post-vocalic voicing within the age range examined. In the second part, I present a study where 8- and 14-month-old infants' perceptual sensitivity to vowel duration conditioned by post-vocalic consonantal voicing was examined (Ko et al. 2009). Half the infants heard CVC stimuli with short vowels; half heard stimuli with long vowels. In both groups, stimuli with voiced and voiceless final consonants were compared. Older infants showed significant sensitivity to mismatching vowel duration and consonant voicing in the short condition, but not the long condition; younger infants were not sensitive to such mismatching in either condition. The results suggest that infants' sensitivity to extrinsic vowel duration begins to develop between 8 and 14 months. Taken together, these results suggest that the presence of the vowel-duration difference conditioned by post-vocalic consonantal voicing in early speech may reflect children's knowledge of English phonotactics in the perceptual domain. The current study thus provides some concrete data to corroborate the idea that the development of speech production is preceded by the development of perceptual sensitivity.

RECOMMENDED READING:

  1. Ko, Eon-Suk (2007), "Acquisition of Vowel Duration in Children Speaking American English", Proceedings of Interspeech 2007 (Antwerp): 1881–1884.
  2. Ko, Eon-Suk; Soderstrom, Melanie; & Morgan, James (2009), "Development of Perceptual Sensitivity to Extrinsic Vowel Duration in Infants Learning American English", Journal of the Acoustical Society of America 126(5) (November): 134–139.


17 February 2010

Veena D. Dwivedi

Department of Applied Linguistics, Brock University

Underspecification in Semantic Processing

ABSTRACT:

Recent work in language processing suggests that interpretive processes are often incomplete, such that comprehenders do not commit to a particular meaning during a parse. Underspecified representations have implications for understanding ambiguity at the syntax-semantics interface, particularly for scope ambiguous sentences, such as

(i) Every kid climbed a tree.

Is the meaning of (i) underspecified, or is a particular scope assignment preferred? Also, how would this representation impact anaphoric resolution downstream? Previous behavioral studies are equivocal regarding the interpretation of (i). Kurtzman & MacDonald (1993) showed that plural anaphors in continuation sentences (e.g., The trees were in the park), consistent with a surface scope interpretation of (i), are preferred over singular continuations (e.g., The tree was in the park), consistent with the inverse scope interpretation. This is precisely what one would expect on theoretical grounds. However, this effect was not replicated in Tunstall (1998). Moreover, Kemtes & Kemper (1999) showed that judgments for sentences like (i) are modulated by age and working-memory span. In this talk, I discuss recent experiments investigating the interpretation of scope-ambiguous sentences using both EEG/ERP and self-paced reading paradigms. I show that, in fact, sentences such as (i) are left unresolved until further information arrives for disambiguation. Furthermore, findings regarding individual differences are discussed, suggesting that underspecification reflects a strategic allocation of processing resources.

RECOMMENDED READING:

Dwivedi, Veena D.; Phillips, Natalie A.; Einagel, Stephanie; & Baum, Shari R. (2009, in press), "The Neural Underpinnings of Semantic Ambiguity and Anaphora", Brain Research, doi:10.1016/j.brainres.2009.09.102.


24 February 2010

Paul Luce

Language Perception Laboratory
Department of Psychology, and Center for Cognitive Science
University at Buffalo

Competition among Variant Word Forms in Spoken Word Recognition

ABSTRACT:

Traditionally, much of the research on the perception of spoken words has employed carefully produced, isolated words as experimental stimuli. However, words in casual speech exhibit considerable variation in articulation. For example, alveolar stop consonants (/t/ and /d/) in certain phonetic environments may be produced as taps, glottal stops, careful /t/s and /d/s, or they may be deleted altogether. We have been examining the representation and processing of variants of spoken words. In particular, we have attempted to determine whether words containing non-word-initial alveolar stops may be represented in memory as multiple specific variants, by comparing processing time for monosyllabic words that end in either alveolar or non-alveolar (bilabial or velar) stops. Alveolar-ending words were responded to more slowly than carefully matched, non-alveolar-ending words, in a variety of experimental tasks. This result did not hold for similarly composed nonwords. The results suggest that variant word forms compete at a stage beyond sublexical processing. Implications for characterizing competition in spoken word recognition are discussed. (Work done with Micah Geer.)

RECOMMENDED READING:

McLennan, Conor T.; Luce, Paul A.; & Charles-Luce, Jan (2005), "Representation of Lexical Form: Evidence from Studies of Sublexical Ambiguity", Journal of Experimental Psychology: Human Perception and Performance 31(6): 1308–1314.


3 March 2010

Kevan Edwards

Department of Philosophy, Syracuse University

Representation and Mental Processes:
Unity amidst Heterogeneity in the Study of Concepts

ABSTRACT:

In the past few decades, a significant amount of work has taken place in both philosophy and (cognitive and developmental) psychology under the rubric of theorizing about the nature of concepts. As is often the case with relatively embryonic work on a topic at the intersection of academic disciplines, there has been a lot of conceptual confusion and cross-talk. Notably, it isn't easy to come up with an uncontroversial statement of exactly what a theory of concepts is supposed to do—never mind how to evaluate how well various candidate theories do it. Presumptive starting points vary from the assumption that concepts are the basic building blocks of cognitive states to the assumption that concepts are cognitive capacities, to the assumption that the aforementioned building blocks just are the aforementioned capacities.

The anchoring focus of the talk will be what I will describe as a fundamental tension between (i) various reasons—largely theoretically motivated—for maintaining that any viable concept (so to speak) of concepts needs to be robust across contexts, agents, and uses and (ii) the manifest flexibility of concepts as applied in practice—here the support is largely empirical and/or just good common sense. I want to sketch, in admittedly broad brushstrokes, an approach to concept individuation centered on the notion of what a concept refers to or represents. This approach has the virtue of tackling the tension between robustness and flexibility head on. However, the approach requires a significant departure from how the vast majority of philosophers and psychologists approach the topic. Moreover, the relatively impoverished nature of reference leads to some obvious prima facie stumbling blocks. These problems, I will claim, are just manifestations of the "fundamental tension" to which I want to draw attention. The key to resolving the problems (and the tension, in general) is to embrace substantial restrictions on a theory of concepts per se, and to acknowledge that a theory of concepts is only part of a much richer account of the structure and function of (the relevant components of) the cognitive mind.

In the process of getting all of this on the table, I hope to draw attention to some broader issues, in particular issues having to do with the nature of the contribution that even a relatively traditional philosopher of mind can hope to make to cooperative, inter-disciplinary research projects in cognitive science.

RECOMMENDED READING:


7–8 April 2010

Distinguished Speaker
Center for Cognitive Science
and Department of Psychology Donald Tremaine Fund

Jeffrey L. Elman

Dean, Division of Social Science
Co-Director, Kavli Institute for Brain & Mind
Distinguished Professor of Cognitive Science
Chancellor's Associates Endowed Chair
University of California, San Diego

Recipient of the David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science

UB Center for Cognitive Science Colloquium
Wednesday, 7 April 2010, 2:00 P.M.
280 Park Hall

Event Knowledge and Sentence Processing:
A Blast from the Past

ABSTRACT:

Research on language processing has often focused on how language users comprehend and produce sentences. Although fluent use obviously requires integrating information across multiple sentences, the syntactic and semantic processes necessary for comprehending sentences have (with some important exceptions) largely been seen as self-contained. That is, it was assumed that these processes were largely insensitive to factors lying outside the current sentence's boundaries. This assumption is not universally shared, however, and remains controversial. In this talk, I shall present a series of experiments that suggest that knowledge of events and situations—often arising from broader context—plays a critical role in many intrasentential phenomena often thought of as purely syntactic or semantic. The data include findings from a range of methodologies, including reaction time, eye tracking (both in reading and in the visual-world paradigm), and event-related potentials. The timing of these effects implies that sentence processing draws in a direct and immediate way on a comprehender's knowledge of events and situations (or, recalling the "blast from the past" of the title, on knowledge of scripts, schemas, and frames).

UB Center for Cognitive Science Distinguished Speaker Lecture
Thursday, 8 April 2010, 2:00 P.M. 
Student Union Theater
(Room 106 if entering from ground floor; Room 201 if entering from 2nd floor)

Words and Dinosaur Bones:
Knowing about Words without a Mental Dictionary

ABSTRACT:

For many years, language researchers were not overly interested in words. After all, words vary across languages in mostly random and unsystematic ways. Language learners simply had to learn them by rote. Words were uninteresting. Rules were where the exciting action lay, and considerable effort was invested in trying to figure out what the rules of languages are, whether they come from a universal toolbox, and how language learners could acquire them. Over the past decade, however, there has been increasing interest in the lexicon as the locus of users' language knowledge. There is now a considerable body of linguistic and psycholinguistic research that has led many researchers to conclude that the mental lexicon contains richly detailed information about both general and specific aspects of language. Words are in again, it seems. But this very richness of lexical information poses representational challenges for traditional views of the lexicon. In this talk, I will present a body of psycholinguistic data, involving both behavioral and event-related-potential experiments, that suggests that event knowledge plays an immediate and critical role in the expectancies that comprehenders generate as they process sentences. I argue that this knowledge is, on the one hand, precisely the sort of stuff that on standard grounds one would want to incorporate in the lexicon but, on the other hand, cannot reasonably be placed there. I suggest that, in fact, lexical knowledge (which I take to be real) may be captured not in a mental lexicon but by a very different computational mechanism.

RECOMMENDED READING:

Elman, Jeffrey L. (2009), "On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon", Cognitive Science 33(4): 547–582.


14 April 2010

LouAnn Gerken

Tweety Language Development Lab
Department of Psychology, Department of Linguistics,
and Director, Cognitive Science Program
University of Arizona

Predicting and Explaining Babies

ABSTRACT:

The past 50 years or so of research in language and cognitive development have alternated between construing the learner's job as that of merely predicting new data in a particular domain from already-experienced input data, and that of explaining the state of affairs in the world that gave rise to the input data in the first place. Thus, in the domain of language development, researchers have debated about whether infants and children are learning linguistic grammars (explanations for linguistic data) or whether they are storing input in such a way that they can generalize to new instances without ever representing a grammar (prediction). The long back-and-forth about the nature of the English past-tense rule is an example of such a debate. One reason why the field has made somewhat less headway on this issue than we might hope is that the debate has been about the nature of stored representations, which are notoriously difficult to distinguish based on infant and child behavioral data. More recently, and largely under the banner of Bayesian inference, several labs have begun to approach the question of prediction vs. explanation in development in a different way. In my talk, I will briefly review some of the history of the prediction-vs.-explanation debate and discuss several examples of new studies supporting the view that infants and children are, in many domains, driven to explain.

RECOMMENDED READING:

Gerken, LouAnn (2010), "Infants Use Rational Decision Criteria for Choosing among Models of their Input", Cognition 115(2) (May): 362–366.


21 April 2010

Anna Papafragou

Department of Psychology and Department of Linguistics & Cognitive Science
University of Delaware

Space in Language and Thought

ABSTRACT:

The linguistic expression of space draws from, and is constrained by, basic, probably universal, elements of perceptual/cognitive structure. Nevertheless, there are considerable cross-linguistic differences in how these fundamental space concepts are segmented and packaged into sentences. This cross-linguistic variation has led to the question whether the language one speaks could affect the way one thinks about space—hence whether speakers of different languages differ in the way they see the world. This talk addresses this question through a series of cross-linguistic experiments comparing the linguistic and non-linguistic representation of motion and space in both adults and children. Taken together, the experiments reveal remarkable similarities in the way space is perceived, remembered, and categorized, despite differences in how spatial scenes are encoded cross-linguistically.

RECOMMENDED READING:

Papafragou, Anna; Hulbert, Justin; & Trueswell, John (2008), "Does Language Guide Event Perception? Evidence from Eye Movements", Cognition 108: 155–184.

Fall Semester

15 September 2010

Holly L. Storkel

Word and Sound Learning Lab
Department of Speech-Language-Hearing: Sciences & Disorders
University of Kansas

Word Learning by Typically Developing Preschool Children:
Effects of Phonotactic Probability, Neighborhood Density, and Semantic Set Size

ABSTRACT:

This talk explores how phonological (i.e., individual sound), lexical (i.e., whole-word), and semantic (i.e., meaning) representations contribute to word learning. Past work has shown that retrieval and retention of phonological information is influenced by phonotactic probability (the likelihood of occurrence of a sound sequence), whereas retrieval and retention of lexical information is influenced by neighborhood density (the number of similar-sounding words). Moreover, emerging work suggests that visual and auditory word recognition is influenced by semantic-set size (the number of words that are meaningfully related to or frequently associated with a given word). In this series of studies, we explore how these three variables influence the creation of new representations during word learning by typically developing preschool children. Results showed that children learned low-phonotactic-probability nonwords more accurately than high-phonotactic-probability nonwords. In contrast, the effect of neighborhood density and semantic-set size varied across test points. In particular, children learned low-density nonwords more accurately than high-density nonwords at an early test point but then showed the reverse pattern, learning high-density nonwords more accurately than low-density nonwords at a later retention test. Turning to semantic-set size, children showed no effect of set size at an early test point, but learned low-set-size nonwords more accurately than high-set-size nonwords at a later retention test. These results are discussed in terms of the potential effect of phonotactic probability, neighborhood density, and semantic-set size on different word-learning processes (i.e., triggering vs. configuration vs. engagement).

RECOMMENDED READING:

This reading explores the effects of phonotactic probability, neighborhood density, and semantic-set size in a database of words learned by infants. The reading provides an introduction to the variables that will be used in the talk and to the different word-learning processes (i.e., triggering vs. configuration vs. engagement) that will be discussed in the talk. The reading, in conjunction with the talk, illustrates the different methods that can be used to examine language acquisition.


22 September 2010

No meeting.


29 September 2010

Rajendra Badgaiyan

UB Department of Psychiatry

Neurobiology and Clinical Correlates of Nonconscious Mind

ABSTRACT:

Neuroimaging studies of conscious (explicit) and nonconscious (implicit) memory provide an opportunity to understand how nonconscious information is processed in the brain. These studies have indicated that retrieval of implicit memory is associated with reduced activation in the posterior cortical areas located in the occipito-temporo-parietal junction. Explicit retrieval, on the other hand, involves increased activation in the prefrontal and medial temporal lobe regions. Analysis of the timeline of these activities reveals that the reduced activation in posterior cortical areas is critical, not only for retrieval of nonconscious information, but also for making this information available for consciously performed actions and cognitive processing. By studying the process of nonconscious retrieval, using newly developed dynamic molecular imaging techniques, we have begun to gain insight into the neurochemical mechanisms that regulate the transfer of information from nonconscious memory to conscious awareness. An understanding of these mechanisms will allow the formulation of neurocognitive models of disorders of cognition and help identify neural targets that could be manipulated to control nonconscious cognitive processes.

RECOMMENDED READING:

  1. Badgaiyan, R.D. (2000), "Executive Control, Willed Actions, and Nonconscious Processing", Hum. Brain Mapp. 9(1): 38–41.
  2. Badgaiyan, R.D. (2000), "Neuroanatomical Organization of Perceptual Memory: An fMRI Study of Picture Priming", Hum. Brain Mapp. 10(4): 197–203.
  3. Badgaiyan, R.D. (2005), "Conscious Awareness of Retrieval: An Exploration of the Cortical Connectivity", Int. J. Psychophysiol. 55(2): 257–262.
  4. Badgaiyan, R.D. (2006), "Cortical Activation Elicited by Unrecognized Stimuli", Behav. Brain Funct. 2: 17.
  5. Badgaiyan, R.D.; & Posner, M.I. (1997), "Time Course of Cortical Activations in Implicit and Explicit Recall", J. Neurosci. 17(12): 4904–4913.
  6. Badgaiyan, R.D.; Schacter, D.L.; & Alpert, N.M. (1999), "Auditory Priming within and across Modalities: Evidence from Positron Emission Tomography", J. Cogn. Neurosci. 11(4): 337–348.
  7. Schacter, D.L.; & Badgaiyan, R.D. (2001), "Neuroimaging of Priming: New Perspectives on Implicit and Explicit Memory", Curr. Dir. Psychol. Sci. 10: 1–4.


6 October 2010

No meeting.


13 October 2010

Adrian Staub

Department of Psychology, University of Massachusetts, Amherst

Lexical Processing Autonomy, Revisited: 
Evidence from Eye Movements

ABSTRACT:

A basic (and old) question regarding the architecture of the language comprehension system is whether word recognition is "autonomous", i.e., performed by an encapsulated system that is impervious to the context in which a word is encountered. While early findings regarding lexical ambiguity resolution appeared to provide a positive answer to this question, various recent models have abandoned the autonomy hypothesis. In this talk, I will present a variety of data from eye movements in reading that appear to support lexical processing autonomy, at least for visual word recognition. Word frequency influences the duration of essentially every word inspection, and this effect does not seem to be modulated by the context in which a word appears. The predictability of a word does have a clear effect on fixation durations, but the effects of frequency and predictability appear to be strictly additive. More strikingly, recent experiments from my laboratory suggest that the effect of frequency on early eye-movement measures is not modulated by whether a word is a grammatical continuation of the sentence in which it appears. I will show how the detailed data patterns that emerge from these experiments are actually predicted by the most recent iteration of the E-Z Reader model of eye movements in reading (Reichle, Warren, & McConnell 2009), which proposes strict ordering of lexical and integrative processing.

RECOMMENDED READING:


20 October 2010

Co-Sponsored by the UB Department of Computer Science & Engineering

Matthew D. Stone

Department of Computer Science, Rutgers University, New Brunswick

Discourse Coherence and Gesture Interpretation

ABSTRACT:

In face-to-face conversation, communicators orchestrate multimodal contributions that meaningfully combine the linguistic resources of spoken language and the visuo-spatial affordances of gesture. In this talk, we characterise this meaningful combination in terms of the COHERENCE of gesture and speech. Descriptive analyses illustrate the diverse ways gesture interpretation can supplement and extend the interpretation of prior gestures and accompanying speech. We draw certain parallels with the inventory of COHERENCE RELATIONS found in discourse between successive sentences. In both domains, we suggest, interlocutors make sense of multiple communicative actions in combination by using these coherence relations to link the actions' interpretations into an intelligible whole. Descriptive analyses also emphasise the improvisation of gesture; the abstraction and generality of meaning in gesture allows communicators to interpret gestures in open-ended ways in new utterances and contexts. We draw certain parallels with interlocutors' reasoning about underspecified linguistic meanings in discourse. In both domains, we suggest, coherence relations facilitate meaning-making by RESOLVING the meaning of each communicative act through constrained inference over information made salient in the prior discourse. Our approach to gesture interpretation lays the groundwork for formal and computational models that go beyond previous approaches based on compositional syntax and semantics, in better accounting for the flexibility and the constraints found in the interpretation of speech and gesture in conversation. At the same time, it shows that gesture provides an important source of evidence to sharpen the general theory of coherence in communication. (Joint work with Alex Lascarides, University of Edinburgh.)

RECOMMENDED READING:


27 October 2010

J. David Smith

UB Department of Psychology and Center for Cognitive Science

Animal Metacognition

ABSTRACT:

Do nonhuman animals share humans' capacity for metacognition—that is, for monitoring or self-regulating their own cognitive states and cognitive processes? Comparative psychologists have approached this question by testing a dolphin, pigeons, rats, monkeys, macaques, apes, and humans using perception, memory, classification, and foraging paradigms. There is growing evidence that animals share functional parallels with humans' conscious metacognition, though the field has not confirmed full experiential parallels and this remains an open question. I will review this new area of comparative inquiry, describing significant empirical milestones, remaining theoretical millstones, and the prospects for continuing progress in a rapidly developing area. This research area has the potential to open a new window on reflective mind in animals, illuminating its phylogenetic emergence and allowing researchers to trace the antecedents of human consciousness.

RECOMMENDED READING:


3 November 2010

Jason Corso

UB Department of Computer Science and Engineering

Pictorial Structures for High-Level Scene Understanding

ABSTRACT:

Scene understanding remains a significant challenge in the broad computer-vision community. Images are extremely high-dimensional objects, and the variability within images, from low-level noise to high-level structure, is vast. Decades-old visual-psychophysics studies of human semantic-scene understanding have indicated how strongly humans depend on the interrelationships between scene elements during inference. Yet the majority of methods in the computational scene-understanding community remain local. For example, high performance on recent benchmarks has been achieved by pseudo-likelihood approximations coupled with modern discriminative learning methods, which rely on small patches of, say, ten pixels squared to model the entire scene. In contrast to these local methods is a class of models called pictorial structures, which model a visual phenomenon as a collection of connected parts. Each part has a separate appearance model, estimated in various ways, including the patch-based method mentioned above. The parts are interpreted as nodes in a graph, and relationships between the parts are included as edges in the graph, parameterized based on part-similarity, distance, and other means. For example, the pictorial-structures model for a face might include parts for the two eyes, the nose, the hairline, the ears, the mouth, and the chin, with each part connected to at most two others nearby.

Pictorial structures have demonstrated good performance in object-modeling tasks, like detection and recognition, in recent years. However, they have a significant drawback that limits their usage for general scene-understanding tasks: Conventional pictorial structures require all parts to be present during inference in order to achieve a solution. In contrast, scene-understanding tasks require an adaptability on the part of the model to "switch" various parts on and off. For example, in some urban scenes we may find a bus, but in others we do not. It is intractable to define a separate pictorial-structures model for each combination of possible parts. We propose a new form of the pictorial-structures model that adaptively manipulates the structure of the model during inference. The key to our approach is the construction of a large model-graph that contains all scene elements and the creation of an on-line graph during inference, whose structure adapts to the image at hand. Our method can adapt to any combination of elements seen in the training set. Our inference algorithm is a Metropolis-Hastings algorithm that incorporates data-driven semantic priors to provide the proposal distribution. Initialization is performed based on an image-level comparison to a stored database of known images that each have an associated template graph. We demonstrate our method on contemporary benchmark databases from the computer-vision community.
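
The recommended reading below introduces the classic pictorial-structures formalism. As a rough, self-contained illustration of the ideas described above (parts as nodes with appearance scores, spring-like relations as edges, and inference that can switch parts on and off), here is a toy one-dimensional sketch in Python. The part names, scores, penalties, and on/off proposal scheme are all invented for illustration; this is not the speaker's model, and for simplicity the acceptance rule ignores the mild asymmetry of the toggling proposal.

    import math
    import random

    # Toy pictorial-structures sketch (illustrative only; not the speaker's model).
    # Parts are nodes with appearance scores; edges encode preferred distances.
    # Inference proposes switching parts on/off or relocating them, and accepts
    # proposals Metropolis-Hastings style according to an energy function.

    PARTS = ["road", "building", "bus"]            # hypothetical scene elements
    EDGES = {("road", "building"): 5.0,            # preferred part-to-part distances
             ("road", "bus"): 1.0}                 # (toy 1-D units)

    def appearance(part, pos):
        """Stand-in for a learned appearance model: fit of `part` at `pos`."""
        targets = {"road": 0.0, "building": 6.0, "bus": 1.5}
        return -abs(pos - targets[part])           # higher is better

    def energy(state):
        """Energy (lower = better) of a configuration {part: position or None}."""
        e = 0.0
        for part, pos in state.items():
            if pos is None:
                e += 2.0                           # mild penalty for absent parts
            else:
                e -= appearance(part, pos)
        for (a, b), d in EDGES.items():
            if state[a] is not None and state[b] is not None:
                e += (abs(state[a] - state[b]) - d) ** 2   # spring-like edge term
        return e

    def mh_infer(steps=5000, temp=1.0):
        state = {p: None for p in PARTS}           # start with all parts switched off
        for _ in range(steps):
            part = random.choice(PARTS)
            proposal = dict(state)
            if proposal[part] is None or random.random() < 0.2:
                proposal[part] = random.uniform(-10.0, 10.0)   # switch on / relocate
            else:
                proposal[part] = None                          # switch off
            if random.random() < math.exp((energy(state) - energy(proposal)) / temp):
                state = proposal
        return state

    print(mh_infer())

With these toy numbers, configurations in which the parts sit near their preferred positions have lower energy than leaving them switched off, so the sampler tends to turn them on; changing the absence penalty or the appearance scores shifts that trade-off, which is the kind of part-switching adaptivity the abstract describes.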

RECOMMENDED READING:

Fischler, Martin A.; & Elschlager, Robert A. (1973), "The Representation and Matching of Pictorial Structures", IEEE Transactions on Computers C-22(1) (January): 67–92.


10 November 2010

Ling-yu (Hugo) Guo

UB Department of Communicative Disorders and Sciences

Effect of Subject Frequency on the Production of Auxiliary ‘Is’ in Young English-Speaking Children

ABSTRACT:

Young children variably use tense and agreement morphemes (e.g., forms of auxiliary BE like ‘are’ in The dogs are running) in obligatory contexts, but the reason for this variability is still open to debate (Wilson, 2003). This study tested a nativist account (i.e., the unique-checking-constraint hypothesis; Wexler, 1998) and a constructivist account (i.e., the usage-based approach; Tomasello, 2003) by examining the effect of subject frequency on the production accuracy of auxiliary ‘is’ in twenty three-year-old English-speaking children via an elicited production task.

The unique-checking-constraint (UCC) hypothesis assumes that young children have innate abstract representations of tense and agreement. They use tense and agreement morphemes variably because of the presence of a maturational constraint (i.e., the UCC) in their grammar. Because this constraint affects sentences with different subject types (i.e., pronominal, high-frequency lexical NP, and low-frequency lexical NP subjects, like He/The dog/The deer is eating a cake) in the same way, the UCC hypothesis predicts that children's production of auxiliary ‘is’ should not vary with different subject types.

In contrast, the usage-based approach assumes that young children learn the knowledge of forms related to tense and agreement from the input. They use tense and agreement morphemes variably because they have not developed abstract representations of these morphemes. The usage-based approach argues that young children's use of tense and agreement morphemes is affected by the co-occurrence frequency of subject + auxiliary ‘is’ (e.g., he's, the deer's). Because pronominal subjects co-occur with auxiliary ‘is’ more often than lexical NP subjects, children's production of auxiliary ‘is’ should be more accurate in sentences with pronominal subjects than in sentences with high-frequency lexical NP subjects, followed by sentences with low-frequency NP subjects.

When we treated all the children as a group, the percent correct of auxiliary ‘is’ did not vary significantly with subject frequency, which supports the UCC hypothesis. However, when we explored children's performance by their developmental levels as measured by tense productivity scores (Rispoli et al., 2009), subject frequency did play a role in the use of auxiliary ‘is’. Children with lower tense productivity used auxiliary ‘is’ more accurately with pronominal subjects than with high- or low-frequency lexical NP subjects. In contrast, children with higher tense productivity did not show differences in the use of auxiliary ‘is’ with subject frequency.

The symmetry observed between lexical and pronominal subjects at the group level supports the predictions of the UCC hypothesis, although additional mechanisms may be needed to account for the asymmetry across subject types found in children with lower tense productivity.

RECOMMENDED READING:

Guo, Ling-Yu; Owen, Amanda J.; & Tomblin, J. Bruce (2010), "Effect of Subject Types on the Production of Auxiliary ‘is’ in Young English-Speaking Children", Journal of Speech, Language, and Hearing Research; doi:10.1044/1092-4388(2010/09-0058).


17 November 2010

Michael C. Anderson

Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK

Controlled Hippocampal Modulation
as a Mechanism Underlying the Suppression of Unwanted Memories

ABSTRACT:

We have all had moments when an object or event reminds us of an experience we would prefer not to think about. When such remindings occur, we often exclude the unwanted memory from awareness. In past work, we have sought to understand how attentional control mechanisms interact with brain structures involved in episodic memory to allow people to regulate memory in this fashion. Our approach to understanding memory control focuses on a model situation that captures essential features of what people confront in naturally occurring circumstances: exposure to reminders of some target event, together with a goal to prevent the associated event from coming into awareness. Using tasks developed to model this situation (Anderson & Green 2001), we have discovered that people engage control mechanisms mediated by the lateral prefrontal cortex to suppress retrieval of unwanted memories. This process induces forgetting of the excluded trace, with the amount of forgetting predicted by engagement of lateral prefrontal cortex (Anderson et al. 2004). Importantly, episodic forgetting induced by retrieval suppression is produced by modulation of hippocampal activation that is triggered by memory intrusions. Strikingly, this hippocampal modulation induces a temporally extended window of memory impairment around the epoch of suppression, the amnesic penumbra, within which recently experienced events are disrupted, irrespective of their relevance to the target memory. These findings suggest that controlled hippocampal modulation may serve as an important mechanism underlying selective forgetting, and people's ability to cope with unwanted memories (Anderson & Levy 2009).

RECOMMENDED READINGS:

  1. Anderson, M.C.; & Green, C. (2001), "Suppressing Unwanted Memories by Executive Control", Nature 410(6826): 131–134.
  2. Anderson, M.C.; Ochsner, K.; Kuhl, B.; Cooper, J.; Robertson, E.; Gabrieli, S.W.; Glover, G.; & Gabrieli, J.D.E. (2004), "Neural Systems Underlying the Suppression of Unwanted Memories", Science 303: 232–235.
  3. Anderson, M.C.; & Levy, B. (2009), "Suppressing Unwanted Memories", Current Directions in Psychological Science 18(4): 189–194.

8 December 2010

Sunfa Kim

UB Department of Psychology

Shared Mapping between Conceptual Representations and Functional Representations across Languages Facilitates L2 Production

ABSTRACT:

We hypothesized that bilingual speakers use second language (L2) syntactic alternations most easily when their first and second languages share mappings between conceptual representations and functional representations, despite differences in basic word order. We tested this by examining L2 English structural priming for transitive and dative alternations in three bilingual groups, with the prediction that L2 structural priming would occur for syntactic alternations that share such mappings across languages. Whereas Japanese, Korean, and Mandarin have transitive alternations, only Mandarin has a dative alternation. Japanese and Korean do not have a double object construction. We observed priming for the transitive alternation in Japanese and Korean bilingual speakers but no priming for the dative alternation. In contrast, we observed priming for the dative alternation in Mandarin bilingual speakers. These results suggest that shared mapping between conceptual representations and functional representations facilitates use of L2 syntactic structures and that the abstract level of functional processing is involved in language production.

RECOMMENDED READING:

Bernolet, Sarah; Hartsuiker, Robert J.; & Pickering, Martin J. (2007), "Shared Syntactic Representations in Bilinguals: Evidence for the Role of Word-Order Repetition", Journal of Experimental Psychology: Learning, Memory, and Cognition 33(5): 931–949.