An Embodied Model of Young Children’s Categorization and Word Learning

Katherine E. Twomey, Jessica S. Horst, Anthony F. Morse
DOI: 10.4018/978-1-4666-2973-8.ch008

Abstract

Children learn words with remarkable speed and flexibility. However, the cognitive basis of young children’s word learning is disputed. Further, although research demonstrates that children’s categories and category labels are interdependent, how children learn category labels is also a matter of debate. Recently, biologically plausible computational simulations of children’s behavior in experimental tasks have investigated the cognitive processes that underlie learning. The ecological validity of such models has been successfully tested by deploying them in robotic systems (Morse, Belpaeme, Cangelosi, & Smith, 2010). The authors present a simulation of children’s behavior in a word learning task (Twomey & Horst, 2011) using an embodied system (iCub; Metta et al., 2010), whose results point to associative learning and dynamic systems accounts of children’s categorization. Finally, the authors discuss the benefits of integrating computational and robotic approaches with developmental science for a deeper understanding of cognition.
Chapter Preview

Starting Young: Categorization From Day One

From birth—indeed, even before birth (James, 2010; Shahidullah & Hepper, 1994)—infants encode a myriad of complex perceptual stimuli. The extent of this complexity cannot be overestimated: in the visual domain alone, the shortsighted newborn must segment the visual scene, distinguish between figure and ground, group surfaces into objects, represent the temporal and spatial continuity of objects, and infer the 3D characteristics of objects (Johnson, 2010a). However, very young infants can make sense of the intricacies of their environment. Even neonates can group aspects of their perceptual environment into early categories (Johnson, 2010b), systematically treating discriminably different exemplars as equivalent. A few hours after birth, infants can discriminate their mothers’ faces from those of strangers (Field, Cohen, Garcia, & Greenberg, 1984), and by three months, infants categorize male versus female and own- versus other-race faces (Slater et al., 2010).

By the end of their first year, infants have developed an impressive ability to categorize in multiple domains, and use a variety of criteria to do so. For example, infants can use relative luminance to categorize patterns of horizontal or vertical black bars after familiarization with arrays of light or dark shapes (3-4 months; Quinn, Burke, & Rush, 1993); head information to categorize pictures of animals (3 months; Quinn, Eimas, & Rosenkrantz, 1993); auditory statistical cues to categorize phonemes in the speech stream (6 months; Grieser & Kuhl, 1989); and visual spatiotemporal information to categorize event types (7.5 months; see Baillargeon & Wang, 2002, for a review).

Children’s remarkably early ability to detect patterns in their environment is not in dispute (Gogate & Hollich, 2010). However, the processes underpinning children’s categorization and the structure of the categories themselves are less clear-cut. The current chapter presents novel insights into the interplay between young children’s categorization and word learning from an embodied computational model.
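To make the associative account concrete, the following minimal Python sketch pairs noisy perceptual exemplars with category labels through a simple Hebbian rule, so that a novel exemplar evokes the label of its category. This is an illustrative simplification, not the chapter’s iCub architecture: the feature dimensions, learning rate, prototypes, and function names are assumptions chosen for brevity.

# Minimal Hebbian sketch of label-category learning.
# Illustrative only: the chapter's actual model is an embodied
# architecture (iCub; Morse et al., 2010); all parameters and
# names below are our own assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4    # e.g. coarse shape/color/texture/size dimensions
N_WORDS = 2       # two category labels
LEARNING_RATE = 0.1

# Hebbian weights from perceptual features to word units.
W = np.zeros((N_WORDS, N_FEATURES))

def hebbian_update(features, word_idx):
    """Strengthen feature-to-word associations for one co-occurrence."""
    global W
    label = np.zeros(N_WORDS)
    label[word_idx] = 1.0
    # Outer-product Hebbian rule: dW = eta * label * features^T
    W += LEARNING_RATE * np.outer(label, features)

def name_object(features):
    """Pick the word whose learned associations best match the input."""
    return int(np.argmax(W @ features))

# Two noisy perceptual categories around distinct prototypes.
prototypes = np.array([[1.0, 0.0, 0.8, 0.2],
                       [0.0, 1.0, 0.2, 0.8]])

# Training: each labeled exemplar strengthens its feature-word links.
for _ in range(50):
    word = rng.integers(N_WORDS)
    exemplar = prototypes[word] + rng.normal(0, 0.1, N_FEATURES)
    hebbian_update(exemplar, word)

# Test: a novel exemplar of category 0 should evoke word 0.
novel = prototypes[0] + rng.normal(0, 0.1, N_FEATURES)
print("Chosen word:", name_object(novel))  # expected: 0

Here, repeated label-object co-occurrences strengthen feature-to-word links, and generalization to novel exemplars falls out of the overlap between an exemplar’s features and the learned weights; no explicit category representation is required, in the spirit of the associative and dynamic systems accounts the chapter discusses.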
