INTRODUCTION
Human intelligence is acquired through a prolonged period of maturation and growth, during which a single fertilized egg first turns into an embryo, grows into a newborn baby, and eventually becomes an adult individual, which typically reproduces before growing old and dying. The developmental process is inherently robust and flexible, and biological organisms show an amazing ability during their development to devise adaptive strategies and solutions to cope with environmental changes and guarantee their survival. Because evolution has selected development as the process through which to realize some of the highest known forms of intelligence, it is plausible to assume that development is mechanistically crucial for emulating such intelligence in human-made artifacts.
BACKGROUND
The idea that development might be a good avenue for understanding and constructing cognition is not new. Turing (1950) already suggested that some kind of developmental approach might be a good strategy. In the context of robotics, many of the original ideas can be traced back to embodied artificial intelligence (embodied AI), a movement started by Rodney Brooks in the mid-1980s (Brooks et al., 1998), and to the notion of enaction (Varela et al., 1991), according to which cognitive structures emerge from recurrent sensorimotor patterns that enable action to be perceptually guided. Researchers of embodied AI believe that intelligence can only arise from the reciprocal interaction, across multiple time scales, between the brain and body of an agent and its environment. In this view, experience accumulates and common sense is acquired throughout life, which in turn supports more complex reasoning. This general bootstrapping of intelligence has been called "cognitive incrementalism" (Clark, 2001).
DEVELOPMENTAL ROBOTICS
Developmental robotics (also known as epigenetic or ontogenetic robotics) is a highly interdisciplinary subfield of robotics in which ideas from artificial intelligence, developmental psychology, neuroscience, and dynamical systems theory play a pivotal role in motivating the research (Asada et al., 2001; Lungarella et al., 2003; Weng et al., 2001; Zlatev & Balkenius, 2001). Developmental robotics aims to model the development of increasingly complex cognitive processes in natural and artificial systems and to understand how such processes emerge through physical and social interaction. The idea is to realize artificial cognitive systems not by simply programming them to solve a specific task, but rather by initiating and maintaining a developmental process during which the systems interact with their physical environments (e.g. through their bodies or tools) as well as with their social environments (e.g. with people or other robots). Cognition, after all, is the result of a process of self-organization (the spontaneous emergence of order) and co-development between a developing organism and its surrounding environment. Although some researchers use simulated environments and computational models (e.g. Mareschal et al., 2007), robots are often employed as testing platforms for theoretical models of the development of cognitive abilities - the rationale being that if a model is instantiated in a system interacting with the real world, a great deal can be learned about its strengths and potential flaws (Fig. 1). Unlike evolutionary robotics, which operates on phylogenetic time scales and populations of many individuals, developmental robotics capitalizes on "short" (ontogenetic) time scales and single individuals (or small groups of individuals).
Figure 1. Developmental robots. (a) iCub (b) Babybot (c) Infanoid
AREAS OF INTEREST
The spectrum of developmental robotics research can be roughly segmented into four primary areas of interest. Although instances may exist that fall into multiple categories, the suggested grouping should provide at least some order in the large spectrum of issues addressed by developmental roboticists.
Socially oriented interaction: This category includes research on robots that communicate or learn particular skills via social interaction with humans or other robots. Examples are imitation learning, communication and language acquisition, attention sharing, turn-taking behavior, and social regulation (Dautenhahn, 2007; Steels, 2006).
Non-social interaction: Studies on robots characterized by a direct and strong coupling between sensorimotor processes and the local environment (e.g. inanimate objects), but which do not interact with other robots or humans. Examples are visually-guided grasping and manipulation, tool-use, perceptual categorization, and navigation (Fitzpatrick et al., 2007; Nabeshima et al., 2006).
Agent-centered sensorimotor control: In these studies, robots are used to investigate the exploration of bodily capabilities, the effect of morphological changes on motor skill acquisition, as well as self-supervised learning schemes not linked to any functional goal. Examples include self-exploration, categorization of motor patterns, motor babbling, and learning to walk or crawl (Demiris & Meltzoff, 2007; Lungarella, 2004).
Mechanisms and principles: This category embraces research on principles, mechanisms, or processes thought to increase the adaptivity of a behaving system. Examples are: developmental and neural plasticity, mirror neurons, motivation, freezing and freeing of degrees of freedom, and synergies; characterization of complexity and emergence; study of the effects of adaptation and growth; and practical work on body construction or development (Arbib et al., 2007; Oudeyer et al., 2007; Lungarella & Sporns, 2006).
PRINCIPLES FOR DEVELOPMENTAL SYSTEMS
In contrast to traditional disciplines such as physics or mathematics, which rest on well-known basic principles, the fundamental principles governing the dynamics of developmental systems are unknown. Could there be laws, or even a theory, governing developmental systems? Although various attempts have been initiated (Asada et al., 2001; Brooks et al., 1998; Weng et al., 2001), it is fair to say that to date no such theory has emerged. Here, en route to such a theory, we point out a set of candidate principles. An approach based on principles is preferable for constructing intelligent autonomous systems, because it captures design ideas and heuristics in a concise and pertinent way, avoiding blind trial-and-error. Principles can be abstracted from biological systems, and this abstraction can take place at several levels, ranging from a "faithful" replication of biological mechanisms to a rather generic implementation of biological principles that leaves room for dynamics intrinsic to artifacts but not found in natural systems. In what follows we summarize five key principles revealed by observations of human development, which may be used to construct developmental robots.
The Value Principle
Observations: Value systems are neural structures that mediate value and saliency and are found in virtually all vertebrate species. They are necessary for an organism's behavioral adaptation to salient (meaningful) environmental cues. By linking behavior and neuroplasticity, value systems are essential for deciding what to do in a particular situation (Sporns, 2007).
Lessons for robotics: The action of value systems - through adaptive changes in sensorimotor connections and inputs - enables an embodied agent to learn action strategies without external supervision by increasing the likelihood that a "good" movement pattern recurs in the same behavioral context. Value systems may also be used to guide an exploratory process and hence allow a system to learn sensorimotor patterns more efficiently than purely random or systematic exploration would (Gomez & Eggenberger, 2007). By imposing constraints through value-dependent modulation of saliency, the search space can be considerably reduced. Examples of value systems in the brain include the dopaminergic, cholinergic, and noradrenergic systems; based on them, several models have been implemented and embedded in developmental robots (Sporns, 2007).
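To make the mechanism concrete, the following minimal Python sketch assumes a toy one-dimensional reaching task: a scalar value signal (how much an exploratory movement improved on the current policy) gates a perturbation-based weight update. The task, the value signal, and the update rule are illustrative assumptions, not a model from the cited literature; the point is merely that value-gated plasticity acting on exploratory variations suffices to improve behavior without an external teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

w = 0.0      # single sensorimotor weight: motor command = w * sensed target
eta = 0.1    # learning rate
sigma = 0.5  # exploration noise

for trial in range(2000):
    target = rng.uniform(-1.0, 1.0)   # sensed target position
    noise = rng.normal(scale=sigma)   # exploratory perturbation
    motor = w * target + noise        # movement actually performed
    # Value signal: positive when the exploratory movement ended up
    # closer to the target than the unperturbed policy would have.
    v = abs(w * target - target) - abs(motor - target)
    # Value-dependent modulation: the weight change is gated by v, so
    # "good" movement patterns become more likely to recur in the same
    # behavioral context, without any external supervision.
    w += eta * v * noise * np.sign(target)

print(f"learned sensorimotor mapping w = {w:.2f} (ideal: 1.0)")
```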
The Principle of Information Self-Structuring
Observations: Infants frequently engage in repetitive (seemingly dull) behavioral patterns: they look at objects, grasp them, stick them into their mouths, bang them on the floor, and so on. It is through such repeated interactions with the environment that human intelligence develops as children grow up (Smith & Breazeal, 2007; Smith & Gasser, 2005).
Lessons for robotics: The first important lesson is that information processing (neural coding) needs to be considered in the context of the embeddedness of the organism within its eco-niche. That is, robots and organisms are exposed to a barrage of sensory data shaped by sensorimotor interactions and morphology (Lungarella & Sporns, 2006). Information is not passively absorbed from the surrounding environment but is selected and shaped by actions on the environment. Second, information structure does not exist before the interaction occurs, but emerges only while the interaction is taking place. The absence of interaction would lead to a large amount of unstructured data and consequently to stronger requirements on neural coding, and - in the worst case - to the inability to learn. It follows that embodied interaction lies at the root of a powerful learning mechanism as it enables the creation of time-locked correlations and the discovery of higher-order regularities that transcend the individual sensory modalities. [Lungarella (2004; "principle of information self-structuring")].
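The emergence of information structure through interaction can be illustrated numerically. In the sketch below, two abstract Gaussian "sensor channels" stand in for real sensor streams (an assumption made to keep the example self-contained): under passive exposure the channels are statistically independent, whereas coordinated, motor-driven sampling induces the time-locked correlations described above. Lungarella and Sporns (2006) quantify this effect on real robot data using information-theoretic measures such as entropy and mutual information; plain correlation is used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000

# Passive condition: the two channels receive independent noise.
s1_passive = rng.normal(size=T)
s2_passive = rng.normal(size=T)

# Active condition: a shared motor command (think of a gaze shift
# toward a grasped object) drives both channels plus private noise,
# creating time-locked correlations across modalities.
motor = rng.normal(size=T)
s1_active = motor + 0.5 * rng.normal(size=T)
s2_active = motor + 0.5 * rng.normal(size=T)

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(f"passive |r| = {abs(correlation(s1_passive, s2_passive)):.3f}")  # near 0
print(f"active  |r| = {abs(correlation(s1_active, s2_active)):.3f}")    # near 0.8
```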
The Principle of Incremental Complexity
Observations: Infants' early experiences are strongly constrained by the immaturity of their sensory, motor, and neural systems. Such early constraints, which at first appear to be an inadequacy, are in fact of advantage, because they effectively decrease the "information overload" that otherwise would overwhelm the infant (Bjorklund & Green, 1992).
Lessons for robotics: In order for an organism - natural or artificial - to learn to control its own complex brain-body system, it might be a good strategy to start simple and gradually build on top of acquired abilities. The well-timed and gradual co-development of body morphology and neural system provides an incremental approach to dealing with a complex and unpredictable world. Early "morphological constraints" and "cognitive limitations" can lead to more adaptive systems, as they allow exploiting the role that experience plays in shaping the "cognitive" architecture. If an organism were to begin by using its full complexity, it would never be able to learn anything (Gomez et al., 2004). It follows that designers should not try to "code" a full-fledged, ready-to-be-used intelligence module directly into an artificial system. Instead, the system should be able to discover on its own the most effective ways of assembling low-level components into novel solutions [Lungarella (2004; "starting simple"); Pfeifer & Bongard (2007; "incremental process principle")].
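The benefit of freezing and gradually freeing degrees of freedom can be sketched with a toy example: a three-link planar arm learns to reach a target by simple hill-climbing, with distal joints initially frozen and freed one stage at a time. The link lengths, target, and staged schedule below are illustrative assumptions rather than parameters from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

lengths = np.array([1.0, 0.8, 0.6])   # link lengths (illustrative)
target = np.array([1.2, 1.2])

def hand(angles):
    """Forward kinematics of a 3-link planar arm."""
    a = np.cumsum(angles)             # absolute link orientations
    return np.array([np.sum(lengths * np.cos(a)),
                     np.sum(lengths * np.sin(a))])

def error(angles):
    return float(np.linalg.norm(hand(angles) - target))

best = np.zeros(3)                    # all joints start frozen at zero
for free in (1, 2, 3):                # free one more joint per stage
    for _ in range(3000):             # simple hill-climbing exploration
        cand = best.copy()
        cand[:free] += rng.normal(scale=0.2, size=free)
        if error(cand) < error(best):
            best = cand
    print(f"{free} free joint(s): reaching error = {error(best):.3f}")
```

Each stage explores a lower-dimensional search space and hands its solution to the next, so later stages refine rather than start from scratch.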
The Principle of Interactive Emergence
Observations: Development is not determined by innate mechanisms alone (in other words: not everything should be pre-programmed). Cognitive structure, for instance, is largely dependent on the interaction history of the developing system with the environment in which it is embedded (Hendriks-Jansen, 1996).
Lessons for robotics: In traditional engineering the designer imposes ("hard-wires") the structure of the controller and the controlled system. Designers of adaptive robots, however, should avoid implementing the robot's control structure according to their own understanding of the robot's physics, and should instead endow the robot with the means to acquire its own understanding through self-exploration and interaction with the environment. Systems designed for emergence tend to be more adaptive with respect to uncertainties and perturbations. The ability to maintain performance in the face of changes (such as growth or task modifications) is a long-recognized property of living systems. Such robustness is achieved through a host of mechanisms: feedback, modularity, redundancy, structural stability, and plasticity [Dautenhahn (2007; "interactive emergence"); Hendriks-Jansen (1996; "interactive emergence"); Prince et al. (2005; "ongoing emergence")].
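As a minimal sketch of acquiring one's own model rather than having it hard-wired, the fragment below lets a "robot" babble random motor commands, record the sensory outcomes, and fit its own forward model by least squares, which it can then invert for goal-directed action. The hidden linear plant is an assumption chosen to keep the example short; a real robot faces a nonlinear and changing plant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden plant (unknown to the robot): an arbitrary linear map from
# 2-D motor commands to 2-D sensory outcomes.
A_true = np.array([[0.9, 0.2],
                   [-0.3, 1.1]])

# Motor babbling: issue random commands and record the outcomes.
commands = rng.normal(size=(200, 2))
outcomes = commands @ A_true.T + 0.01 * rng.normal(size=(200, 2))

# Fit the robot's own forward model by least squares:
# outcome = command @ A_hat.T (approximately)
X, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)
A_hat = X.T

# The self-acquired model can then be inverted for goal-directed action.
goal = np.array([0.5, -0.2])
command = np.linalg.solve(A_hat, goal)
print("goal:", goal, " reached:", A_true @ command)
```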
The Principle of Cognitive Scaffolding
Observations: Development takes place among conspecifics with similar internal systems and similar external bodies (Smith & Breazeal, 2007). Human infants, for instance, are endowed from an early age with the means to engage in simple, but nevertheless crucial social interactions, e.g. they show preferences for human faces, smell, and speech, and they imitate protruding tongues, smiles, and other facial expressions (Demiris & Meltzoff, 2007).
Lessons for robotics: Social interaction bears many potential advantages for developmental robots: (a) it increases the system's behavioral diversity through mimicry and imitation (Demiris & Meltzoff, 2007); (b) it supports the emergence of language and communication, and symbol grounding (Steels, 2006); and (c) it helps structure the robot's environment by simplifying and speeding up the learning of tasks and the acquisition of skills. Scaffolding is often employed by parents and caretakers (intentionally or not) to support, shape, and guide the development of infants. Similarly, the social world of the robot should be prepared to teach the robot progressively novel and more complex tasks without overwhelming its artificial cognitive structure [Lungarella (2004; "social interaction principle"); Mareschal et al. (2007; "ensocialment"); Smith & Breazeal (2007; "coupling to intelligent others")].
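A caricature of scaffolding as a curriculum is sketched below, assuming a hypothetical "caretaker" that only poses tasks slightly beyond the learner's current competence and widens the task range as the learner succeeds. All quantities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

w = 0.0              # learner's single parameter: response = w * stimulus
competence = 0.2     # range of tasks the caretaker currently poses

for episode in range(500):
    x = rng.uniform(-competence, competence)  # scaffolded task
    y = w * x + rng.normal(scale=0.05)        # learner's noisy response
    err = x - y                               # caretaker's target is y = x
    w += 0.1 * err * x                        # simple delta-rule update
    if abs(err) < 0.05:                       # success: widen the task range
        competence = min(1.0, competence * 1.02)

print(f"final task range: +/-{competence:.2f}, learned parameter: {w:.2f}")
```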
FUTURE TRENDS
The further success of developmental robotics will depend on the extent to which theorists and experimentalists will be able to identify universal principles spanning the multiple levels at which developmental systems operate. Here, we briefly indicate some "hot" issues that need to be tackled en route to a theory of developmental systems.
Semiotics: It is necessary to address the issue of how developmental robots (and embodied agents in general) can attribute meaning to symbols and construct semiotic systems. A promising approach, explored under the label of "semiotic dynamics", is that such semiotic systems and the associated information structure are continuously invented and negotiated by groups of people or agents, and are used for communication and information organization (Steels, 2006).
Core knowledge: An organism cannot develop without some built-in ability. If all abilities are built in, however, the organism does not develop either. It will therefore be important to understand with what sort of core knowledge and explorative behaviors a developmental system has to be endowed so that it can autonomously develop novel skills. One of the greatest challenges will be to identify such core abilities and to understand how they interact during development in building basic skills (Spelke, 2000).
Core motives: It is necessary to conduct research on general capacities such as creativity, curiosity, motivation, action selection, and prediction (i.e. the ability to foresee the consequences of actions). Ideally, no tasks should be pre-specified to the robot; it should only be provided with an internal abstract reward function and a set of basic motivational (or emotional) drives that push it to continuously master new know-how and skills (Lewis, 2000; Oudeyer et al., 2007).
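One concrete form such an internal reward could take is learning progress, loosely in the spirit of intrinsically motivated systems such as Oudeyer et al. (2007), though the sketch below is not a reimplementation of their architecture. Three toy "activities" (trivial, learnable, and unlearnable; all parameters invented for illustration) are offered to an agent that prefers the activity where its prediction error is currently dropping; it should come to concentrate on the learnable one.

```python
import numpy as np

rng = np.random.default_rng(0)

true_w = [1.0, 2.0, 0.0]     # hidden regularity of each activity
noise = [0.0, 0.05, 1.0]     # trivial, learnable, unlearnable
w_hat = [0.0, 0.0, 0.0]      # the agent's predictors
err_hist = [[], [], []]
visits = [0, 0, 0]

def progress(h):
    # Learning progress: drop in mean prediction error between the
    # older and the newer half of a short sliding window.
    if len(h) < 4:
        return 1.0           # optimistic start: sample unexplored activities
    recent = h[-20:]
    k = len(recent) // 2
    return float(np.mean(recent[:k]) - np.mean(recent[k:]))

for t in range(3000):
    scores = [progress(h) for h in err_hist]
    r = int(np.argmax(scores)) if rng.random() > 0.1 else int(rng.integers(3))
    x = rng.uniform(-1.0, 1.0)
    y = true_w[r] * x + rng.normal(scale=noise[r])   # observed outcome
    err = y - w_hat[r] * x
    w_hat[r] += 0.1 * err * x                        # online predictor update
    err_hist[r].append(abs(err))
    visits[r] += 1

print("visits (trivial, learnable, unlearnable):", visits)
```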
Self-exploration: Another important challenge is that of self-exploration or self-programming (Bongard et al., 2006). Control theory assumes that target values and states are initially provided by the system's designer, whereas in biology such targets are created and revised continuously by the system itself. This spontaneous "self-determined evolution" or "autonomous development" is beyond the scope of current control theory and needs to be addressed in future research.
Learning causality: In a natural setting, no teacher can possibly provide a detailed learning signal and sufficient training data. Mechanisms will have to be created to characterize learning in an "ecological context" and to enable the developing agent to collect relevant learning material on its own. One significant future avenue will be to endow systems with the ability to recognize progressively longer chains of cause and effect (Chater et al., 2006).
Growth: As mentioned in the introduction, intelligence is acquired through a process of self-assembly, growth, and maturation. It will be important to study how physical growth, change of shape and body composition, as well as material properties of sensors and actuators affect and guide the emergence of cognition. This will allow connecting developmental robotics to computational developmental biology (Gomez & Eggenberger, 2007; Kumar & Bentley, 2003).
CONCLUSION
The study of intelligent systems raises many fundamental, but also very difficult, questions. Can machines think or feel? Can they autonomously acquire novel skills? Can the interaction of body, brain, and environment be exploited to discover novel and creative solutions to problems? Developmental robotics may be an approach to exploring such long-standing issues. At this point, the field is bubbling with activity. Its popularity is partly due to recent technological advances which have allowed the design of robots whose "kinematic complexity" is comparable to that of humans (Fig. 1). The success of developmental robotics will ultimately depend on whether it will be possible to crystallize its central assumptions into a theory. While much additional work is surely needed to arrive at, or even approach, a general theory of intelligence, the beginnings of a new synthesis are on the horizon. Perhaps, finally, we will come closer to understanding and building (growing) human-like intelligence. Exciting times are ahead of us.
KEY TERMS
Adaptation: Refers to particular adjustments that organisms undergo to cope with environmental and morphological changes. In biology one can distinguish four types of adaptation: evolutionary, physiological, sensory, and learning.
Bootstrapping: Designates the process of starting with a minimal set of functions and building increasingly more functionality in a step by step manner on top of structures already present in the system.
Degrees of freedom problem: The problem of learning how to control a system with a very large number of degrees of freedom (also known as Bernstein's problem).
Embodiment: Refers to the fact that intelligence requires a body, and cannot merely exist in the form of an abstract algorithm.
Emergence: A process where phenomena at a certain level arise from interactions at lower levels. The term is sometimes used to denote a property of a system not contained in any one of its parts.
Scaffolding: Encompasses all kinds of external support and aids that simplify the learning of tasks and the acquisition of new skills.
Semiotic Dynamics: The field that studies how meaningful symbolic structures originate, spread, and evolve over time within populations, combining linguistics and cognitive science with theoretical tools from complex systems research and computer science.