
Implementation of DEvelopmentAl Learning (IDEAL) Course


Radical interactionist robot

Figure 53 presents the architecture of a developmental robot based on RI, built upon the self-programming architecture presented in Figure 43.

Figure 53: Architecture of a developmental robot based on radical interactionism. The dashed arrows between the robot's program and the physical world do not represent data transfer but physical effects.

Figure 53 modifies Figure 43 to incorporate RI: now, the coupling between the agent and the environment (Line 2) uses primitive interactions instead of experiments and results. In the case of a simulated environment, the designer programs the environment (below Line 2) to process intended primitive interactions and to return enacted primitive interactions, as we will do in the implementation on Page 58.
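In a simulated setting, this coupling can be reduced to a single function that receives the intended primitive interaction and returns the enacted one. The following is only a minimal sketch of that idea, not the course's code: the Interaction and Environment names and the wall-bumping behavior are assumptions made for illustration.

```java
// Minimal sketch of the coupling at Line 2 (all names are hypothetical).
final class Interaction {
    private final String label;
    Interaction(String label) { this.label = label; }
    String getLabel() { return label; }
}

// The agent proposes an intended primitive interaction; the environment
// returns the primitive interaction that was actually enacted.
interface Environment {
    Interaction enact(Interaction intended);
}

// Toy simulated environment: trying to move forward in front of a wall
// yields the "bumpWall" interaction instead of the intended one.
final class SimulatedEnvironment implements Environment {
    private boolean wallAhead = true;

    @Override
    public Interaction enact(Interaction intended) {
        if ("moveForward".equals(intended.getLabel()) && wallAhead) {
            return new Interaction("bumpWall"); // enacted differs from intended
        }
        return intended; // the interaction was enacted as intended
    }
}
```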

If we develop an agent in a simulated environment and then want to use it to control a robot in the real world, we retain all of the program above Line 2 (since this is the agent) and modify the program below Line 2 to command the actuators and read the sensors of the robot. The program below Line 2 thus implements the interface between the agent and the physical world. Developers may feel confused when programming this interface because they are used to thinking of sensor data as the robot's perception, and of actuator commands as the robot's actions, rather than thinking in terms of primitive interactions as RI recommends.

To understand how to program the interface, we must reject the idea that the robot "receives data from the physical world"; from where and by what would such data be sent anyway? Instead, the physical world simply has an effect on sensors, which modifies variables in the program. Interpreting the value of these variables as "data received from the physical world" is a choice that we do not make in the developmental approach.

In the developmental approach, the designer decides which primitive interactions will be available to the robot and programs the interface to control the enaction of these primitive interactions. The interface commands the actuators and reads the sensors until an end condition is reached. When an end condition is reached, the enaction stops, and the interface computes the enacted interaction from the conditions encountered during the enaction, according to the designer's specifications. Video 53 illustrates this with an example.

Video 53: Developmental e-puck robot implementing our self-programming algorithm.
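To make this concrete, here is a hedged sketch of how such an interface might enact a "move forward" primitive interaction on a robot. It reuses the hypothetical Interaction and Environment types from the previous sketch; the Actuators and Sensors wrappers, the wheel speeds, the timeout, and the proximity threshold are all assumptions chosen for the example, not the e-puck's actual API.

```java
// Hypothetical hardware wrappers; not the actual robot API.
interface Actuators { void setWheelSpeeds(int left, int right); }
interface Sensors   { int frontProximity(); }

// Robot-side implementation of the same Environment coupling: only the
// program below Line 2 changes; the agent above Line 2 stays untouched.
final class RobotInterface implements Environment {
    private static final int BUMP_THRESHOLD = 2000; // arbitrary sensor value
    private static final long TIMEOUT_MS = 1000;    // arbitrary end condition

    private final Actuators actuators;
    private final Sensors sensors;

    RobotInterface(Actuators actuators, Sensors sensors) {
        this.actuators = actuators;
        this.sensors = sensors;
    }

    @Override
    public Interaction enact(Interaction intended) {
        if ("moveForward".equals(intended.getLabel())) {
            return enactMoveForward();
        }
        // Other primitive interactions (turn left, turn right, ...) would be
        // enacted in the same way, each with its own end conditions.
        return intended;
    }

    private Interaction enactMoveForward() {
        boolean bumped = false;
        long start = System.currentTimeMillis();

        actuators.setWheelSpeeds(500, 500); // start moving forward

        // Command the actuators and read the sensors until an end condition
        // is reached: either the allotted time elapses or a wall is detected.
        while (System.currentTimeMillis() - start < TIMEOUT_MS) {
            if (sensors.frontProximity() > BUMP_THRESHOLD) {
                bumped = true;
                break;
            }
        }
        actuators.setWheelSpeeds(0, 0); // stop: the enaction is over

        // The enacted interaction is computed from what happened during the
        // enaction, according to the designer's specifications.
        return bumped ? new Interaction("bumpWall")
                      : new Interaction("moveForward");
    }
}
```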

Now we have three levels of coupling: the physical coupling between the interface and the physical world (through the robot's sensors and actuators, the dashed arrows in Figure 53), the coupling between the agent and the interface (through intended and enacted primitive interactions, Line 2), and the coupling, within the agent, between the policy and the hierarchy of learned composite interactions inherited from the self-programming architecture of Figure 43.

The interface hides the complexity of the physical coupling from the agent's policy. The policy can thus control robots in an environment as complex as the physical world. The physical world in Video 53 was limited to an e-puck robot in a box, but it could have been arbitrarily more complex. For example, it could have included any kind of object; those objects would simply have appeared to the robot as walls, as long as it had no other way of interacting with them.

Still, the question of designing robots that can manage more complex interactions remains. In particular, this robot has no notion of space; it learns regularities in time but not in space. Lesson 6 will address the problem of managing more complex interactions in space and learning more sophisticated behaviors.

This example also raises again the question of how to design a developmental robot to perform a specific task. As introduced on Page 22, a developmental robot has no internal data generated as a direct function of the state of the world that the designer could exploit to program the robot to perform a specific task. Instead, a developmental robot can be thought of as an animal that has predefined tastes and drives, and that you must train if you want it to perform a specific task for you. The issue of training a developmental agent will be addressed on Page 55.

