Implementation of DEvelopmentAl Learning (IDEAL) Course


Implementation of radical interactionist agents

Project 5 implements the algorithm described on Page 54 and Page 57 to let you run your first fully recursive RI agent.

You can use it with the EnvironmentMaze to reproduce behaviors similar to those in Video 55. The behaviors will be similar but not identical, because we have changed the algorithm since recording the video to make it more pedagogical. Small differences in the algorithm lead to different choices when the choice is arbitrary, which results in different learning.

Project 5: modified or new files since Project 4.

For Lesson 5, your only programming activity is to run Existence050 in the Maze environment and report the trace that you obtain. You should observe that Existence050 behaves similarly to the agent in the demonstration in Video 55.
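If you are unsure where to start, a minimal sketch of such a run is given below. The class names Existence050 and EnvironmentMaze come from Project 5, but the constructor signature, the step() method, and the number of decision cycles are assumptions: check them against the Main class provided with Project 5.

```java
// Minimal sketch of a run of Existence050 in the Maze environment (Lesson 5).
// The exact constructor and step() signatures are assumptions; Project 5's own
// Main class is the reference.
public class Main {
    public static void main(String[] args) {
        Existence existence = new Existence050(new EnvironmentMaze());

        // Run a fixed number of decision cycles and print the trace,
        // which is what you report for Lesson 5.
        for (int i = 0; i < 20; i++) {
            String stepTrace = existence.step();
            System.out.println(i + ": " + stepTrace);
        }
    }
}
```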

Project 5 can also be used to replicate the various demonstrations shown in Video 41, Video 53, and Video 55. However, you would have to re-implement the corresponding environments or robot, which is outside the scope of this course. The section below nonetheless provides a few pointers on how to proceed, should you wish to do so:

1. Video 41 was recorded using revision 238 of the Vacuum Environment Project and revision 313 of the Ernest Agent Project. To replicate these demonstrations, check out these two revisions in your favorite IDE (we used Eclipse). Include the Ernest project in the Java Build Path of the Vacuum project. Run the Main.java class of the Vacuum Project.

2. You can also replicate the demonstrations in Video 41 using Project 5 in the Vacuum Environment. In Project 5, create the EnvironmentVacuum238 class as an implementation of the Environment interface. Implement your EnvironmentVacuum238.enact(intendedInteraction) method so that it calls revision 238 of the Vacuum Project to execute the intended interaction in the Vacuum Environment and to return the enacted interaction (a skeletal sketch of such an Environment implementation is given after this list).

3. You can also replicate the demonstration in Video 53 using Project 5. Create the EnvironmentRobot class as an implementation of the Environment interface. Implement your EnvironmentRobot.enact(intendedInteraction) method to control the enaction of the primitive interactions by activating motors and reading sensors, and to return the actually enacted interaction. Recall the discussion on how to implement a robot interface on Page 53.

4. You can replicate the demonstration in Video 55 using revision 203 of the Vacuum Environment Project and revision 296 of the Ernest Agent Project.

5. You can also replicate the demonstration in Video 55 using the Ernest Project in the NetLogo environment. Follow the instructions provided on the Online Demonstration Page. You can also re-implement our NetLogo extension to use Project 5.
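For items 2 and 3 above, the common pattern is the same: wrap the external system (revision 238 of the Vacuum Environment, or your physical robot) behind Project 5's Environment interface. Below is a skeletal sketch for the robot case; the Environment interface and the enact(intendedInteraction) method come from the course, while the Interaction type name and the private helper methods are hypothetical placeholders for your own glue code.

```java
// Skeletal sketch of EnvironmentRobot (item 3). The Environment interface and the
// enact() method are described in the course; the Interaction type and the private
// helpers below are hypothetical placeholders for your own robot-specific code.
public class EnvironmentRobot implements Environment {

    @Override
    public Interaction enact(Interaction intendedInteraction) {
        // Activate the motors that perform the intended interaction.
        activateMotors(intendedInteraction);

        // Read the sensors to find out what actually happened.
        boolean succeeded = readSensors(intendedInteraction);

        // Return the actually enacted interaction: the intended one if it succeeded,
        // or the interaction that matches what the sensors reported.
        return succeeded ? intendedInteraction : actuallyEnacted(intendedInteraction);
    }

    private void activateMotors(Interaction intendedInteraction) {
        // Hypothetical helper: send the corresponding commands to the robot.
    }

    private boolean readSensors(Interaction intendedInteraction) {
        // Hypothetical helper: decide from sensor feedback whether the intended
        // interaction was enacted as such.
        return true;
    }

    private Interaction actuallyEnacted(Interaction intendedInteraction) {
        // Hypothetical helper: map sensor feedback to the interaction that was
        // actually enacted.
        return intendedInteraction;
    }
}
```

The EnvironmentVacuum238 class of item 2 would follow the same skeleton, except that enact() delegates to revision 238 of the Vacuum Project instead of to motors and sensors.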

Note that the Ernest project is a previous version of the algorithm that no longer complies with the notation and vocabulary used in this course. You may thus have difficulty finding your way around it, and we do not intend to maintain it in the future. Also be aware that the Vacuum Environment was developed in an ad hoc fashion over the years to support various experiments as they arose. It is not a standardized environment, and you may find it messy. We do not provide support for using it.
