Instead of analyzing our agent's activity through plain-text traces, as we did in the previous lessons, we are going to use the video recording presented in Video 55.
Importantly, Video 55 shows the emergence of active perception as the agent finds regularities of interaction and programs itself. The emergence of active perception radically differentiates the developmental approach from traditional AI approaches because it considers perception as a functional adaptation rather than as a mere interpretation of input data.
Video 55 also presents a summary explanation of our fully recursive self-programming algorithm, based upon Figure 42 and our work since then. The experiment was designed to demonstrate this recursivity by offering hierarchical regularities of interaction in the coupling between the agent and the environment.
Video 55: Demonstrating the emergence of active perception from regularities of interaction.
The agent's active perception comes from the fact that it learns to use sensorimotor interactions that have a slightly negative valence to identify contexts in which it can confidently enact interactions that have a strongly positive valence, and avoid enacting interactions that have a strongly negative valence.
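This trade-off can be sketched in a few lines of Python. The sketch below is not the MOOC's actual algorithm: the world model, the valence values, and the learning rule are all invented for illustration. It shows how paying the small cost of a "feel" interaction lets an agent learn when the strongly positive "step" interaction is safe to enact, and when to avoid the strongly negative "bump".

```python
import random

# Hypothetical valences for illustration (the lesson's own values may differ).
VALENCES = {
    "feel_empty": -1, "feel_wall": -1,  # feeling ahead has a small cost
    "step": 5, "bump": -10,             # moving: good if empty, bad if wall
    "turn": -2,                         # turning away has a small cost
}

class World:
    """Toy environment: only whether the cell ahead is a wall matters."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.wall_ahead = self.rng.random() < 0.5

    def feel(self):
        return "feel_wall" if self.wall_ahead else "feel_empty"

    def move(self):
        if self.wall_ahead:
            return "bump"               # the agent stays put
        self.wall_ahead = self.rng.random() < 0.5  # a new cell lies ahead
        return "step"

    def turn(self):
        self.wall_ahead = self.rng.random() < 0.5  # face a new direction
        return "turn"

def learn_and_run(steps=2000):
    world = World()
    # Outcome counts of "move", conditioned on the preceding percept.
    stats = {p: {"step": 0, "bump": 0} for p in ("feel_empty", "feel_wall")}
    total = 0
    for _ in range(steps):
        percept = world.feel()          # active perception: feel first
        total += VALENCES[percept]
        s = stats[percept]
        n = s["step"] + s["bump"]
        # Expected valence of moving after this percept (optimistic at first).
        expected = VALENCES["step"] if n == 0 else (
            (s["step"] * VALENCES["step"] + s["bump"] * VALENCES["bump"]) / n)
        if expected > VALENCES["turn"]:  # move only when it beats turning away
            outcome = world.move()
            s[outcome] += 1
            total += VALENCES[outcome]
        else:
            total += VALENCES[world.turn()]
    return total, stats
```

After one bump following "feel_wall", the expected valence of moving in that context drops below the cost of turning, so the agent thereafter moves only when it has felt an empty cell ahead: the slightly negative "feel" interaction has become a perception in the service of action.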
You can also play with the online demo on this page: Small Loop. It allows you to remodel the environment by clicking on the grid, changing walls into empty cells and vice versa. If you rerun the agent in a different environment, you will observe that it may learn different behaviors, because it may program itself differently depending on its individual experience. This training effect, made possible by self-programming, is also shown in this video.
At the end of Lesson 5, we invite participants interested in programming to reproduce this experiment with the algorithm provided on Page 58.