Implementation of DEvelopmentAl Learning (IDEAL) Course

Introduction to developmental cognitive architectures

On Page 53, we raised the issue of designing agents that can deal with more complex possibilities of interaction. We defined three levels of coupling: 1) cognitive coupling, 2) policy coupling, and 3) physical coupling.

So far, we have been working with a policy coupling that is discrete and small: for example, a set of 10 primitive interactions in Video 55. A legitimate question is: what would happen if the policy coupling contained a greater number of primitive interactions, or even if the policy coupling were modeled as a continuous space rather than a discrete set? More broadly, how does the algorithm scale up when the policy coupling gets more complex?
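To make the idea of a "discrete and small" policy coupling concrete, here is a minimal sketch (the experiment names, result names, and valences below are hypothetical illustrations, not the course's actual examples) of a policy coupling represented as a finite set of primitive interactions:

```python
# Minimal sketch of a discrete policy coupling: a finite set of primitive
# interactions, each pairing an experiment with a result and carrying a
# valence (the interaction's intrinsic value for the agent). All names and
# valences here are hypothetical, chosen only for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class PrimitiveInteraction:
    experiment: str  # e.g., "move_forward"
    result: str      # e.g., "bumped"
    valence: int     # how desirable enacting this interaction is


POLICY_COUPLING = {
    PrimitiveInteraction("move_forward", "moved", 5),
    PrimitiveInteraction("move_forward", "bumped", -10),
    PrimitiveInteraction("turn_left", "turned", -3),
    PrimitiveInteraction("turn_right", "turned", -3),
    PrimitiveInteraction("feel_front", "empty", -1),
    PrimitiveInteraction("feel_front", "wall", -1),
}

print(f"Policy coupling with {len(POLICY_COUPLING)} primitive interactions")
```

A continuous policy coupling would replace this finite set with, say, real-valued experiment and result spaces; the question above is whether the learning mechanism used so far could cope with that.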

The answer is simple: the algorithm that we have been using thus far can't scale up when the complexity of the policy coupling increases arbitrarily. This limitation arises because the time needed to discover and exploit regularities of interaction increases exponentially as the number of primitive interactions increases, and as the length of the regularities afforded by the policy coupling grows.
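To get a feel for the order of magnitude, here is a back-of-the-envelope count (under the simplifying assumption that a regularity is a sequence of primitive interactions; this illustrates the combinatorics only, it is not an analysis of the specific algorithm from the previous lessons):

```python
# Rough combinatorial illustration (assumption: a regularity is modeled as a
# sequence of primitive interactions). The space of candidate sequences grows
# exponentially with the sequence length.
def candidate_sequences(n_interactions: int, length: int) -> int:
    """Number of distinct sequences of the given length."""
    return n_interactions ** length


for n in (10, 50):
    for length in (2, 3, 5):
        print(f"{n} interactions, length {length}: "
              f"{candidate_sequences(n, length):,} candidate sequences")

# With 10 primitive interactions, sequences of length 5 already give
# 100,000 candidates; with 50 interactions, length 5 gives 312,500,000.
```

This is the sense in which the time to discover and exploit regularities explodes as both the number of primitive interactions and the length of the afforded regularities grow.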

To scale up towards "something more complex", we must examine what scaling problems we are facing and what "more complex results" we want to obtain.

Our demonstration in Video 53 shows that our algorithm can interact with the real world. However, it also shows that the behavior is still very rudimentary. This demonstration illustrates that our scaling problem consists of generating more sophisticated behaviors at the physical coupling level, but it does not involve dealing with an arbitrarily complex policy coupling.

Indeed, we must deal with the complexity of the physical coupling because this complexity is imposed by the real world. The complexity of the policy coupling, however, is not imposed. We are free to design the policy coupling to suit our needs. We need a well-designed policy coupling that makes it possible to learn smarter behaviors when the robot interacts with the real world at the physical coupling level.

Note that it is fortunate that we do not need to face an arbitrarily complex policy coupling because we have no reason to believe that an algorithm capable of scaling up with this complexity would even be possible. In contrast, the example of natural cognitive systems (animals and humans) illustrates that it is possible to deal with the complexity associated with the physical coupling. Even animals with modest computational resources (e.g., animals with small brains like insects and small vertebrates) exhibit relatively smart behaviors and complex learning in the real world.

On Page 42, we discussed why the agent should construct a coherent model of the world on the basis of regularities of interactions that it discovers. The agent must learn that regularities of interaction are caused by entities that exist in the world. Knowledge of entities that exist in the world is called ontological knowledge.

The terms ontological and ontology have the advantage of carrying with them centuries of discussions about the question "what is there in the world?", and about the possibility of even answering this question. See the Wikipedia article about Ontology (philosophy), or Willard Van Orman Quine's article that examines this question not without humor. The Wikipedia article about Ontology (information science) also gives an overview of how a designer can pre-encode ontological knowledge in a traditional AI system.

For the purposes of this course, let us take from these philosophical discussions that ontological knowledge is always pragmatic. That is to say, people or groups of people construct ontological knowledge, and this construction process is fundamentally influenced by their motivations. In contrast with traditional AI, designers of developmental agents do not encode the agent with presupposed ontological knowledge because, if they did so, the agent's ontological knowledge would not be grounded in the agent's experiences and motivations (it would not be the agent's knowledge but the designer's). Instead, the developmental AI approach aims at designing agents capable of constructing their own ontological knowledge on the basis of their experiences interacting with the world and with reference to their own motivations.

Let us also take from these philosophical discussions that entities of the world exist in three-dimensional real space. This leads us to the conclusion that developmental agents should not only be sensitive to temporal regularities (as our algorithms have been thus far), but also to spatial regularities; hence the key concept of Lesson 6:

Spatio-sequential regularities of interaction lead to ontological knowledge of the world.

To design agents that can construct ontological knowledge from spatio-sequential regularities of interaction, we draw inspiration from natural organisms. Natural organisms generally have inborn brain structures that encode space, preparing them to detect and learn spatio-sequential regularities of interaction. We design the policy coupling of our agents by taking lessons from these natural brain structures, which leads us to a biologically inspired developmental cognitive architecture.
