My hope is that a common theory of agents will develop over time, with contributions by many different researchers, in the same way that theories in any scientific research area grow and improve over time.

Even if not framed explicitly in agentic terms, I would argue that cognitive science has been working on many aspects of this theory for some time now. A goal of the CAT research project is to collect existing work in this area, reframe it in agentic terms, and shape it into a consistent agent-focused theory that applies to all types of agents, including human agents and digital agents.

To get the ball rolling, here are some preliminary ideas for a set of Common Agent Theory principles and definitions. These come out of work I’ve done with Ken Webb and others involved with the HED research program (in particular Rob West and members of his lab). I’m sure these ideas will evolve, and some may to an extent repeat others’ work, but I’ll put them out here as a starting point:

Preliminary Ideas for Basic Principles and Definitions of Common Agent Theory

Agent-hood is not binary; rather, it is a continuum. Any object or entity can be more or less agent-like (agentic).

Agents may be organic or mineral, natural or synthetic. Agents don’t have to be alive, intelligent or aware to meet the criteria for agent-hood.

Agents have autonomy, identity and agency. The more of each of these that they have, the more agentic they are.

Agents exist in an environment. They are affected by, and affect this environment. Under at least some circumstances, they can independently act on and respond to the environment.

Agents have internal structures, mechanisms and processes. As a result, actions taken on an agent do not necessarily produce only direct results; they can be mediated and transformed by the agent’s internal processing.

The internal structures, mechanisms and processes of agents are what give them the potential to behave independently of external forces. Agents, by virtue of being agents, have the potential to act “by themselves”.

Objects or entities may move forward and back on the agent-hood continuum, even to the extent of transitioning from object-hood into agent-hood and back to object-hood again.

A transition into agent-hood can happen when environmental conditions trigger a change from a passive, static state into an active, dynamic state. Alternatively, the transition may occur due to the operation of internal processes.

Objects that have the potential to shift into agent-hood may also possess a structure that increases the probability that the object will be exposed to particular environments in which this shift will occur.

Agent-hood exists at a particular level of granularity. An object could be composed of agents, or contain agents, but still not be agentic itself.

Agents are dynamic, rather than static. Whether an object is considered dynamic or static depends on the level of granularity being considered. Agents are dynamic at, or adjacent to, the level of the agent (the level at which the agent is defined or recognized).

It’s possible to adopt an intentional stance (à la Dennett) towards any object or entity. Here, I’ll refer to the intentional stance also as the “agentic stance”. The more agent-hood an object or entity has, the more functional the intentional/agentic stance will be in successfully predicting or understanding the behaviour of that object or entity.

In adopting an intentional/agentic stance, agents can be viewed as having goals. The agent can be viewed as having goals regardless of whether it is viewed as literally possessing or containing goals from a physical/ontological perspective.

The default goal that can be applied to an agent is the goal of continuing to exist as an agent. Assigning this goal to an agent doesn’t mean that the agent actually contains or possesses the goal, or has awareness of the goal.

That a goal of continued existence can be usefully assigned to any agent is a result, or side effect, of its agentic properties. These properties both confer agent-hood and make “continued existence” a goal that can be assigned under the intentional/agentic stance.
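One way to make the continuum idea concrete is a scoring sketch. The dimension names (autonomy, identity, agency) come from the principles above, but the numeric scores and the averaging rule are purely illustrative assumptions of mine, not part of the theory:

```python
from dataclasses import dataclass


@dataclass
class Entity:
    """Any object or entity, scored on three agentic dimensions.

    Each score is in [0.0, 1.0]. The example scores and the simple
    averaging rule below are illustrative assumptions only.
    """
    name: str
    autonomy: float   # capacity to act independently of external forces
    identity: float   # coherence/separability from the environment
    agency: float     # capacity to act on and affect the environment

    def agenticness(self) -> float:
        """Position on the agent-hood continuum (0 = object-like, 1 = agent-like)."""
        return (self.autonomy + self.identity + self.agency) / 3


rock = Entity("rock", autonomy=0.0, identity=0.6, agency=0.05)
tornado = Entity("tornado", autonomy=0.4, identity=0.5, agency=0.8)
person = Entity("person", autonomy=0.9, identity=0.9, agency=0.9)

# Entities are ordered along the continuum rather than classified in a binary way.
ranked = sorted([rock, tornado, person], key=Entity.agenticness)
print([e.name for e in ranked])  # ['rock', 'tornado', 'person']
```

The point of the sketch is only that agent-hood can be treated as a graded, multi-dimensional quantity; any serious operationalization would need much more care in choosing dimensions and measures.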

Some Examples and Test Cases

When writing up these ideas for principles and definitions, I had a number of test cases in mind. These included: people, jellyfish, plants, single-celled organisms, seeds, viruses, cloth fabric, rocks, tornadoes, windmills, toasters, computer programs, (computer program) functions, simple digital agents (of the kind that might be programmed in a multi-agent simulation) and LLM Chatbots.
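As one illustration of what a “simple digital agent” might look like, here is a toy agent whose internal state mediates its responses, echoing the principle above that actions taken on an agent do not produce only direct results. The class name, the energy variable, and its update rule are hypothetical, invented for this sketch:

```python
class SimpleAgent:
    """A toy digital agent. Its internal state (energy) sits between
    stimulus and response, so actions on the agent have indirect effects."""

    def __init__(self, energy: float):
        self.energy = energy  # internal state (hypothetical)

    def step(self, stimulus: float) -> str:
        # The stimulus is transformed by internal structure rather than
        # passed straight through: each tick also costs some energy.
        self.energy += stimulus - 0.1
        return "act" if self.energy > 0.5 else "rest"


# Two agents receiving the same stimulus respond differently because
# their internal states differ.
a, b = SimpleAgent(energy=1.0), SimpleAgent(energy=0.2)
print(a.step(0.05), b.step(0.05))  # act rest
```

Even an agent this simple has an identity (its own state), is dynamic at its own level of granularity, and responds to its environment in a way that is mediated by internal processes.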

To take up and expand on one of these examples, cognitive science has a history of using tornadoes as an edge case when trying to understand concepts like identity and emergent properties. Composed of inorganic materials and developing relatively abruptly and spontaneously, a tornado, both individually and as a category, nonetheless has macro-level structures and behaviours that can be observed, defined and studied.

From an agentic perspective, a tornado exists as a coherent object over time and has an identity separable from its environment. It is dynamic, but not random. It has some persistent structure and arguably processes (in fact, it might be argued that it is in some sense more process than object, due to its extreme level of dynamism).

While it may seem a bit excessive to say that a tornado has existence as a goal, it can at least be said that a tornado does (or would) forcefully resist efforts to end its existence (to destroy it), simply by virtue of its size and force. As such, assigning it a goal of continued existence could be useful from a predictive (agentic) standpoint.

Taking all of this into account, and viewing agent-hood as a continuum, it does seem that a tornado is farther along on the agentic continuum than a rock, or a piece of cloth.

Similar analyses could be carried out for some of the other examples listed above, so I may return to this exercise as my thoughts evolve.