What is an "agent"?
Any entity in the world can be seen as an agent. In fact, there are infinitely many ways to describe a given entity as an agent.
The agentic frame, similar to the “intentional stance”, involves conceptualizing an entity as having (a) a set of goals or values, (b) a set of available actions, and (c) a decision procedure for choosing actions, such that it is in some sense “trying” to achieve its goals or maximize its values, by choosing among its actions. Sometimes we also refer to the decision procedure as involving (d) methods of “sensing” the world, which might be used to update (e) a set of “beliefs” about the world.
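For concreteness, here is one way to picture those pieces as a data structure. This is just an illustrative sketch (the names and the trivial decision rule are arbitrary choices, not a standard formalism):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AgenticFrame:
    """A sketch of the agentic frame's components (a)-(e); purely illustrative."""
    goals: list[str]                                        # (a) what it's "trying" to achieve
    actions: list[str]                                      # (b) what it can do
    beliefs: dict[str, Any] = field(default_factory=dict)   # (e) its current picture of the world

    def sense(self, observation: dict[str, Any]) -> None:
        # (d) sensing: fold new observations into the beliefs
        self.beliefs.update(observation)

    def score(self, action: str) -> float:
        # How well an action serves the goals, given the beliefs.
        # A stub here; any real entity would have something richer.
        return 0.0

    def decide(self) -> str:
        # (c) decision procedure: choose the action that best serves the goals
        return max(self.actions, key=self.score)

# Viewing a dog through this frame:
dog = AgenticFrame(goals=["get fed", "avoid pain"], actions=["beg", "yelp", "nap"])
dog.sense({"food bowl": "empty"})
dog.decide()  # with the stub scoring, this just returns the first action
```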
Any entity can be seen as an agent: Tell me how some entity behaves, and I can always conceptualize that behavior as perfectly enacting the entity’s goals. But there are other ways of seeing entities: A common alternative is to see an entity reductively, as a composition of simpler causally-connected pieces. And another is to see an entity as a simple stimulus/response interface (a “policy”): “If X happens, the entity will do Y”. For example, you could see a dog as an agent with goals and actions, or as a collection of organs, cells, molecules, or atoms, or as a thing that will yelp if you accidentally step on its paw.
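The policy frame, for comparison, needs no goals, beliefs, or "trying" at all; a toy dog-as-policy might look like this (again, purely illustrative):

```python
# The same dog seen as a policy: a bare stimulus -> response mapping,
# with no goals or beliefs anywhere in the description.
dog_policy = {
    "paw stepped on": "yelp",
    "food bowl filled": "eat",
    "doorbell rings": "bark",
}

def respond(stimulus: str) -> str:
    # "If X happens, the entity will do Y."
    return dog_policy.get(stimulus, "carry on as before")
```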
The main reason we often factor entities as agents is that many important entities in the world are much easier to understand and predict by reference to a set of goals and actions, than by thinking about (say) the behavior of their components. Under most ordinary circumstances, if you want to predict the behavior of a dog reasonably well, you’ll have a much easier time understanding it as an agent than as a collection of organs. On the other hand, in exceptional circumstances, like if your dog is sick and you want to treat it, the reductive understanding becomes much more effective, and you might turn to someone who specializes in understanding and interacting with dogs reductively (i.e. a veterinarian).
People and dogs are fairly extreme cases, though: They are especially amenable to being predicted from agentic descriptions, compared to trying to predict them from their components. The behavior of a banana in the grocery store, by contrast, is usually easier to understand as a relatively inert composition of parts (its peel and its flesh). To think about the banana as having goals and actions (or, more often, as being a part of a system with goals and actions) is useful in some cases, but if you’re trying to predict the banana’s near-term behavior, and considering how to interact with the banana, the purely-reductive frame is just as good, and simpler.
There’s a spectrum between humans and fundamental particles, here. In order of (roughly) decreasing “agency”, i.e. relative usefulness of applying the agentic frame, we might list adult humans, dogs, fish, beetles, jellyfish, trees, amoebae, viruses, rocks, water molecules, and electrons. There are lots of things that are confusing to try to place in this list, though: Where does the McDonald's corporation rank? The United States? What about AlphaGo? Google Chrome? Your thermostat? A laptop? A dreamed version of a friend? A character in a novel?
It depends on what kinds of predictions you want to make, and what kind of knowledge about the entity you start with. As one learns more about an entity using alternative frames, the agentic frame may become relatively less useful. If you perfectly understand how all the parts of a tree work, and can easily think about how they act, there’s no need to factor the tree into goals and actions. Like in the case of the banana, the tree’s goals and actions become superfluous as descriptions.
It’s interesting that the most fundamental things in the universe, the stuff of quantum field theory, at least do not appear to be especially well-described via the agentic frame. So, why does the agentic frame crop up so much?
I think the basic reason is selection pressure. There are some properties of systems that result in those systems being comparatively more common than others. A relatively inert example is the solar system: You might ask why everything in the solar system is in a relatively stable and periodic orbit. One reason is that, essentially by definition, everything unstable tends to decay, either into a stable orbit or off to infinity. Stable things are self-perpetuating; in some sense “stable” is just another word for “self-perpetuating”. Life is a more interesting example: Earth is teeming with systems that have various proliferative properties, self-perpetuating properties, world-modeling properties, and so on. It's commonly accepted that these exist, in the way they do, because they were selected for.
There are lots of agentic systems on Earth, and I think this is because agentic systems, i.e. those that are easier to describe and predict as goals + actions + trying, are selected for in the sense that they (at least in some cases that are common 'round these parts) proliferate more than similar but less-agentic systems.
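As a toy illustration of that argument (it assumes the advantage rather than demonstrating it), here's a sketch in which replicators that steer toward resources are given a slightly higher chance of copying themselves than otherwise-similar ones that act at random; the "agentic" share of the population then climbs:

```python
import random

def simulate(steps: int = 50, seed: int = 0) -> float:
    """Toy selection model: returns the final fraction of 'agentic' replicators."""
    rng = random.Random(seed)
    population = ["agentic"] * 50 + ["inert"] * 50   # start 50/50

    for _ in range(steps):
        next_gen = []
        for kind in population:
            next_gen.append(kind)                    # the original persists
            # Assumed replication probabilities: goal-directed systems are
            # taken to find resources a bit more reliably.
            if rng.random() < (0.55 if kind == "agentic" else 0.45):
                next_gen.append(kind)                # ...and sometimes leaves a copy
        # Finite resources: cap the population by random culling.
        population = rng.sample(next_gen, 100)
    return population.count("agentic") / len(population)

print(simulate())  # typically well above the starting 0.5
```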
Sam FM
in reply to Ben Weinstein-Raun:
I like this model of agency, but I'm not sure I understand the selection pressure conclusion.
I'm understanding this definition of agency as a matter of perspective, rather than an objective quality. A system is "agentic" if it is easier to describe in terms of goals rather than in terms of low level parts. Weather used to feel more agentic (gods of rain and lightning) but now feels more mechanical (modern weather forecasters using instruments and mathematical models).
Humans are so complicated that they're rarely considered from a non-agentic perspective. Human biologists use non-agentic models, but even then, they're typically only building a mechanical model of some small subsystem (cardiology, immunology, etc.).
So it follows that from the perspective of someone with limited understanding, a complicated universe would have more "agentic" systems than a simple universe. But I'm not sure what selection pressure exists to push the universe towards being complicated.
Ben Weinstein-Raun
in reply to Sam FM:
I wouldn't guess that weather was ever actually well-modeled as agentic; humans often see faces in random noise, and my guess is that this also happens with agency: since so many relevant things in the EEA are agentic, it's a decent prior to have for a given phenomenon.
I don't think the primary thing going on here is how complicated a system is, but rather the relative usefulness of different frames. Weather wasn't well-modeled as agentic, but also wasn't well-modeled as a policy, nor reductively, until people understood more about air pressure and the water cycle. And in lieu of an actually-good model, people fell back on the one with a larger evolutionary prior.
A system can be very complicated without being actually agentic; e.g. the behavior of a randomly selected computer program will be hard to understand from any frame, but I think reductionist or policy-based frames will work better than agentic ones.
Sam FM
in reply to Ben Weinstein-Raun:
Right, I agree "complexity" isn't quite the same as "easy to reduce to policy". An if-then tree can be very large and complicated, while still having a structure that is easy to reduce to deterministic policies.
I think "legibility" is closer to the key term here. Many computer systems are intended to feel legible and deterministic to the user, so they generally don't feel agentic. When they behave unpredictably, we ascribe them agency with phrases like, "Ugh, my laptop is acting up again."
In cases where algorithms are intentionally hidden, they're often easier to understand in terms of agency. Consider the Pac-Man ghosts. They generally chase after Pac-Man, but get scared and run away from him when he becomes powerful. If you don't know the simple algorithms they follow, it's much easier to understand the game in terms of character motivations. (Fun fact: the different colors even have distinct "personalities" to their algorithms. I always had a soft spot for Clyde.)
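For concreteness, a rough sketch of the kind of hidden rule I mean (not the actual arcade logic, just an illustration of how simple a policy can be while still reading as "chasing" and "fleeing"):

```python
def ghost_step(ghost: tuple[int, int], pacman: tuple[int, int], powered: bool) -> tuple[int, int]:
    """Move the ghost one tile toward Pac-Man, or away from him if he's powered up."""
    gx, gy = ghost
    px, py = pacman
    direction = -1 if powered else 1          # flee when Pac-Man is powered
    if abs(px - gx) >= abs(py - gy):          # close the larger gap first
        return (gx + direction * (1 if px >= gx else -1), gy)
    return (gx, gy + direction * (1 if py >= gy else -1))
```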
In the natural world, I expect complicated systems to be illegible-by-default, unless they have some selective pressure to optimize for legibility.
Sam FM
in reply to Ben Weinstein-Raun:
Okay, so if these complex systems like weather and biology are theoretically best described by some ideal set of policies, then would these complex systems, even the stable self-replicating ones, be considered non-agentic?
I'm struggling to see the fundamental difference between a fire that hungrily eats all the wood in the pile, and me, a person that hungrily eats all the snacks in the pantry. Unless we're considering some ineffable free will, the main difference I see is that my systems are much more complex and illegible, making it hard to map out the full causal chain from my biochemistry and psychology to my hands reaching for a bag of chips.
Combustion is much simpler, but from some all-knowing perspective, they're both self-sustaining chain reactions of chemistry.
renshin
in reply to Ben Weinstein-Raun:
This accords with what is taught at MAPLE.
The 'building up' of complexity (towards more 'agency') is due to selection pressure. More intelligent entities with more stable goals and drives toward those goals reproduce faster and proliferate.
These stabler agents tend to collect / eat less 'agentic' stuff (like bananas), churning matter into more complex systems. We are matter-converters, and we convert matter into complexity, in a sense. This is what we would call a function of INT (intelligence).
I think INT and agency kind of go hand in hand.
The Buddhist take on this is that all of this is driven by clinging to Self. Clinging to some idea that "I am" or "I have". This accumulation ends up creating more complex Selves. INT is used to self-preserve through more and more complexity. There is this sense that more complexity is more stable.
What we discover is that it's not actually more stable, it's more deluded. More complexity is correlated with greater delusion. Because we just get better at tricking ourselves into believing we're 'stable entities', but we're actually not. It just looks like that from a certain perspective.
It's hard to explain all this here, but if you're interested in a deeper conversation, I am open to it! Appreciate your write-up, and it seems basically accurate to me.