

What is an "agent"?


in reply to Ben Weinstein-Raun

I like this model of agency, but I'm not sure I understand the selection pressure conclusion.

I'm understanding this definition of agency as a matter of perspective, rather than an objective quality. A system is "agentic" if it is easier to describe in terms of goals rather than in terms of low level parts. Weather used to feel more agentic (gods of rain and lightning) but now feels more mechanical (modern weather forecasters using instruments and mathematical models).

Humans are so complicated that they're rarely considered from a non-agentic perspective. Human biologists use non-agentic models, but even then they're typically only building a mechanical model of some small subsystem (cardiology, immunology, etc.).

So it follows that from the perspective of someone with limited understanding, a complicated universe would have more "agentic" systems than a simple universe. But I'm not sure what selection pressure exists to push the universe towards being complicated.

in reply to Sam FM

I wouldn't guess that weather was ever actually well-modeled as agentic; humans often see faces in random noise, and my guess is that this also happens with agency: since so many relevant things in the EEA (environment of evolutionary adaptedness) are agentic, it's a decent prior to have for a given phenomenon.

I don't think the primary thing going on here is how complicated a system is, but rather the relative usefulness of different frames. Weather wasn't well-modeled as agentic, but also wasn't well-modeled as a policy, nor reductively, until people understood more about air pressure and the water cycle. And in lieu of an actually-good model, people fell back on the one with a larger evolutionary prior.

A system can be very complicated without being actually agentic; e.g. the behavior of a randomly selected computer program will be hard to understand from any frame, but I think reductionist or policy-based frames will work better than agentic ones.

in reply to Ben Weinstein-Raun

My sense is that agentically-shaped things are comparatively unlikely to be found in random noise, and you actually need some kind of process of selection to end up with a high probability of things that are better-modeled as agentic than as policies or compositions.
in reply to Ben Weinstein-Raun

(I'm acting like there's a trichotomy between agents, policies, and compositions here, but I don't think it really is one; probably there are tons of other frames you can use on a similar abstraction level)
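
To make that trichotomy a bit more concrete, here's a toy sketch (hypothetical thermostat, all details made up) of the same system described in each of the three frames; they make the same predictions, and the question is just which description is cheapest and most useful:

```python
# Toy illustration of three modeling frames for the same thermostat.
# Everything here is invented for the example.

# Agent frame: assume a goal and ask which action best serves it.
def agent_frame(temp, target=20.0):
    actions = {"heat_on": temp + 1.0, "heat_off": temp - 1.0}
    # "It wants the room near 20C, so it picks whichever action gets closer."
    return min(actions, key=lambda a: abs(actions[a] - target))

# Policy frame: a bare rule from conditions to actions, no goal needed.
def policy_frame(temp):
    return "heat_on" if temp < 20.0 else "heat_off"

# Composition frame: simulate the parts (a bimetallic strip closing a circuit).
def composition_frame(temp):
    strip_bent = temp < 20.0      # the strip bends when it's cold
    circuit_closed = strip_bent   # the bent strip touches the contact
    return "heat_on" if circuit_closed else "heat_off"

for t in (15.0, 25.0):
    print(t, agent_frame(t), policy_frame(t), composition_frame(t))
```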

in reply to Sam FM

I do think complicated systems are often illegible, but I don't think this implies that the agentic frame is a better fit for them than others; if anything I think they're often better understood with tools like statistical mechanics, which is more of a "policy"-like frame than an "agent"-like frame. The agentic frame doesn't actually help you to understand or predict your computer's "acting up"; it does help you predict the Pacman ghosts, but mainly because they're designed to seem agentic. I think the tendency to see the buggy computer as agentic is another case of "seeing a human face in the leaves", i.e. it's kind of a (useful-in-the-EEA) bias we have in favor of agency-as-the-default-frame, that the buggy computer's user falls back on when they have ~ no good models at all.
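
(Rough sketch of what I mean, and definitely not the real Pac-Man code: a "ghost" that just steps toward the player each tick reads as "it's hunting me", even though the policy description is only a couple of lines.)

```python
# Toy "ghost" chase rule, illustrative only.
# It looks like the ghost "wants" to catch the player, but the policy
# frame captures its behavior completely in a few lines.

def ghost_step(ghost_xy, player_xy):
    gx, gy = ghost_xy
    px, py = player_xy
    # Move one tile along whichever axis closes the gap most.
    if abs(px - gx) >= abs(py - gy):
        gx += 1 if px > gx else -1 if px < gx else 0
    else:
        gy += 1 if py > gy else -1 if py < gy else 0
    return (gx, gy)

print(ghost_step((0, 0), (3, 1)))  # -> (1, 0): steps toward the player
```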
in reply to Ben Weinstein-Raun

Okay, so if complex systems like weather and biology are theoretically best described by some ideal set of policies, would these complex systems, even the stable, self-replicating ones, be considered non-agentic?

I'm struggling to see the fundamental difference between a fire that hungrily eats all the wood in the pile and me, a person who hungrily eats all the snacks in the pantry. Unless we're invoking some ineffable free will, the main difference I see is that my systems are much more complex and illegible, making it hard to map out the full causal chain from the biochemistry underlying my psychology to my hands reaching for a bag of chips.

Combustion is much simpler, but from some all-knowing perspective, they're both self-sustaining chemical chain reactions.

in reply to Sam FM

I think fire is actually fairly agentic; e.g. modeling fire like you suggest will work better than modeling "will it rain tomorrow" as an agentic process in the way pre-scientific humans seemed to. Like, rain dances and goat sacrifices don't work, but giving the fire more wood does work.
in reply to Ben Weinstein-Raun

I do think the fire is also much, much less agentic than the person eating snacks, and not primarily because of the complexity difference, but because it's much easier to understand the fire as a simple policy (i.e. describing a simple rule for exactly what it will do given a set of conditions) or as a system made of smaller parts (i.e. describing the chemical process). In some sense this is related to low complexity, but I think there are things that are easy to describe this way while still being very complicated, like a heavily obfuscated computer program that does something simple.
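
Here's roughly what I mean by a "simple policy" for the fire: a toy rule with made-up numbers, just to show how little description the fire actually needs.

```python
# Minimal sketch of fire as a policy (toy numbers, assumed for illustration):
# a short rule from conditions to behavior, with no goal attributed to it.

def fire_policy(fuel_kg, oxygen, temperature_c):
    """Rule: fire persists iff it has fuel, air, and enough heat."""
    burning = fuel_kg > 0 and oxygen and temperature_c > 300  # rough ignition threshold
    consumed = 0.1 if burning else 0.0  # burns a fixed 0.1 kg per step (made-up rate)
    return {"burning": burning, "fuel_consumed_kg": consumed}

# These few lines predict everything the "hungry fire" frame predicts
# ("give it wood and it keeps going"), so the agentic frame buys little here.
print(fire_policy(fuel_kg=2.0, oxygen=True, temperature_c=600))
print(fire_policy(fuel_kg=0.0, oxygen=True, temperature_c=600))
```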

in reply to Ben Weinstein-Raun

From my background, it almost sounds like "actor" fits your description of "agent" better. "Agent" in my context has historically meant something like a daemon, except running in a user session context so that it can do I/O in said context ;)
in reply to Soccum Speleodontidae

yeah the usage here is from economics / game theory / AI, where it means basically "thing that's doing stuff to suit some preferences"