<aside> <img src="/icons/help-alternate_yellow.svg" alt="/icons/help-alternate_yellow.svg" width="40px" />
INCOMPLETE DRAFT by Dominik Lukeš. Send comments to [email protected]. Last edit on 15 Sept 2024
</aside>
<aside> <img src="/icons/shorts_gray.svg" alt="/icons/shorts_gray.svg" width="40px" />
Generated by Claude.ai
This text explores the historical pattern of attributing agency to artificial intelligence and computing systems, examining how people tend to extrapolate vague notions of independent agency to AI systems even when they understand the underlying technology is not truly autonomous or intelligent.
The current AI moment is unique because:
This is a kind of "what I did on my holiday" essay. While travelling, I had a thought to write about the current AI moment and its continuity with the past, but I went down a rabbit hole of AI history, rereading (or reading for the first time) some foundational texts. This text is about 15,000 words long, but much of it is quotes. If you are too busy to read the text, read the quotes; you'll learn more.
You can also pretty much skip the first two sections, which are more about how metaphors and language work. There's a history of AI contained in this text, but it's not systematic and spends much more time on some periods than others. Also, it is not perfectly linear, so I asked Claude to make a timeline of it and write a little appendix. But the middle section is the most history-like, if you're of a mind to read that.
There has always been a lot of vague concern about people anthropomorphizing AI. People worry, usually about other people, that anthropomorphizing AI will somehow make us think of computers as human. And not just today: this sort of critique seems to gain greater salience at different times.
Here's Edward Feigenbaum reflecting on this in a 1992 retrospective:
As happened periodically, both the field of AI and the term "Artificial Intelligence," were under attack, as were other anthropomorphisms (such as "learning") suggesting human-like mental activities. (Feigenbaum, 1992)
But anthropomorphism is mostly not a problem. People anthropomorphize everything: pets, toasters, houses, computers, rivers, and so on. Most of the time, the anthropomorphism is merely sentimental: giving objects names, talking to them, and sometimes pretending they talk back to us. This may develop pathological dimensions, but so can pretty much anything.
But the bigger confusion comes from assuming a vague independent agency and intentionality in the technology at some remove from us. That does not mean the agentivity is human-like, just that the involvement of humans at various steps is deemphasised. Often, the agent-like descriptions start out very vague, driven by the language we use, but once they are sufficiently distant from the nature of the technology, they start being filled out with other aspects we associate with agents such as planning, intentionality, and often emotion.
This post tries to outline the nature of this extrapolation and give various examples from the history of AI. It turned out to be much longer than I intended and much of it is quotes and historical context. So, even if the central argument is not convincing, it may provide a useful guide to the history of AI.
First, we cannot really talk about anything without agency seeping in. The minute we have a subject and a verb, we are always imputing some agentivity, because that's what the subject-verb construction is for. However, most of the time, we do not think of the things we use this construction for as explicitly agentic.
Just notice all the words we use about engines: they kick, sputter, die, come alive, function, drink, sing, make noise, rumble, etc. We say all of these things about engines and don't think twice about it, because we know everything about what engines do and don't do. We know we have to start them, give them fuel, push pedals, turn keys in the ignition, connect them to transmissions, etc.
By and large, the agentic language does not get extrapolated; the knowledge we have about engines is more salient and limits how far we take the agentivity. But when we have less knowledge, the extrapolation kicks in.
Agentic extrapolation is a kind of metaphoric framing. At the most basic level, any metaphoric framing is a kind of bridge across a gap in knowledge. We don't really know what goes on inside a cat's head, so we may say it is thinking about something.