It’s safe to assume AIs can at least read. Isn’t it?

What do you think Large Language Models do?

It’s easy to think LLMs think. Anthropomorphism is practically a force of nature. Human beings have evolved a “Theory of Mind” to help us interact more effectively with other conscious beings (there is probably a better term than “Theory of Mind”; after all, it’s more a cognitive faculty than a theory).

It’s a powerful instinct. And, like other instincts that evolved for a simpler life on the savannah, Theory of Mind tends to overdo things. It can lead us to intuit, falsely, that all sorts of things are alive (anyone remember the Pet Rock craze?). It seems Theory of Mind produces “psychological illusions” just as our pre-wired visual cortex produces optical illusions when we hit it with unnatural inputs. And so some people go so far as to believe that LLMs are sentient.

But most of us are probably wise to the life-like impression AIs give.

So, what do LLMs really do?

Surely it’s safe to presume that Large Language Models can at least read? I mean, their very name suggests they have some kind of grasp of language. Any fool can see they ingest text, interpret it and describe what it means. So that means they’re reading, right?

Well, no, AIs don’t even do that.

Check out this short explainer on Instagram by the wonderful @albertatech, about a howler made by all LLMs when asked “How many Rs are in the word strawberry?”.
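A common explanation for that howler is tokenization: an LLM never sees individual letters, only subword tokens mapped to numeric IDs. Here’s a minimal Python sketch of the idea; the token split and IDs below are hypothetical (real splits depend on the tokenizer), but they show why “count the Rs” is a strange question to put to a system that has never actually seen an R.

```python
word = "strawberry"

# With access to the characters, counting letters is trivial:
print(word.count("r"))  # -> 3

# An LLM, though, never receives characters. A tokenizer first
# chops the text into subword pieces and replaces each piece with
# a numeric ID. The split below is purely illustrative:
tokens = ["str", "aw", "berry"]   # hypothetical subword pieces
token_ids = [496, 672, 15717]     # hypothetical IDs for those pieces

# The model is asked "how many Rs?" about this sequence of IDs.
# The spelling of "strawberry" isn't in its input at all; any
# correct answer has to be inferred indirectly, which is why the
# question so often goes wrong.
print(token_ids)
```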

People’s mental models of AI are hugely important. The truth is that AIs lack anything even close to self-awareness. They cannot reflect on what they generate or why. They have no inner voice that applies common sense to filter right from wrong, much less a conscience to sort good from bad. This makes AIs truly alien creatures, despite their best impressions.

Their failure modes are not even random (with apologies to Wolfgang Pauli). Society has no institutional mechanisms to deal with AIs’ deeply weird failures, and yet we’re letting them drive on our public roads.

We casually talk about AIs “reading” and “writing”. We see them “seeing”; we interpret their outputs as “interpretations”.

These are all metaphors, and they’re wildly misleading.