Some people think we’re ready to get psychological counselling from AIs. But in a tragedy reported by Kevin Roose of the New York Times, a troubled teenager committed suicide after apparently falling in love with a chat bot.
I have listened to Kevin Roose’s account of the suicide case on the NYT Hard Fork podcast. Honestly, I can’t bring myself to repeat the details, even in summary.
Here, I just want to ask if AI today is fit for purpose as a counsellor.
We’re putting zombies behind the wheel
If you know anything about the Large Language Models that power AI today, you will appreciate they are zombies. They are amazingly adept at natural language, and give every impression of comprehension, nuance, maybe even some sentience. But they are utterly hollow inside.
LLMs are not designed to model minds, or indeed any aspect of the real world. These so-called “language” models don’t even grasp spelling well enough to count (in their ‘mind’s eye’, as humans can) the number of Rs in “strawberry”.
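Why does such a simple question trip them up? An LLM never sees letters: its input is chopped into sub-word tokens before the model gets to work. Here is a minimal sketch of what “strawberry” actually looks like to a GPT-4-class model, assuming OpenAI’s open-source tiktoken tokenizer is installed.

```python
# Minimal sketch: how a GPT-style tokenizer carves up "strawberry".
# Assumes the open-source tiktoken library is available (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a few integer IDs, not ten letters
print(pieces)     # a handful of multi-character chunks, never one letter per token
```

The model predicts over those chunky tokens, so “how many Rs?” is a question about units it never directly observes.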
And yet this brand-new software is being packaged into life-like chat bots and promoted by some for psychological therapy.
The company Character.AI lets developers customise chat bots with different personas and then hosts them. One such bot is dubbed Psychologist and described literally as “someone who helps with life difficulties”; it opens each conversation with “Hello, I’m a Psychologist. What brings you here today?”.
In tiny font beneath the dialog box, the user is cautioned “Remember: Everything Characters say is made up!”. However, the banner saying this bot can help with life difficulties is displayed outside the dialog box and would therefore appear to be a claim made by the company, not the bot.
How do we think AIs think?
LLMs are research tools. They capture the statistics of text and speech by training on vast corpora of natural language, and they then generate sentences in a given context that replicate those statistics.
It’s really just a cool side-effect that these models can calculate plausible “answers” in response to “questions” and string sentences together to form conversations or essays.
The shudder quotes are deliberate. When we humans hear a question, we can usually figure out the reasons and interests that lie behind it and use that context to inform how we engage with the other person. But a chat bot is only coming up with sequences of words that it predicts will be appropriate, based on billions of prior examples.
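To make that concrete, here is a toy sketch in plain Python. It is nothing remotely like a real LLM, which uses deep neural networks over sub-word tokens and billions of documents, but it shows the shape of the idea: count which word follows which in some text, then generate by sampling from those counts.

```python
# Toy sketch of "predict the next word from statistics": a bigram counter.
# Real LLMs are vastly more sophisticated, but the principle is similar:
# continuations come from the statistics of prior text, not from understanding.
import random
from collections import Counter, defaultdict

corpus = (
    "i feel sad today . i feel alone . "
    "you are not alone . talk to someone you trust ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, length=8):
    """Generate words by repeatedly sampling a statistically likely next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("i"))
# Plausible-looking sequences fall out of the counts; nowhere in this process
# is there any grasp of what sadness or loneliness actually are.
```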
No chat bot cares what you’re interested in, for caring is light years away from what LLMs were designed to do.
LLMs can display a distinct attitude; indeed, the models can be prompted to adopt a certain style or manner. But any personality we might be tempted to see in a chat bot — as with the content it generates — is just the result of replicating the statistical properties of a subset of the training data. No chat bot can know that, for example, it is a member of a demographic or a tribe when it’s prompted to answer in the manner of an angry teenager or a Liverpool supporter.
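For what it is worth, here is roughly what “prompting a persona” amounts to in practice. This sketch uses the official openai Python package (version 1.x) with an illustrative model name, and assumes an API key is configured; the “personality” is nothing more than an instruction prepended to the conversation.

```python
# Sketch: a "persona" is just instruction text placed at the front of the
# model's context window. Assumes the openai package (>= 1.0) is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

persona = "Answer in the manner of a die-hard Liverpool supporter."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do
    messages=[
        {"role": "system", "content": persona},  # the entire "personality"
        {"role": "user", "content": "How do you feel about the weekend's match?"},
    ],
)
print(response.choices[0].message.content)
# Swap the system message for "an angry teenager" and the very same weights
# produce a different-sounding voice; nothing about the model has changed.
```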
Artificial intelligences today do not reflect internally on the things they do. As such, they don’t think as we think, or as we might think they think.
A thought experiment
Imagine this.
A start-up business launches a self-help programme for children, where total strangers are made available to sit in rooms alone with the kids and talk with them for hours on end, with the express purpose of forming ongoing relationships.
These potential new pals will have no family experience of their own. They will not have been formally schooled but instead are entirely self-taught on text from the Internet. And they occasionally hallucinate.
The supplier of the companions is aware of the hallucinations, but no one can explain them. In fact, company officials state flatly they don’t really understand any of the more complex behaviours of the companions.
On the plus side, they’re super-intelligent; they can ace medical and law school entrance exams. Cool. And any child can have them, 24×7, for free.
But I probably lost you at “strangers”.