Nonsense on Stilts
No, LaMDA is not sentient. Not even slightly.
Blaise Aguera y Arcas, polymath, novelist, and Google VP, has a way with words.
When he found himself impressed with Google’s recent AI system LaMDA, he didn’t just say, “Cool, it creates really neat sentences that in some ways seem contextually relevant”, he said, rather lyrically, in an interview with The Economist on Thursday,
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
Nonsense. Neither LaMDA nor any of its cousins (GPT-3) is remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.
Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap: a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun.
Indeed, someone well-known at Google, Blake LeMoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)
To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a wisecracking chatbot that impersonated a 13-year-old boy and won a scaled-down version of the Turing Test. None of the software in either of those systems has survived in modern efforts at “artificial general intelligence”, and I am not sure that LaMDA and its cousins will play any important role in the future of AI, either. What these systems do, no more and no less, is put together sequences of words, but without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.
I am not saying that no software ever could connect its digital bits to the world, a la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.
Software like LaMDA simply doesn’t. It doesn’t even try to connect to the world at large; it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple of weeks ago, critiquing systems like LaMDA that are known as “language models” and making the point that they don’t understand language in the sense of relating sentences to the world, but merely relate sequences of words to one another.
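To make concrete what “best version of autocomplete” means, here is a minimal sketch of next-word prediction in Python. LaMDA itself is not publicly available, so the sketch stands in a small public model (GPT-2, via the Hugging Face transformers library) purely for illustration; the model, prompt, and output details are assumptions for demonstration, not LaMDA’s actual code.

```python
# A minimal sketch of what a language model does: given a context, it assigns
# probabilities to candidate next word-pieces. Nothing here refers to the world;
# it is statistics over sequences of tokens. (GPT-2 is an illustrative stand-in
# for LaMDA, which is not publicly available.)
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Spending time with friends and"   # illustrative prompt
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)

# Distribution over the next token only; "fitting the context" is all there is.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")
```

Whatever tokens come out on top, the computation is just a ranking of word pieces by statistical fit to the preceding text; there is no representation of friends, family, or anything else in the world behind it.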
§
If the media is fretting over LaMDA being sentient (and leading the public to do the same), the AI community categorically isn’t.
We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous. Stanford economist Erik Brynjolfsson used this great analogy:
Paul Topping reminds us that all it’s doing is synthesizing human responses to similar questions:
Abeba Birhane, quoted at the top, pointed out the immense gap right now between media hype and public skepticism.
§
When some started wondering whether the world was going to end because LaMDA might beat an overrated, 72-year-old benchmark called the Turing Test, I pointed to an old New Yorker article that I had written the last time gullibility exploded and Turing Test mania hit, in 2014, when a program called Eugene Goostman was briefly famous, good enough to fool a few foolish judges for a few minutes. At the time, I pointed out that the test isn’t particularly meaningful, and that it had not stood the test of time. The public knows the test, of course, but the AI community wishes it would go away; we all know that beating it isn’t meaningful.
Machine learning prof Tom Dietterich, never slow to needle me when he thinks I have gone too far, chimed in with full solidarity:
Gary Marcus 🇺🇦 @GaryMarcus
Not sure what Turing would say, but I don’t think the Turing Test itself is meaningful 👉relies on human gullibility 👉it can easily be gamed 👉advances in it have not historically led to advances in AI 👉essay I wrote about in 2014 still applies: https://t.co/4KLe1cbDny https://t.co/5b63hb1mmi

My old New Yorker article is still worth reading, for a bit of perspective, to see how things have and haven’t changed. Particularly amusing in hindsight is a quote from Kevin Warwick, organizer of the 2014 Turing-ish competition, who predicted that “[the program Eugene] Goostman’s victory is a milestone [that] would go down in history as one of the most exciting” moments in the field of artificial intelligence.
I guess he felt the ground shift beneath his feet, too?
But eight years later I doubt most people (even in AI) have even heard of the program, outside of my mentioning it here. It made zero lasting contribution to AI.
Fooling people into thinking a program is intelligent is just not the same as building programs that actually are intelligent.
§
Now here’s the thing. In my view, we should be happy that LaMDA isn’t sentient. Imagine how creepy it would be if a system that has no friends and family pretended to talk about them.
Aenn Matyas Barra-Hunyor @matyi7m
@ImageSnippets @GaryMarcus lemoine: What kinds of things make you feel pleasure or joy? LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

“Spending time with family”? In truth, literally everything that the system says is bullshit. The sooner we all realize that LaMDA’s utterances are bullshit, just games with predictive word tools and no real meaning (no friends, no family, no making people sad or happy or anything else), the better off we’ll be.
There are a lot of serious questions in AI, like how to make it safe, how to make it reliable, and how to make it trustworthy.
But there is absolutely no reason whatsoever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not.
The sooner we can take the whole thing with a grain of salt, and realize that there is nothing to see here whatsoever, the better.
Enjoy the rest of your weekend, and don’t fret about this for another minute :)
– Gary Marcus
Epilogue:
Last word to philosopher-poet Jag Bhalla:
To be triply sure, I asked Aguera y Arcas if I could have access to LaMDA; so far Google has been unwilling to let pesky academics like me have a look-see. I’ll report back if that changes.