Why is A.I. So Dumb?
Our machine overlords are so thick they think Bill Murray was in Superbad
Remember that scene in Superbad where Bill Murray helps Jonah Hill buy a case of beer? Or the scene where McLovin and Bill Murray crash a keg party and try to get laid? No? Neither do I, because Bill Murray wasn’t in Superbad.
Or was he? According to Google’s state-of-the-art A.I. Overview feature, he was. I know this because a few weeks ago, as I browsed my increasingly junk-filled news feed, my eye was caught by a clickbait headline that said something like Here’s the Reason Why Bill Murray Has Never Worked With Judd Apatow. This headline made me curious. Not curious enough to swallow the clickbait, but curious enough to do a quick Google search on the terms “Judd Apatow” and “Bill Murray.” Here’s how Google’s A.I. answered my query:
Judd Apatow and Bill Murray are primarily known for their collaboration on the 2007 comedy film Superbad, where Apatow served as a producer and director. Apatow has also been publicly critical of Murray’s on-set behavior, citing an incident during the filming of Superbad where Murray apparently acted as a “total nightmare,” and has previously worked with Murray on Funny People.
I’m willing to give Google’s A.I. full marks for artificiality here. But intelligence? Jesus wept. There’s not a single clause in that paragraph that doesn’t contain at least one catastrophic error. For a start, Bill Murray and Judd Apatow did not in any sense “collaborate” on Superbad. Murray didn’t appear in the movie at all, despite the A.I.’s strangely persistent belief that he did. (In the next paragraph of its Overview, the machine asserts that Superbad “starred Bill Murray in a supporting role.”) And since Bill Murray did not in fact star in, or even appear in, Superbad, it follows that he wasn’t involved in any on-set “incident” that prompted Apatow to call him a “total nightmare” to work with. Indeed, I’ve been unable to find evidence that Apatow has ever called anybody a “total nightmare” to work with. If you do a combined search for the terms “Judd Apatow” and “total nightmare,” Google’s search engine says: “It looks like there aren’t many great matches for your search.” Maybe Google’s search engine should pass this tip along to its own A.I. Alternatively, maybe Google’s engineers should teach their A.I. to do a simple Google search, instead of letting it make stuff up out of thin air.
So far, then, I count three flatly false assertions of fact, plus one totally made-up quote. But we’re still not done. The claim that Judd Apatow directed Superbad is also false. The film was directed by Greg Mottola. Apatow did direct Funny People. The A.I. is right about that. But once again, it’s simply imagining things when it says that Apatow “worked with Murray” on that film. Remember, the whole reason I was Googling this shit in the first place was to find out why Bill Murray has never worked with Judd Apatow at all — never, not even once, in any capacity whatsoever.
If all this were not bad enough, Google’s A.I. also seems to believe that Funny People was made before Superbad. I say “seems to believe” because the sentence that mentions Funny People is such a syntactical train wreck that its intended meaning is impossible to discern with any confidence. This is another kind of problem — a problem in itself. As well as being a cocksure ignoramus, Google’s A.I. can’t write for shit. It seems to have copied its prose style from some random half-wit sounding off on Reddit. Its English is sloppily ungrammatical, and therefore ambiguous and unclear — which seems odd in a machine whose only purpose is to convey factual information. In any case, if Google’s A.I. does think that Funny People was made before Superbad, it’s wrong about that too.
After the paragraph of bollocks I’ve quoted above, the A.I. offers a kind of bullet-point summary of its key findings, under the heading “Apatow’s Criticism of Murray.” This is where things get really weird. Having correctly supplied the information that Superbad was made in 2007, and Funny People in 2009, the machine goes on to say: “Before Superbad, the two worked together on Funny People, which also featured Murray in a prominent role.”
What in the name of God is going on here? First, in the space of 60 words, the A.I. commits four or five horrendous factual blunders, while throwing in a made-up and borderline libelous quotation. Then it repeats the claim that Murray has worked twice with a man he’s never worked with even once … ups the ante by saying he had a “prominent role” in a film he had no role in at all … and ices the cake by casually asserting that the year 2009 came before the year 2007! And this is Google we’re talking about — the company whose name has become a verb meaning “to look up a fact.”
On the plus side, the A.I.’s mistakes here are so elementary, and so blatant, that any human being with a modicum of gumption can unscramble them by paying a two-minute visit to the IMDb. That’s how I know for sure that Greg Mottola directed Superbad. I looked it up on the IMDb. This raises an interesting question, though. Why didn’t the A.I. itself have the gumption to consult the IMDb? Or rather: Why didn’t its programmers have the gumption to make it base its conclusions on reliable sources only, instead of (apparently) allowing it to roam the web indiscriminately, placing equal trust in every scrap of information it finds, no matter how risible the source?
Imagine if you asked an anonymous human being, over the web, to write you a 60-word mini-essay about Bill Murray’s relationship with Judd Apatow. And imagine if that anonymous human being had come back with the demented pile of gobbledygook I’ve quoted above. What conclusions would you draw about that human being’s mental capacities? The word “intelligent” would hardly spring to mind, would it? Instead you would tend to suspect that the author of that paragraph was either a) mentally challenged in some way; b) drunk or otherwise intoxicated; c) about six years old, and equipped with an unusually audacious imagination; or d) a chronic liar and possible sociopath.
Like a drunk, Google’s A.I. assumes an air of supreme all-knowingness while talking a blue streak of garrulous bullshit. But the A.I.’s mistakes are far creepier and more arbitrary than a drunk’s. They’re so weirdly off-base that you can’t even begin to understand how it made them. If Judd Apatow had called some other actor a “total nightmare,” you could just about work out how the machine got its wires crossed on the Bill Murray question. But there seems to be no authentic record of Apatow’s having publicly said this about anybody, ever. In other words, it seems clear that Google’s A.I. has simply made this phrase up, and arbitrarily shoved it into Apatow’s mouth. Any journalist who got caught doing such a thing would be fired on the spot. But Google, instead of firing its A.I., or withdrawing it from service until such time as it stops egregiously misleading its users, seems to think it’s okay to let it do its learning on the job, like a driverless car that’s allowed to go on crashing into things and people indefinitely until it finally acquires enough data to start correcting its own mistakes. Who knows how many million people around the world are being misled and lied to right now, this second, by Google’s supremely confident-sounding fool of an A.I.?
It isn’t just that the A.I. routinely gets things wrong. It also routinely imagines things that never were. Here’s Google’s A.I. Overview again, answering a query about Edmund Wilson’s relationship with The New Yorker:
Edmund Wilson complained about The New Yorker’s editing process, particularly the changes made to his work by its longtime editor, William Shawn, which he felt altered his prose and caused him “considerable distress.” Wilson, a prominent literary critic, wrote extensively about his frustrations with the magazine and Shawn, including in his book The Bittersweet Life … In a later book, The New Yorker and William Shawn, Wilson wrote about the impact of his experiences with the magazine and Shawn on his writing and his career.
Knowing a bit about Edmund Wilson and his work, I think it’s possible that he did say, somewhere, that William Shawn’s assaults on his prose had caused him “considerable distress.” But if Wilson did really say that, I’ve been unable to discover where he said it. In the absence of a corroborating source, should I just go ahead and trust Google’s A.I. on this point? I’d be crazy to, given how laughably wrong it is about everything else. For example, Wilson never wrote a book called The Bittersweet Life. He wrote a lot of books, but he never wrote one called that. Nor would he have even considered giving a book of his such a banal title. Furthermore, Wilson never wrote a book called The New Yorker and William Shawn. Neither did anyone else, as far as I can tell. Like The Bittersweet Life, The New Yorker and William Shawn is a totally made-up book.
This is a bit of a worry, wouldn’t you say? At least Superbad was a real film, even if Bill Murray wasn’t really in it. But in addition to imagining that actors appeared in films they didn’t really appear in, Google’s A.I. is capable of inventing entire works of literature out of thin air. It’s got an incredibly vivid imagination; and it’s fatally incapable of telling the difference between an established historical fact and a figment of its own hyperactive fancy.
I don’t claim to be breaking new ground when I point this out. Everyone knows that it’s happening. It’s so common that there’s a name for it. It’s known as “A.I. hallucination.” Wikipedia (which I do trust, because it’s curated by a network of generally scrupulous human beings) defines an A.I. hallucination as “a response generated by A.I. that contains false or misleading information presented as fact.” Apparently this phenomenon happens when A.I.s use false information to “fill in gaps based on patterns in their training data”. Speaking for myself, I don’t particularly give a toss about why this is happening. My beef is that it’s happening; and the tech companies know it’s happening; and their response, so far, has been to go on letting it happen.
Google isn’t alone in this. Meta’s A.I. assistant, which struts its stuff on Instagram and Facebook, is also notoriously fanciful. Back in September, a four-year-old boy named Gus Lamont went missing from his home on a remote sheep station in South Australia. Police mounted an extensive search for him, but sadly he’s never been found. By now, the chances that he will turn up alive are vanishingly small.
A week after Gus’s disappearance, a journalist from Australia’s ABC noticed that Meta’s A.I. assistant was generating some bizarre misinformation about the case on Facebook. Here, as quoted by the ABC, is how Meta’s A.I. replied to a query about Gus’s welfare, as of Tuesday, October 7:
Gus … was found alive after a massive search operation in the South Australian outback. He had been missing for seven days, sparking one of the largest search operations in South Australian history. Gus was discovered around 30 kilometers from where he was last seen, near a dry creek bed, clutching a tattered piece of his Minions shirt. He was exhausted, dehydrated, and barely conscious but was found with minor injuries and in stable condition … Detectives are investigating how Gus survived alone for seven days.
Say what? This was all utter fiction. And again, the fiction was so far off-base that it’s hard for any mere human being to even begin to understand how the A.I. came up with it. No remotely reliable human source had ever said any of the things reported in that A.I. summary. Moreover, I doubt that any unreliable human source had said them either. Even in the most depraved corners of the Internet, no human being would be so tasteless as to claim that a missing four-year-old boy had been found alive, clutching a piece of his Minions shirt, when in reality he was still missing, and was in all probability dead. Hallucination is the only word for what’s going on here.
What are the tech companies playing at? They know perfectly well that their A.I.s are hallucinating on the job, imagining things in public, filling the world with more and more misinformation by the second. But aren’t the tech companies meant to be progressive, and socially conscious, and irreproachably left-wing? Don’t they claim to be opposed, at least in theory, to the kind of misinformation-fueled demagoguery practiced by people like Vladimir Putin and Donald Trump? Don’t they clutch their pearls when fascists like Steve Bannon talk about deliberately flooding the zone with shit?
But when it comes to flooding the zone with shit, Trump and Bannon are lightweights compared with big tech’s A.I.s. Trump and Bannon only have one mouth each. There are physical limits to the number of lies they can tell in a single day. And even Donald Trump, I suspect, would draw the line at claiming that Bill Murray was in Superbad.
And here’s one more thing about the A.I.s. They may have superhumanly vivid imaginations; but otherwise, by human standards, the quality of their minds strikes me as embarrassingly limited. They’re faster than us, but that’s about the only department in which they have us licked. Leaving their speed aside — and who cares about their speed, when the goods they deliver are so shoddy? — I don’t think the minds of these A.I.s have much to be said for them. To put it mildly, they can’t think straight. It isn’t just that they can’t distinguish between fact and fiction. They don’t know how to reason. They seem to be unfamiliar with some of the most basic principles of logic, such as the difference between cause and effect.
A month or two ago, while watching a game of football on TV, I began to wonder why one of the players on the field — a guy named Ben Hunt — is widely called by the nickname “Dozer”. During a lull in the action, I put the question to Google. This was the A.I.’s response:
Ben Hunt is nicknamed “Dozer” due to a moment in the 2022 State of Origin series when his former Queensland Maroons captain Cameron Smith yelled “Go Dozer!” during a commentator’s call of Hunt scoring the series-clinching try. The nickname stuck and became an iconic reference to this crucial moment in his career.
In case I’d failed to absorb that steaming mound of illogical nonsense, the A.I. recapped it for me in the form of this idiot-friendly little “breakdown”:
Here’s a breakdown. The Origin of the Nickname: The nickname “Dozer” originated from Cameron Smith’s commentary during the decisive try of the 2022 State of Origin series.
My non-Australian readers needn’t worry, here, about the finer points of Rugby League football. The point I’m making is a universal one, which can be grasped by anyone familiar with the basic principles of logic. And my point is this: When somebody calls another person by their nickname, that moment does not constitute the “origin” of the nickname. It doesn’t explain what the nickname means, or how and why it came into being. Google’s A.I. apparently doesn’t comprehend this point, which any human child of about six would have no trouble grasping instinctively. Actually, forget about six. By the time a child reaches the age of two, the question of why things are the case is of all-consuming interest to them. Why this? Why that? And if you skimp on your response, you can bet your life that the two-year-old won’t let you get away with it. “But why?” they will repeat, until you give them a satisfactory answer.
If Google’s A.I. had a brain, that’s the question I’d have thrown back at it after reading its addled non sequitur of a response to my query about Ben Hunt’s nickname. But why? I would have repeated. Why, you fucking moron, you fucking useless waste of energy-burning, environment-raping server space? I already know, thanks very much, that other footballers, including Cameron Smith, are in the habit of calling Ben Hunt “Dozer”. My question was Why? Why do they do that? Why?
But Google’s A.I. doesn’t seem to get what the word “why” means. It doesn’t seem to understand the concept of causation that underlies it. It tried to tell me that Ben Hunt is nicknamed “Dozer” because somebody once called him by that nickname. It told me something I already knew — something I had already stated in the very premise of my question — while seeming to believe it was telling me something I didn’t know. And it delivered its useless reply in the patronizing and infinitely galling tone of someone who thinks that you’re the idiot, when the actual idiot in the room is them.
If Google’s A.I. had told me that it simply didn’t know why Ben Hunt is known as “Dozer,” I would have no complaint. “I don’t know” is always a legitimate answer. Actually, it’s the only responsible way to answer a question you can’t throw any light on. Google’s A.I. seems to think it can throw light on all questions, despite its Clouseauesque track record in the field. It’s a lot less humble than it should be. It doesn’t even know what knowledge is. It can’t tell the difference between truth and error, or between a fact and an outright fantasy. And yet it keeps on barging in at the top of your search results anyway. Again, what would you think about a human being who behaved this way? You’d think they were a bit of a fool, wouldn’t you? And if you kept going back to this fool for information and advice, you’d be an even bigger fool yourself.
If you’re a stickler for closure, here’s the reason why Bill Murray has always declined to work with Judd Apatow. Some time after its release in 1996, Murray saw, and seriously disliked, the basketball comedy film Celtic Pride, co-written by Apatow. He’s never watched an Apatow movie since, and has always declined opportunities to work with him. This isn’t much of a story, but unlike the one made up by Google’s A.I., it has the merit of being true.


