Imagine this: you’ve carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem — the cheese falls right off. Frustrated, you turn to Google for a solution.
“Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”
So, yeah, don’t do that. As of this writing, though, that’s what Google’s new AI Overviews feature will tell you to do. The feature, which doesn’t trigger for every query, scans the web and drums up an AI-generated response. The pizza glue answer appears to be based on a comment from a user named “fucksmith” in a Reddit thread from more than a decade ago, and they’re clearly joking.
This is just one of many mistakes cropping up in the new feature that Google rolled out broadly this month. It also claims that former US President James Madison graduated from the University of Wisconsin not once but 21 times, that a dog has played in the NBA, NFL, and NHL, and that Batman is a cop.
Google spokesperson Meghann Farnsworth said the mistakes came from “generally very uncommon queries, and aren’t representative of most people’s experiences.” The company has taken action against violations of its policies, she said, and is using these “isolated examples” to continue refining the product.
Look, Google didn’t promise this would be perfect, and it even slaps a “Generative AI is experimental” label at the bottom of the AI answers. But it’s clear these tools aren’t ready to accurately provide information at scale.
Take the feature’s big launch at Google I/O, for instance. The demo was highly controlled, and yet it delivered a questionable answer about how to fix a jammed film camera. (It suggested the user “open the back door and gently remove the film”; don’t do that unless you want to ruin your photos!)
It’s not just Google; companies like OpenAI, Meta, and Perplexity have all grappled with AI hallucinations and mistakes. However, Google is the first to deploy this technology on such a large scale, and the examples of flubs just keep rolling in.
Companies developing artificial intelligence are often quick to dodge accountability for their systems, taking an approach much like a parent excusing an unruly child: boys will be boys! These companies claim they can’t predict what their AI will spit out, so really, it’s out of their control.
But for users, that’s a problem. Last year, Google said that AI was the future of search. What’s the point, though, if the search seems dumber than before?
AI optimists argue that we should embrace the hype because of the rapid progress made so far, trusting that it will continue to improve. I really do believe that this technology will continue to get better, but focusing on an idealized future where these technologies are flawless ignores the significant issues they currently face — and allows companies to continue delivering subpar products.
For now, in the pursuit of incorporating AI into everything, our search experience is marred by decade-old Reddit posts. Many idealists believe we are on the brink of something great and that these issues are simply the growing pains of a nascent technology. I sure hope they’re right. But one thing is certain: someone will put glue on their pizza soon, because that’s the nature of the internet.
Update, May 23rd: Added statement from a Google spokesperson.