Dr. Geoffrey Hinton, a scientist known as one of the “godfathers” of artificial intelligence, has quit his job at Google in order to speak freely about the technology’s risks. In an interview with the New York Times, Hinton spoke about his deep misgivings regarding the technology. “It’s hard to see how you can prevent the bad actors from using it for bad things,” he said. “Look at how [AI technology was] five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
Dr. Hinton has been involved in the world of AI since the ’70s, when, as a graduate student at the University of Edinburgh, he was one of the earliest adopters of the “neural network,” a mathematical system that learns skills by analyzing data. In 2012, as a professor in Toronto, Hinton and two of his students — Alex Krizhevsky and Ilya Sutskever, the latter of whom is currently ChatGPT creator OpenAI‘s chief scientist — created a neural network that could teach itself to identify objects like cats, dogs and flowers after being digitally fed thousands of photos. Later that year, Google acquired the company that Dr. Hinton and his two students had founded for $44 million USD. Then, in 2018, Dr. Hinton and two other scientists were awarded the Turing Award, known as the “Nobel Prize of Computing,” for their neural network advancements. These breakthroughs eventually led to the creation of AI systems like ChatGPT and Google Bard.
However, the rapid advances in the technology began to make Dr. Hinton uneasy. He told the Times that he saw Google as a “proper steward” of AI until 2022, when the search giant’s core business was threatened by Microsoft’s new Bing search engine, which uses OpenAI’s technology, and the company began a “code red” response to meet the challenge. He did, however, dispute parts of the Times‘ report in a tweet this morning, stating Google “acted very responsibly” and that he left so he could “talk about the dangers of AI without considering how [it] impacts Google,” not so he could criticize the company. Google’s chief scientist Jeff Dean issued a statement of his own: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
Dr. Hinton worries that, in the short term, the propagation of false photos, videos and text will prevent the average internet user from discerning “what is true anymore,” and that, in the longer term, AI’s automation of simple tasks will upend the job market. “It takes away the drudge work … it might take away more than that,” he said. Down the road, Dr. Hinton could even see a sci-fi story come to life, in which AI systems not only generate code but run that code on their own, making them autonomous and leading to the elimination of humanity as a whole. “The idea that this stuff could actually get smarter than people — a few people believed that,” he said to the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
For much less disconcerting tech news, check out MSCHF’s new “Hot Chat 3000” online chatroom, which, yes, was built with OpenAI’s technology.