Nobel Prize-winning physicist John Hopfield, renowned for his pioneering work in artificial intelligence, has expressed deep concerns over the rapid advancement of AI technology. Speaking via video link from Britain to a gathering at Princeton University, the 91-year-old emphasized the potential dangers of unchecked AI development.
“One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions,” Hopfield stated. Drawing parallels between AI and other powerful technologies such as biological engineering and nuclear physics, he highlighted the risks of not fully understanding these systems. “I’m very unnerved by something which has no control, something which I don’t understand well enough so that I can understand what are the limits which one could drive that technology.”
Hopfield, whose theoretical model, known as the “Hopfield network,” demonstrated how artificial neural networks can store and recall patterns in a way that mimics human associative memory, was honored with the 2024 Nobel Prize in Physics. His work laid the foundation for modern AI applications and has significantly influenced the field.
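For readers unfamiliar with the model, the following is a minimal illustrative sketch of a classical Hopfield network in Python, not drawn from the article itself: Hebbian learning writes a ±1 pattern into a symmetric weight matrix, and asynchronous sign updates recover the stored pattern from a corrupted copy. The function names (`train_hopfield`, `recall`) are hypothetical and chosen only for this example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of the stored +/-1 patterns, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Asynchronous updates: each unit takes the sign of its weighted input."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one simple pattern and recover it after flipping two bits.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1
print(recall(W, noisy))             # converges back to the stored pattern
```

The recall dynamics only decrease (or leave unchanged) an energy function of the network state, which is why corrupted inputs settle into the nearest stored memory, the property the Nobel citation highlights.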
The physicist joined fellow Nobel laureate Geoffrey Hinton in advocating for a deeper understanding of deep-learning systems to prevent potential catastrophes. Hinton, often referred to as the “Godfather of AI,” has also voiced concerns about AI systems surpassing human intelligence and the lack of control that might ensue. “If you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it’s going to take over control,” Hinton remarked.
Hopfield echoed these sentiments, emphasizing the necessity for transparency and regulation in AI development. “That’s the question AI is pushing,” he said, stressing that despite the marvels of modern AI systems, the lack of understanding about their inner workings is “very, very unnerving.”
He cautioned against unforeseen consequences, referencing the fictional example of “ice-nine” from Kurt Vonnegut’s novel “Cat’s Cradle,” which leads to catastrophic results despite its intended beneficial use. “I’m worried about anything that says… ‘I’m faster than you are, I’m bigger than you are… can you peacefully inhabit with me?’ I don’t know, I worry,” Hopfield said.
Both scientists advocate for increased research into AI safety and for governments to support these efforts. “I’m advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that,” Hinton added.
Their warnings come amid the meteoric rise of AI capabilities and a fierce race among companies to develop more advanced systems. The technology has faced criticism for evolving faster than scientists can fully comprehend, raising fears about its potential impact on society.
Reference(s):
“Nobel-winning physicist 'unnerved' by AI technology he helped create,” cgtn.com