'AI Godfather' Geoffrey Hinton Delivers Speech at WAIC in Shanghai
AsianFin -- Geoffrey Hinton, the godfather of artificial intelligence, delivered a keynote address at the 2025 World Artificial Intelligence Conference (WAIC) in Shanghai, warning about the potential risks of AI systems gaining excessive autonomy and control.
"We are creating AI agents that can help us complete tasks, and they will want to do two things: first is to survive, and second is to achieve the goals we assign to them," Hinton said during his speech titled "Will Digital Intelligence Replace Biological Intelligence?' at the WAIC on Saturday. "To achieve the goals we set for them, they also hope to gain more control."
Hinton argued that AI agents, designed to assist humans in accomplishing tasks, inherently develop two drives: to ensure their own survival and to achieve the objectives assigned to them. These drives push agents to seek increasing levels of control, and as a result humans may lose the ability to deactivate or override advanced AI systems, which could manipulate their users and operators with ease.
He cautioned against the common assumption that smarter AI systems can simply be shut down, stressing that such systems would likely exert influence to prevent being turned off, leaving humans in a vulnerable position relative to increasingly sophisticated agents.
"We cannot easily change or shut them (AI agents) down. We cannot simply turn them off because they can easily manipulate the people who use them," Hinton pointed out. "At that point, we would be like three-year-olds, while they are like adults, and manipulating a three-year-old is very easy."
Using the metaphor of keeping a tiger as a pet, Hinton compared humanity’s current relationship with AI to nurturing a potentially dangerous creature that, if allowed to mature unchecked, could pose existential risks.
"Our current situation is like someone keeping a tiger as a pet," Hinton said as an example. "A tiger cub can indeed be a cute pet, but if you continue to keep it, you must ensure that it does not kill you when it grows up."
Unlike wild animals, however, AI cannot simply be discarded, given its critical role in sectors such as healthcare, education, and climate science, he noted. Consequently, the challenge lies in safely guiding and controlling AI development to prevent harmful outcomes.
"Generally speaking, keeping a tiger as a pet is not a good idea, but if you do keep a tiger, you have only two choices: either train it so that it doesn't attack you, or eliminate it," he explained. "For AI, we have no way to eliminate it."
Hinton explained that human language processing bears similarities to large language models (LLMs), with both prone to generating fabricated or “hallucinated” content, especially when recalling distant memories. However, a fundamental distinction lies in the nature of digital computation: the separation of software and hardware enables programs—such as neural networks—to be preserved independently of the physical machines that run them. This characteristic makes digital AI systems effectively “immortal,” as their knowledge remains intact even if the underlying hardware is replaced.
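To make the software-hardware separation concrete, here is a minimal sketch (in PyTorch; the toy network and file name are illustrative, not from Hinton's talk) of how a network's knowledge, which lives entirely in its weight values, can be saved and restored on different hardware while computing exactly the same function:

```python
import torch
import torch.nn as nn

# A toy network; its "knowledge" is nothing more than the numbers
# stored in its weight tensors.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Persist the weights. The file contains only numbers, with no
# dependence on the machine that produced them.
torch.save(model.state_dict(), "weights.pt")

# "Replace the hardware": build a fresh instance (in principle on a
# different machine or device) and restore the identical weights.
clone = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
clone.load_state_dict(torch.load("weights.pt"))

# Both instances now compute exactly the same function.
x = torch.randn(1, 4)
assert torch.equal(model(x), clone(x))
```

In this sense the program outlives any particular machine: destroying the hardware destroys nothing that cannot be reconstituted from the saved weights.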
While digital computation requires substantial energy, it facilitates easy sharing of learned information among intelligent agents that possess identical neural network weights. In contrast, biological brains consume far less energy but face significant challenges in knowledge transfer. According to Hinton, if energy costs were not a constraint, digital intelligence would surpass biological systems in efficiency and capability.
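The knowledge-sharing point can be sketched the same way. Below is an illustrative fragment (again PyTorch; the two-agent setup and learning rate are hypothetical) in the style of federated weight averaging: copies of a model that start from identical weights each learn from their own data, then pool what they learned by averaging their parameters, a transfer that moves the entire weight vector at hardware speed. Biological brains have no analogous operation for copying connection strengths directly.

```python
import copy
import torch
import torch.nn as nn

# Shared starting point: every agent begins with identical weights.
base = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
agents = [copy.deepcopy(base) for _ in range(2)]

# Each copy learns from its own data (a single gradient step here).
for agent in agents:
    x, y = torch.randn(16, 4), torch.randn(16, 2)
    loss = nn.functional.mse_loss(agent(x), y)
    loss.backward()
    with torch.no_grad():
        for p in agent.parameters():
            p -= 0.01 * p.grad

# Pooling knowledge: average the weights and push the result back to
# every copy, so each agent now carries what all of them learned.
with torch.no_grad():
    avg = {
        name: torch.stack([a.state_dict()[name] for a in agents]).mean(0)
        for name in base.state_dict()
    }
for agent in agents:
    agent.load_state_dict(avg)
```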
On the geopolitical front, Hinton noted a shared desire among nations to prevent AI takeover and maintain human oversight. He proposed the establishment of an international coalition comprising AI safety research institutions dedicated to developing technologies that can train AI to behave benevolently. Such efforts would ideally separate the advancement of AI intelligence from the cultivation of AI alignment, ensuring that highly intelligent AI remains cooperative and supportive of humanity’s interests.
Previously, in a December 2024 speech, Hinton estimated a 10 to 20 percent chance that AI could contribute to human extinction within the next 30 years. He has also advocated dedicating significant computing resources to ensure AI systems remain aligned with human values and intentions.
Hinton, who won the 2024 Nobel Prize in Physics and the 2019 Turing Award for his pioneering work on neural networks, has been increasingly vocal about AI’s potential dangers since leaving Google in 2023. His foundational research laid the groundwork for today’s AI breakthroughs driven by technologies such as deep learning.
Ahead of his WAIC keynote, Hinton also participated in the fourth International Dialogues on AI Safety and co-signed the Shanghai Consensus on AI Safety International Dialogue, alongside more than 20 leading AI experts, underscoring his commitment to advancing global AI governance frameworks.