# AI Pioneer Geoffrey Hinton Warns of Impending Dangers, Urges Big Tech to Prioritize Safety Research
(Seoul = Yonhap News) — Geoffrey Hinton, a leading figure in artificial intelligence (AI) often dubbed the "Godfather of AI," has issued new warnings about the potential dangers posed by the swift progression of AI technologies. Hinton, who was awarded the Nobel Prize in Physics last year for his pioneering contributions to AI, likened current AI systems to "baby tigers"—innocent in appearance but potentially hazardous as they evolve.
During an interview with CBS News aired on the 26th (local time), Hinton emphasized the existential threats posed by rapidly advancing AI technologies. Recognized as a prominent "doomer" (a term for scholars warning of AI-related catastrophes), Hinton was instrumental in developing the neural-network techniques that underpin large language models (LLMs), the backbone of modern AI systems.
Beyond his foundational work in machine learning, Hinton served as a vice president at Google until his departure in 2023. Since then, he has been vocal about the potential for AI to surpass human capabilities and the threats that could ensue.
In the interview, Hinton used a vivid metaphor to describe humanity's current relationship with AI: “The best emotional analogy for understanding this is to think of ourselves as people raising a very adorable baby tiger. If you can't be absolutely certain that this baby won't grow up to kill you, then you ought to be concerned.”
He further warned of a 10% to 20% chance that AI could eventually seize control from humanity. “People don’t fully understand this yet. They fail to grasp what’s coming,” Hinton stated ominously.
# Big Tech Criticized for Prioritizing Profits Over Safety
Addressing major tech companies—such as Google, Elon Musk’s xAI, and Sam Altman’s OpenAI—Hinton pointed out that these firms are aware of the risks but are prioritizing profits over safety. He criticized their lobbying efforts for more lenient AI regulations and expressed particular disappointment with Google's changed stance on AI’s military applications.
Hinton argued that these companies need to greatly increase their investment in AI safety research. He suggested dedicating roughly one-third of their computing resources to safety-related studies, underscoring the need to balance innovation with precaution.
As AI continues its rapid development, Hinton’s warnings highlight growing concerns within the academic community and beyond regarding the ethical and existential implications of these technological advancements. For Hinton, the future necessitates comprehensive measures to ensure humanity maintains control over its creations.
Original article: https://www.blockmedia.co.kr/archives/897523