At this year's Web Summit in Lisbon, MIT physicist Max Tegmark issued a critical warning about artificial intelligence development during an interview with RAZOR. As 2025 draws to a close, he emphasized that tech companies are accelerating efforts to create systems surpassing human intelligence without implementing essential safety protocols required in other high-risk industries.
Tegmark highlighted the distinction between today's task-specific AI tools and the emerging push for Artificial General Intelligence (AGI): systems capable of autonomous learning and decision-making. His deepest concern is the development of superintelligent machines that combine advanced cognition, versatile capabilities, and physical autonomy, which could become impossible for humans to control.
Drawing parallels to safety standards in aviation and medicine, Tegmark noted that AI currently operates in a regulatory vacuum. He urged governments worldwide to implement mandatory testing frameworks, particularly for systems that could operate beyond human oversight.
The scientist acknowledged growing international momentum for AI governance, citing increased public awareness and expert coalitions advocating control mechanisms. Through the Future of Life Institute, which he co-founded in 2014, Tegmark continues to push for balanced development that harnesses AI's potential in medicine and science while mitigating existential risks.
Reference(s):
cgtn.com