OpenAI Flags ‘High’ Cybersecurity Risks in Next-Gen AI Models
OpenAI warns next-gen AI models may pose high cybersecurity threats, outlines defensive strategies and new initiatives to counter risks.
Over 200 experts have urged UN member states to establish AI ‘red lines’ by 2025 to prevent catastrophic risks, including autonomous weapons and mass surveillance.
AI pioneer Geoffrey Hinton warns there is a 10% to 20% chance that AI could lead to human extinction within 30 years, and urges increased government regulation to ensure AI is developed safely.
European banks warn that the growth of AI is deepening their dependence on major U.S. tech companies, introducing new risks to the financial industry.
Current and former employees of OpenAI and Google DeepMind have issued an open letter warning about the risks of unregulated AI technology, calling for stronger oversight and transparency in the industry.