X Restricts Grok AI’s Image Tools Amid Global Backlash
X implements geoblocking and subscription requirements for Grok AI’s image tools following global backlash over explicit content generation.
News & Insights Across Asia
Britain enforces new laws criminalizing non-consensual AI-generated intimate images, targeting platforms like X amid the Grok chatbot controversy.
China launches a 2026 campaign to regulate AI-altered videos, targeting distorted cultural content and public-morality concerns, with plans for long-term oversight.
Beijing’s 2025 labor guidelines address AI-driven job displacement, clarifying legal protections for workers amid technological shifts.
Global financial regulators intensify AI monitoring to address systemic risks and herd behavior in banking, per FSB and BIS reports.
The UN General Assembly addresses AI safety with China proposing global guidelines to prevent weaponization and ensure ethical development.
Over 200 experts urge UN nations to establish AI ‘red lines’ by 2025 to prevent catastrophic risks, including autonomous weapons and mass surveillance.
Italy becomes the first EU nation to enact comprehensive AI legislation, prioritizing privacy, oversight, and youth protections while balancing innovation and cybersecurity.
China initiates nationwide AI governance campaign to combat misinformation and illegal content through platform accountability measures.
OpenAI’s call to restrict Chinese AI models sparks debate about fair competition, global governance, and the ethics of tech regulation in the AI development race.
Chinese universities are tightening rules on AI use in academic writing to curb overreliance and misconduct while promoting sounder academic evaluation and AI literacy.
South Korea temporarily suspends the Chinese AI app DeepSeek due to privacy law compliance issues, with plans to resume service once improvements are made.
AI pioneer Geoffrey Hinton warns there is a 10% to 20% chance that AI could lead to human extinction within 30 years, urging stronger government regulation to ensure AI's safe development.
New tool by LatticeFlow AI reveals compliance gaps in major AI models, highlighting challenges Big Tech faces in meeting the EU AI Act’s stringent regulations.
California Governor Gavin Newsom vetoes a proposed AI safety bill, sparking mixed reactions from lawmakers, tech leaders, and advocacy groups over the future of AI regulation.
California’s proposed AI safety bill ignites a fierce debate among tech giants, politicians, and Hollywood, as the state grapples with regulating AI innovation and potential risks.
The world’s leading tech companies are urging the EU to adopt a lenient approach in its upcoming AI Act to avoid hefty fines, sparking debates over innovation, transparency, and regulation.
The Cyberspace Administration of China has released draft regulations to standardize the labeling of AI-generated content, aiming to protect national security and public interests.
OpenAI and Anthropic have agreed to provide the U.S. government access to their new AI models for safety testing, marking a significant collaboration in AI regulation.