Ireland’s Data Protection Commission (DPC) has initiated an investigation into Google’s handling of European Union (EU) user data in the development of its advanced Artificial Intelligence (AI) model, Pathways Language Model 2 (PaLM 2).
The inquiry seeks to determine whether Google adequately safeguarded personal information before utilizing it to train PaLM 2, which underpins several of Google’s AI applications. The DPC’s action underscores the growing concern among regulators about how tech giants collect and process personal data for AI innovations.
Collaborative Effort Across Europe
The DPC, which serves as the lead EU regulator for many major U.S. tech firms because their European headquarters are based in Ireland, announced that this investigation is part of a wider collaborative initiative. Regulators across the EU and the European Economic Area (EEA) are working together to ensure that the processing of personal data for AI development aligns with stringent privacy laws.
Implications for Big Tech and AI
This probe reflects broader scrutiny of how companies like Google balance technological advancement with their legal obligations to protect user privacy. The outcome could have significant implications for AI development practices and set precedents for how personal data is used to train sophisticated AI models.
Follow-Up to Actions Against Social Media Platforms
The investigation follows a recent agreement with the social media platform X, formerly known as Twitter. Under that agreement, reached after legal action by the Irish regulator, the platform committed not to use EU users’ data to train its AI systems unless those users had first been given the opportunity to withdraw their consent.
The DPC’s proactive stance signals a rigorous enforcement of data protection regulations, emphasizing that compliance is non-negotiable for companies handling personal data within the EU.
Reference: “Top EU privacy regulator probes Google’s use of EU data for AI model,” cgtn.com