OpenAI and Anthropic Partner with U.S. Government for AI Safety Testing

Leading generative AI developers OpenAI and Anthropic have entered into agreements with the U.S. government to provide access to their new AI models for safety testing. Announced on Thursday, the agreements aim to improve the safety and reliability of AI technologies before they are publicly released.

The agreements were made with the U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST). The institute will offer feedback on potential safety improvements to the companies’ models, both before and after their public deployment, and intends to work closely with its British counterpart, the UK AI Safety Institute, to ensure a coordinated approach to AI safety across borders.

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.

The collaboration underscores a growing emphasis on voluntary commitments by leading AI developers to prioritize safety and ethical considerations in their innovations. “Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Jack Clark, co-founder and head of policy at Anthropic. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development.”

This initiative stems from the White House executive order on AI signed in 2023, which set out a framework for the safe, secure, and trustworthy development and deployment of AI models in the United States. While the U.S. government has largely left tech companies free to innovate, this collaboration represents an attempt to ensure safety without stifling progress.

In contrast to the U.S. approach, the European Union has passed an ambitious AI Act to more closely regulate the industry. Meanwhile, lawmakers in California, home of Silicon Valley, have pushed through a state AI safety bill (SB 1047) that awaits the governor’s signature to become law.

OpenAI CEO Sam Altman welcomed his company’s agreement with the U.S. government, emphasizing that it is “important” for regulation to take place at the national level. This was a subtle critique of the state-level legislation in California, which OpenAI opposes due to concerns it may hinder research and innovation.

As AI technologies continue to evolve rapidly, collaborations like these between AI developers and government agencies are crucial. They represent a proactive step toward ensuring that advancements in AI are both innovative and safe, benefiting society while minimizing potential risks.
