AI Leaders OpenAI and Anthropic Partner with U.S. Government on Safety Testing

OpenAI and Anthropic, two prominent artificial intelligence companies, have agreed to collaborate with the U.S. AI Safety Institute on pre-release testing of their new AI models. The partnership, announced by the National Institute of Standards and Technology (NIST), aims to address growing concerns about the ethical and safety risks associated with advanced AI systems.
The AI Safety Institute, established following the Biden-Harris administration’s executive order on artificial intelligence in October 2023, will conduct safety assessments on major new models from both companies. This initiative aligns with the government’s efforts to promote responsible AI development while researching potential labor market impacts.
OpenAI CEO Sam Altman confirmed the agreement, stating, “We’re committed to pre-release testing of our future models with the U.S. AI Safety Institute.” The company also reported doubling its weekly active users to 200 million over the past year. Amid this growth, reports suggest OpenAI is in talks for a new funding round led by Thrive Capital, potentially valuing the company at over $100 billion.
Anthropic, founded by former OpenAI executives and backed by Amazon, has also joined the collaborative effort. The company, most recently valued at $18.4 billion, sees the partnership as an opportunity to strengthen its approach to AI safety. OpenAI, meanwhile, continues its partnership with Microsoft.
This development comes as the AI industry faces increased scrutiny. The Federal Trade Commission (FTC) and Department of Justice are reportedly considering antitrust investigations into OpenAI, Microsoft, and Nvidia. Additionally, California lawmakers recently passed an AI safety bill, now awaiting Governor Gavin Newsom’s decision, which would mandate safety testing for certain AI models.
Some tech companies have expressed concerns that overly strict regulations could hinder innovation. However, proponents argue that such measures are necessary to ensure the responsible development of AI technologies.
As the field of AI continues to evolve rapidly, this collaboration between industry leaders and government agencies represents a significant step toward balancing technological advancement with safety and ethical considerations. Its outcomes could shape the future of AI governance and development practices across the industry.