
US secures early access to AI models from major tech firms to assess national security risks before public release.
The United States government has reached an agreement with major technology companies, including Microsoft, Google, and xAI, to gain early access to advanced artificial intelligence models before their public release.
Under the arrangement, the Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, will evaluate these models for potential national security risks, assessing their capabilities and identifying threats such as cyberattacks or misuse in military contexts before wider deployment.
The agreement reflects growing concern among US policymakers over the rapid advancement of AI and its implications for national security. Officials aim to establish rigorous evaluation processes to better understand the risks posed by frontier AI systems.
This move builds on earlier agreements with leading AI developers and strengthens the government’s role in overseeing emerging technologies. CAISI has already conducted dozens of evaluations on advanced AI systems, including models not yet publicly available.
The development also aligns with broader efforts by US defense and regulatory bodies to expand oversight of AI deployment across critical sectors.
