How to Track the Fate of AI Security Testing Commitments: A Guide to Monitoring Government-Industry Agreements
Introduction
In early May 2023, the US Commerce Department published details of a voluntary agreement under which Microsoft, Google, and xAI pledged to submit their most advanced AI models to government scientists for security testing before public release. Within weeks, that page was taken down — a move that raised questions about transparency and the stability of AI safety commitments. This how-to guide walks you through the process of understanding what happened, verifying changes to official records, and staying informed about similar agreements in the future. Whether you are a researcher, journalist, or concerned citizen, these steps will help you navigate the complex landscape of government-industry AI safety collaboration.

What You Need
- Internet access for searching government websites and news archives.
- Familiarity with basic web browsing and use of search engines.
- Knowledge of AI frontier models (e.g., GPT-4, Gemini, Grok) is helpful but not required.
- Access to news sources like Reuters, The Next Web, or other tech journalism outlets.
- Optional: A note-taking tool or document editor to track your findings.
Step-by-Step Guide
Step 1: Gather Context About the Original Agreement
First, you need to understand what the agreement was. On May 5, 2023, the US Commerce Department posted a page stating that Microsoft, Google, and xAI had agreed to let government scientists test their frontier AI models for security flaws before public release. This was part of a voluntary safety pledge aimed at reducing risks from advanced AI systems. Search for archived versions of the page using tools like the Wayback Machine or search-engine caches. Look for key details: the exact date, the companies involved, and the scope of testing. This background is critical: without it, you cannot judge the significance of the removal.
Step 2: Confirm the Removal of the Page
On the day you start your investigation, visit the original URL of the Commerce Department's page. If you are unsure of the exact link, search for phrases like "US Commerce Department AI security testing agreement Microsoft Google xAI" or similar terms. In this case, the page is no longer accessible. Document the HTTP status (e.g., 404 Not Found) or any redirect message. Take a screenshot and note the date and time. You can also use a tool like httpstatus.io to verify the removal. This step establishes that the details are no longer publicly available; whether the removal was deliberate is a separate question.
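If you prefer to record the HTTP status programmatically rather than via a third-party checker, a short standard-library sketch follows. The URL you pass in is whatever page you are verifying; the tab-separated log line is an illustrative format, not a standard.

```python
import datetime
import urllib.error
import urllib.request

def check_status(url: str, timeout: float = 10.0) -> int:
    """Return the final HTTP status code for url (redirects are followed)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404 when the page has been removed

def log_entry(url: str, status: int) -> str:
    """Format a timestamped evidence line for your notes (UTC)."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat(timespec="seconds")
    return f"{stamp}\t{url}\t{status}"
```

Note that `check_status` will still raise `urllib.error.URLError` if the whole domain is unreachable (DNS failure, timeout), which is itself worth logging as evidence.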
Step 3: Find Alternative Sources for the Original Content
Since the official page is gone, turn to news reports. The Reuters article published that Monday, the day after the removal was noticed, contains the core facts. Other outlets like The Next Web may have republished the story. Use a news aggregator or search engine with keywords such as "Commerce Department removes AI testing agreement page." Read multiple reports to cross-check details. For the most reliable record, look for cached versions of the Reuters article or direct quotes from government officials. This step ensures you have the same information that was once on the government site.
Step 4: Analyze the Implications of the Removal
Now that you have the facts, consider why the removal matters. The deletion could signal a shift in policy, a lack of commitment to transparency, or simply an administrative mistake. Contrast the original agreement's promise of safety checks with the opaque removal. Note that the companies themselves have not publicly commented on the change. This step is about connecting the dots: the agreement was voluntary, and without public documentation, accountability decreases. You might also compare this with similar safety commitments from other governments (e.g., the UK AI Safety Summit) to see if transparency is consistent globally.

Step 5: Stay Updated and Track Future Changes
The removal does not necessarily mean the agreement is dead. Monitor official government pages, press releases, and reputable tech news for any reinstatement or new announcements. Set up Google Alerts for phrases like "Commerce Department AI security testing" or "Microsoft Google xAI safety pledge." Follow the companies' own blogs and press rooms. If the page reappears or if a formal statement is made, you will be the first to know. You can also engage with civil society organizations that track AI governance, such as the Center for AI Safety or the Partnership on AI. This final step transforms your research into ongoing vigilance.
Tips for Effective Monitoring
- Use the Wayback Machine regularly: Archive.org captures snapshots of web pages. Bookmark the URLs of important AI safety pages and check them weekly.
- Cross-check multiple sources: News outlets can miss nuances. Compare coverage from Reuters, The Next Web, Wired, and TechCrunch for a fuller picture.
- Look for official statements: The Commerce Department may issue a statement explaining the removal. Search their press releases page or social media accounts.
- Document everything: Keep a timeline of changes. Note dates when pages go up, come down, or get modified. This can reveal patterns over time.
- Be wary of misinformation: Some may claim the removal is part of a cover-up. Stick to verifiable facts: the page existed, then it did not. Speculate only with evidence.
- Connect with experts: Follow AI governance researchers on platforms like LinkedIn or Twitter. They often share real-time updates and analysis that you can trust.
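The "document everything" tip is easiest to keep honest with a small machine-readable timeline. A sketch that renders entries as CSV you can save alongside screenshots; the field names are illustrative, not a standard schema:

```python
import csv
import io

def timeline_row(date: str, url: str, event: str, source: str) -> list[str]:
    """One timeline entry: when, where, what happened, and how you know."""
    return [date, url, event, source]

def write_timeline(rows: list[list[str]]) -> str:
    """Render timeline rows as CSV text with a header line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["date", "url", "event", "source"])
    writer.writerows(rows)
    return buf.getvalue()
```

Keeping the "source" column pointed at something verifiable (a Wayback snapshot URL, a dated screenshot file) is what turns a list of claims into evidence.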