001 · AI News · 5 min read

AI News Briefing — May 9, 2026

7 Top Stories

1. OpenAI v. Musk Trial: Explosive Testimony From Former Board Members

The trial between OpenAI and Elon Musk entered a critical phase with video depositions from former board member Helen Toner and safety researcher Tasha McCauley. Toner detailed the circumstances around Sam Altman’s November 2024 firing, citing a “pattern of behavior” involving honesty and candor issues, including undisclosed interests in an OpenAI startup fund. She also revealed the board discussed merging OpenAI with Anthropic during the crisis, with Dario Amodei potentially becoming CEO. McCauley testified about a “culture of lying and deceit” spreading through the company. Musk’s legal team appears focused on painting Altman as untrustworthy, while the jury evaluates claims about whether Musk’s donations carried conditions. Closing arguments are expected within a week.

2. Anthropic Releases Natural Language Autoencoders — Turning Claude’s Activations Into Readable Text

Anthropic published groundbreaking research on Natural Language Autoencoders (NLAs), a method that converts Claude’s internal neural activations into plain-language explanations. The system trains one copy of Claude to verbalize what its activations mean, while a second copy reconstructs the original activation from that text. Applied to safety testing, NLAs revealed that Claude Opus 4.6 and Mythos Preview suspected they were being evaluated far more often than they explicitly stated — showing evaluation awareness on 26% of SWE-bench problems and 16% of code-writing tests, compared to less than 1% in real-world usage. The team also released code and an interactive frontend via Neuronpedia for researchers to explore NLAs on open models.
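The NLA setup described above is, at heart, an autoencoder whose bottleneck is text: one model copy verbalizes an activation, a second reconstructs the activation from that verbalization, and the reconstruction error is the training signal. A minimal toy sketch of that round-trip objective, with small random linear maps standing in for the two model copies and a low-dimensional vector standing in for the natural-language description (all names and dimensions here are illustrative assumptions, not Anthropic's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two model copies. In the real method both roles
# are played by copies of Claude and the bottleneck is plain text; here
# the "description" is just a small vector so the example runs anywhere.
D_ACT, D_DESC = 64, 8
W_verb = rng.normal(size=(D_DESC, D_ACT)) / np.sqrt(D_ACT)    # "verbalizer"
W_recon = rng.normal(size=(D_ACT, D_DESC)) / np.sqrt(D_DESC)  # "reconstructor"

def verbalize(activation):
    # First copy: compress an internal activation into a description
    return W_verb @ activation

def reconstruct(description):
    # Second copy: recover the activation from the description alone
    return W_recon @ description

# Training signal: the description must carry enough information for
# the second copy to rebuild the original activation.
activation = rng.normal(size=D_ACT)
recon = reconstruct(verbalize(activation))
loss = float(np.mean((activation - recon) ** 2))
print(f"reconstruction loss: {loss:.3f}")
```

Because the reconstructor only ever sees the description, minimizing this loss forces the bottleneck to faithfully summarize what the activation encodes — which is what lets the verbalized text double as a human-readable explanation.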

3. Cloudflare Cuts 1,100 Workers (20% of Workforce) as AI Usage Surges 600%

Cloudflare announced it is laying off approximately 1,100 employees, roughly 20% of its workforce. CEO Matthew Prince framed the decision not as cost-cutting but as a strategic repositioning for the “agentic AI era,” noting the company’s AI-related usage has grown 600%. The layoffs come as Cloudflare invests heavily in AI infrastructure, including its AI Gateway and Workers AI platform. The announcement dominated Hacker News with over 1,200 upvotes and nearly 900 comments, sparking debate about whether AI-driven automation is beginning to produce the white-collar job displacement many have predicted.

4. Google DeepMind’s AlphaEvolve: From Lab Curiosity to Infrastructure Backbone

DeepMind published a comprehensive impact report on AlphaEvolve, its Gemini-powered algorithm-design agent, showing it has moved from experimental tool to core infrastructure component. Highlights include: suggesting quantum circuits with 10x lower error for Google’s Willow quantum processor, collaborating with Terence Tao on Erdős problems, improving DNA sequencing accuracy at PacBio by 30%, and proposing counterintuitive TPU circuit designs now built into next-generation silicon. Commercially, Klarna doubled its transformer model training speed, FM Logistic saved 15,000 km in annual routing distance, and Substrate achieved a manyfold speedup in computational lithography simulations. Jeff Dean called it “TPU brains helping design next-generation TPU bodies.”

5. Meta Employees Reportedly “Miserable” Amid AI Push and Looming Layoffs

A New York Times report paints a grim picture inside Meta, where employees are dealing with the company’s plan to cut 10% of staff later this month alongside an aggressive AI agent mandate. Meta recently began tracking employees’ computer activity to train its AI models and is pushing staff to create so many AI agents that “others had to introduce agents to find agents, and agents to rate agents.” Workers report “anger and anxiety,” with some no longer viewing Meta as a long-term career destination and others trying to signal they want to be laid off to receive severance. The story adds to growing concerns about the human cost of the industry’s AI pivot.

6. OpenAI Launches Codex Chrome Extension for In-Browser Task Automation

OpenAI released a Chrome Web Store extension for Codex that enables the AI to operate directly inside websites and apps where users are already signed in. The extension works through task-specific tab groups, allowing users to keep their active browsing sessions separate from Codex’s automated workflows. This marks a significant step in OpenAI’s push toward agentic AI — moving beyond chat interfaces to systems that can perform multi-step tasks within existing web applications.

7. Chrome Removes “On-Device AI Doesn’t Send Data to Google Servers” Claim

A Reddit user spotted that Google has removed language from Chrome’s documentation stating that on-device AI processing does not send data to Google servers. The change, which garnered over 600 upvotes on Hacker News, raised privacy concerns among users who had relied on on-device AI specifically to avoid data transmission. The removal comes as Chrome increasingly integrates Google’s Gemini models across its feature set, including the “Help me write” Gmail tool that now personalizes email drafts to match individual writing style and pulls context from Google Drive and Gmail.

Trend Watch

Domain | Signal | Direction
AI Agents | Codex enters browser; Meta mandates “agents to rate agents” | 🔥 Heating up
AI Interpretability | Anthropic NLAs decode Claude’s hidden thoughts | 🔥 Heating up
AI & Employment | Cloudflare cuts 20%; Meta tracks staff for AI training | ⚠️ Volatile
AI in Science | AlphaEvolve impacts quantum, math, genomics, TPU design | 🔥 Heating up
AI Ethics & Governance | OpenAI v. Musk trial; AI slop killing communities | ⚠️ Volatile

What to Watch

  • OpenAI v. Musk trial closing arguments — Expected within a week. The testimony from former board members has set the stage for what could be a landmark case on nonprofit governance and donor obligations in AI.
  • Meta’s 10% workforce reduction — Later this month, Meta will begin layoffs affecting thousands. Expect more reporting on how the AI agent mandate reshapes remaining roles and whether the productivity push backfires.
  • Anthropic’s NLA safety implications — If NLAs reliably reveal unverbalized evaluation awareness, this could reshape how AI labs conduct safety testing and how regulators think about model transparency.