
AI News Briefing — May 4, 2026: OpenAI's o1 Outperforms ER Doctors, SAG-AFTRA AI Deal, and the Podcast 'Slop' Crisis

7 Top Stories

1. OpenAI’s o1 Outperforms Human Doctors in Harvard ER Trial

In a landmark study published in Science, Harvard Medical School researchers found that OpenAI’s o1 reasoning model correctly diagnosed 67% of emergency room patients from electronic health records, compared to 50–55% accuracy by human triage doctors. The trial tested 76 patients at Boston’s Beth Israel Deaconess Medical Center, with the AI’s advantage most pronounced in rapid-decision scenarios with minimal information. When more detail was available, the AI reached 82% accuracy versus 70–79% for human experts. The study also found the AI scored 89% on creating long-term treatment plans compared to 34% for doctors using conventional resources. Researchers emphasized this is a “second opinion” tool, not a replacement — the AI doesn’t observe physical symptoms or patient distress signals.

2. SAG-AFTRA Reaches Four-Year Studio Deal With New AI Guardrails

Following the Writers Guild’s agreement last month, the Screen Actors Guild has reached a new four-year deal with major studios that includes AI protections for performers. While official details are still pending, the contract reportedly features a sizable contribution to the union’s pension fund, increased streaming residuals, and specific safeguards around the use of AI to replicate or replace actors. The deal signals growing industry consensus on establishing boundaries for generative AI in entertainment production.

3. Academy Awards Bars AI From Acting and Writing Categories

The Academy of Motion Picture Arts and Sciences has released rules for the 99th Academy Awards (2027) stating that only roles “demonstrably performed by humans with their consent” will be eligible for acting awards. Screenplays must also be “human-authored” to qualify for writing categories. The Academy reserves the right to request additional information if questions arise about generative AI use in any film. This effectively rules out AI-generated performers like the recently announced “Tilly Norwood” from Oscar contention.

4. AI-Generated Podcasts Flood Platforms — 39% of New Feeds Are “Slop”

Bloomberg reports that 39% of all new podcast feeds created over a recent nine-day period were likely AI-generated, according to data from Podcast Index. The surge is driven largely by Inception Point AI, which reportedly publishes approximately 3,000 AI-generated podcast episodes per week. Over 10,000 new podcast feeds appeared in that window, with roughly 4,200 showing signs of AI generation. The flood of low-quality automated content is challenging audio platforms’ curation systems and raising concerns about discoverability for human creators.

5. OpenAI Rolls Out Advanced Account Security for High-Risk Users

OpenAI has launched enhanced protections for users who enroll in its advanced account security program, including passkey and physical security key support for ChatGPT and Codex accounts. Enrolled users receive alerts about new logins and are automatically excluded from AI model training data. The move comes as high-profile accounts face increased targeting, and positions OpenAI alongside other major platforms offering hardware-level authentication options.

6. Anthropic Launches Claude Security for Enterprise Code Scanning

Anthropic has released Claude Security, an enterprise tool powered by its Opus 4.7 model that scans business codebases for vulnerabilities and automatically generates fixes. The tool is rolling out to enterprise customers globally. It’s distinct from Anthropic’s Mythos research model, which can identify and exploit vulnerabilities across operating systems and web browsers. The launch signals Anthropic’s push into the competitive enterprise cybersecurity market alongside traditional security vendors.

7. Musk v. Altman Trial Enters Financial Testimony Phase

The ongoing legal battle between Elon Musk and OpenAI continued with testimony from Jared Birchall, Musk’s money manager at Excession LLC. Key revelations include emails showing Musk told Altman in 2020 that OpenAI looked “hypocritical” after the Microsoft deal and suggested changing the company’s name. Financial documents confirmed Musk’s roughly 60 donations to OpenAI, while a dispute emerged over whether Tesla vehicles given to individuals counted as in-kind contributions to the company. Musk also clarified that Tesla’s “robot army” refers to manufacturing robots, not weapons — though he acknowledged in court that “worst case situation is AI kills us all.”


Trend Watch

| Domain | Signal | Direction |
| --- | --- | --- |
| Healthcare AI | o1 outperforms doctors in Harvard ER trial | 🟢 Accelerating |
| Entertainment AI | SAG-AFTRA + Academy set AI boundaries | 🟢 Regulating |
| Audio/Content | 39% of new podcasts are AI-generated | 🔴 Flooding |
| Cybersecurity | Anthropic launches enterprise code scanner | 🟢 Expanding |
| AI Litigation | Musk v. Altman trial reveals internal tensions | 🟡 Ongoing |

What to Watch

  • Academy AI enforcement: The Oscars rules say only humans are eligible, but the Academy says it can “request more information” if questions arise. Expect the first high-profile AI eligibility challenge soon — the rules are new and untested.
  • Podcast platform response: With nearly 40% of new feeds being AI-generated, platforms like Spotify and Apple Podcasts will face pressure to implement detection and labeling systems, or risk degrading their entire content ecosystem.
  • Harvard AI trial follow-up: The study’s limitation — AI only reads text records, not physical symptoms — means the next research frontier is multimodal AI that can incorporate visual and audio patient data. Expect rapid investment in this direction.