Musk v. OpenAI Trial Heats Up, Uber Burns Annual AI Budget in 4 Months on Claude Code
AI News Briefing — May 2, 2026 (06:00 CST)
7 Top Stories
1. Musk v. OpenAI Trial: Money Manager Contradicts Musk on Tesla Donations
Jared Birchall, Elon Musk’s financial manager, testified in the Musk v. OpenAI trial that Musk donated four Teslas as in-kind contributions to OpenAI — directly contradicting Musk’s own courtroom claim that he “bought at full price and gave them to individuals.” Musk appeared notably more subdued on day two, offering shorter yes/no answers on cross-examination. The trial also revealed that Musk paused his quarterly donations to OpenAI until the for-profit terms were settled, and that he admitted all of his ventures since OpenAI have been for-profit. Judge Gonzalez Rogers pressed Musk on whether he read the term sheet for OpenAI’s for-profit wing, to which Musk conceded he “did not read the fine print.” The jury was dismissed early at one point after OpenAI’s lawyers successfully moved to strike portions of Birchall’s direct testimony.
2. Uber Spent Entire 2026 AI Budget in 4 Months on Claude Code and Cursor
Uber’s CTO revealed the company burned through its entire 2026 AI budget in just four months, driven overwhelmingly by Claude Code adoption. The ride-hailing giant rolled out Claude Code to engineers in December 2025, and usage doubled by February as developers embraced its multi-step reasoning capabilities. Monthly API costs per engineer ranged from $500 to $2,000, and 95% of Uber engineers now use AI coding tools monthly. Cursor, the other major tool, has plateaued in usage while Claude Code dominates workflows. With R&D spending at $3.4 billion annually, Uber is now “back to the drawing board” on AI budgeting — a signal that AI developer tools are becoming so valuable that restricting access feels counterproductive.
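A quick back-of-envelope calculation shows how fast those per-engineer costs compound. Only the $500–$2,000 monthly range and the 95% adoption rate come from the report; the 5,000-engineer headcount below is an illustrative assumption, not an Uber figure.

```python
# Rough annual AI-tooling spend from the reported per-engineer cost range.
# Headcount is a hypothetical assumption for illustration only.

def annual_ai_spend(engineers: int, adoption: float,
                    low_per_month: int = 500,
                    high_per_month: int = 2000) -> tuple[int, int]:
    """Return (low, high) annual spend in dollars for active AI-tool users."""
    active = int(engineers * adoption)  # engineers actually using the tools
    return active * low_per_month * 12, active * high_per_month * 12

# Hypothetical 5,000-engineer org at the reported 95% monthly adoption:
low, high = annual_ai_spend(5000, 0.95)
print(f"${low:,} – ${high:,} per year")  # $28,500,000 – $114,000,000 per year
```

At that scale, even the low end of the range dwarfs most companies' planned AI line items, which is consistent with a budget set for twelve months lasting four.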
3. Spotify Launches ‘Verified’ Badge to Distinguish Human Artists from AI
Spotify is rolling out a green checkmark “Verified by Spotify” badge to help listeners identify human artists versus AI-generated content. Verification requires “defined standards demonstrating authenticity” such as linked social accounts, consistent listener activity, merchandise sales, or concert dates. The company says over 99% of artists listeners actively search for will be verified, representing hundreds of thousands of acts. The move follows controversies like The Velvet Sundown, a “synthetic music project” with 850,000 monthly listeners that was revealed to be AI-generated. Critics note the system could disadvantage independent artists without touring or merch infrastructure, and that a human artist badge doesn’t guarantee the music itself wasn’t AI-assisted.
4. Academy of Motion Picture Arts and Sciences Rules Only Humans Can Win Oscars
The Academy has published rules for the 99th Academy Awards (2027) stating that “only roles credited in the film’s legal billing and demonstrably performed by humans with their consent will be considered eligible.” Screenplays must also be “human-authored.” The rule explicitly bars AI-generated performances and writing from Oscar contention. This closes the door on projects like Tilly Norwood, a virtual actor created by Particle6. If questions arise about AI use in a film, the Academy can “request more information about the nature of the use and human authorship,” putting the burden of proof on filmmakers.
5. Anthropic Launches Claude Security for Enterprise Code Scanning
Anthropic has rolled out Claude Security, an enterprise tool that uses the Opus 4.7 model to scan a company’s entire codebase for vulnerabilities and automatically generate fixes. The tool is now available globally to enterprise customers. Claude Security integrates directly into existing development workflows and provides vulnerability reports alongside suggested patches. This is distinct from Anthropic’s previously reported Mythos project — a powerful model capable of identifying and exploiting vulnerabilities across operating systems and web browsers — which remains a separate research effort.
6. OpenAI Introduces Advanced Account Security for High-Risk Users
OpenAI has launched enhanced security settings for users at elevated risk of account compromise. Users who enroll can sign in using passkeys or physical security keys (YubiKey, etc.) and will receive alerts for new logins to their ChatGPT and Codex accounts. A notable privacy feature: users with advanced security enabled are automatically excluded from AI model training. This move comes as enterprise and government adoption of OpenAI tools accelerates, raising the stakes for account security.
7. US Lawmakers Advance Bill to Age-Gate AI Chatbots
US lawmakers are pushing forward legislation that would require AI chatbot platforms to verify user age before granting access. The bill targets the growing concern over minors’ unrestricted access to increasingly sophisticated conversational AI systems. While specific provisions are still being refined, the legislation would likely mandate age-verification mechanisms similar to those used for other restricted online services. The move reflects broader regulatory momentum around AI safety, particularly concerning younger users who may be more susceptible to manipulation or inappropriate content from AI systems.
Trend Watch
| Domain | Trend | Signal |
|---|---|---|
| AI Developer Tools | Enterprise budgets overwhelmed by adoption | Uber’s blown AI budget signals the era of “too productive to afford” has arrived |
| AI Content Authenticity | Platforms implementing provenance signals | Spotify’s badges + Academy rules show industry self-regulation accelerating |
| AI Governance & Policy | Age-gating and safety rules advancing | US lawmakers targeting chatbot access; regulatory pressure mounting |
| AI Security | Offensive and defensive AI maturing in parallel | Anthropic’s enterprise scanner vs. Mythos research — both sides scaling up |
| AI Legal Battles | Musk v. OpenAI trial shaping precedent | Financial documents and internal emails becoming part of public record |
What to Watch
- Musk v. OpenAI trial continues — Jared Birchall’s testimony is just the beginning. The trial’s financial evidence and internal communications could reshape how nonprofit-to-for-profit AI transitions are structured legally.
- Enterprise AI budget recalibrations — Uber’s budget blowout will likely force other companies to rethink AI tool pricing models. Expect more announcements from companies trying to balance developer productivity gains against spiraling API costs.
- AI content labeling standards — Spotify’s badge system and the Academy’s Oscar rules may catalyze broader industry standards for labeling AI-generated content across media platforms.