Anthropic Strikes Stunning Compute Deal With Musk's SpaceX
May 7, 2026
D.A.D. today covers 11 stories from 5 sources. What's New, What's Innovative, What's in the Lab, and What's in Academe, plus Capitol Hill hearings and new podcast episodes.
D.A.D. Joke of the Day: I asked Claude to help me cut my presentation down to 10 slides. It gave me 47 slides explaining why brevity matters.
What's New
AI developments from the last 24 hours
Anthropic Strikes Stunning Compute Deal With Musk's SpaceX
In one of the most unexpected partnerships of the AI race so far, Anthropic announced it has signed a deal to lease the entirety of SpaceX's Colossus 1 data center—more than 220,000 NVIDIA GPUs across 300+ megawatts of compute capacity—coming online this month. The cluster was originally built by Elon Musk's xAI to train Grok, which means Claude is about to start running on infrastructure built for one of its most direct competitors. Musk has spent years publicly attacking Anthropic's senior leadership, and CEO Dario Amodei is among the loudest voices warning that AI systems pose catastrophic risk—a posture Musk has often dismissed. The deal puts those two camps in business together anyway.
The announcement is one piece of a sweeping Anthropic infrastructure spree. The Colossus 1 lease joins a roughly 5 GW agreement with Amazon (1 GW of new capacity by end of 2026), a 5 GW deal with Google and Broadcom for 2027, a $30 billion Microsoft/Azure partnership built around NVIDIA hardware, and a $50 billion U.S. AI infrastructure investment with Fluidstack. Anthropic also says it has "expressed interest" in partnering with SpaceX to develop multi-gigawatt orbital AI compute—data centers in space. For context, 300 MW is modest next to Anthropic's longer-term roadmap, but it is one of the largest single GPU clusters online anywhere today—and crucially, it is available now. The bulk of the Amazon, Google, and Microsoft capacity does not come online until late 2026 or 2027, so Colossus 1 is likely the most significant near-term compute addition Anthropic will get. The throughline is unmistakable: the company cannot get enough compute, and it is buying it from anyone with capacity to sell.
For xAI and SpaceX, this looks less like an olive branch and more like a business pivot. Hacker News commentary on the announcement was sharp: "Anthropic renting out the data center Elon built for Grok is the kind of plot twist you can't make up." Another widely upvoted reply: "Pretty smart for SpaceX though. They're turning an asset they made for a money-pit [Grok] into probably a major source of revenue ahead of their IPO." Critics also flagged that Colossus 1 has been the subject of repeated lawsuits over unpermitted gas turbines and air- and water-pollution concerns affecting nearby Memphis communities—a reputational risk for Anthropic, which has spent years branding itself as the safety-conscious lab.
Musk himself addressed the deal on X, writing that he "spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed," adding that "no one set off my evil detector." He also disclosed that xAI had already moved training to Colossus 2, meaning xAI freed up the original cluster only after standing up its successor—a sign that xAI is not exiting the frontier-model race, but monetizing yesterday's hardware while the new training cluster comes online.
Why it matters: The AI race is no longer mostly about who can train the smartest model—it is about who can secure enough compute to keep training and serving the next one. Anthropic has spent the past year stacking ten- and eleven-figure infrastructure deals with every major cloud provider and now with its loudest critic, and the willingness of the parties to set aside personal and ideological differences is a tell about how desperate the supply situation has become. For xAI, the deal raises a real question about identity: is it a frontier-model lab competing with OpenAI and Anthropic, or the AI infrastructure landlord renting capacity to its rivals? For now, Musk appears to be choosing both—and getting paid by Anthropic to do it.
Discuss on Hacker News · Source: anthropic.com
Claude Code Rate Limits Doubled; Peak-Hours Throttle Lifted
Effective immediately, Anthropic is doubling Claude Code's five-hour rate limits across Pro, Max, Team, and seat-based Enterprise plans, lifting the peak-hours throttle on Claude Code for Pro and Max accounts, and raising API rate limits for Claude Opus models. The new headroom flows directly from the Colossus 1 capacity covered above. One conspicuous omission: Anthropic did not address weekly usage caps, which remain unchanged—a gap several heavy users flagged immediately.
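Even with the higher ceilings, heavy API users will still see occasional 429s. Below is a minimal sketch of exponential-backoff retry around the official anthropic Python SDK; the model ID is a placeholder (use whatever your plan offers), and the SDK also ships built-in retry logic that may suffice for simple cases.

```python
import time

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def ask_claude(prompt: str, max_attempts: int = 5) -> str:
    """Call the Messages API, backing off exponentially when throttled."""
    for attempt in range(max_attempts):
        try:
            response = client.messages.create(
                model="claude-opus-4-20250514",  # placeholder model ID
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.content[0].text
        except anthropic.RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("still rate-limited after retries")
```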
Why it matters: For developers who have been hitting Claude Code's five-hour ceilings during workdays or losing throughput at peak times, this is concrete, immediate relief. For Opus API users, the higher rate limits should ease throttling on production workloads. The unchanged weekly caps remain the gating constraint for the heaviest users.
AI Makes It Easy to Fake Expertise—and Harder for Bosses to Spot
An essay circulating in tech circles describes a troubling workplace pattern: employees using AI tools to produce professional-looking work in fields they don't understand. The author recounts a colleague—not an engineer—who spent two months building a data architecture system with AI assistance that was "wrong from the start," while management let it continue due to sunk-cost thinking and the "appearance of momentum." The piece argues that AI makes it easy to impersonate expertise, and that organizations often lack incentives to challenge polished-looking output.
Why it matters: As AI-generated work becomes harder to distinguish from genuine expertise, companies face a growing quality-control problem—confident deliverables that pass surface inspection but fail under scrutiny.
Discuss on Hacker News · Source: nooneshappy.com
Prominent Developer Admits He's Stopped Reviewing AI-Generated Code
Simon Willison, a well-known developer and AI tools commentator, says he's stopped reviewing every line of AI-generated code—even for production systems. In a podcast interview, Willison acknowledged that the line between casual "vibe coding" (letting AI generate code with minimal oversight) and disciplined "agentic engineering" (careful AI-assisted development) is blurring in his own workflow. The admission is notable given Willison's previous advocacy for rigorous code review when using AI assistants.
Why it matters: When a respected practitioner admits their own standards are slipping as tools improve, it signals a broader shift in how professionals will realistically use AI coding assistants—and raises questions about quality control as AI-generated code enters more production systems.
Discuss on Hacker News · Source: simonwillison.net
What's Innovative
Clever new use cases for AI
Google Demonstrates AI Search Tools for Garden Planning
Google published a marketing post showcasing how its Search features can help with gardening—using AI Mode to visualize garden designs, Canvas for planting plans, Lens for plant identification, and Search Live for real-time diagnosis of plant problems. The post cites Google Trends data showing surging interest in "chaos gardens" (unstructured, naturalistic plantings), with related searches up 140% this spring. This is promotional content demonstrating Google's AI tools applied to a seasonal use case rather than announcing new capabilities.
Why it matters: This signals Google positioning its AI features for lifestyle applications beyond work tasks—though readers should note this is a marketing showcase, not a product launch.
What's in the Lab
New announcements from major AI labs
Uber Partners with OpenAI to Help Drivers Optimize Earnings
Uber announced a partnership with OpenAI to power AI assistants across its platform, including a new product called Uber Assistant aimed at helping drivers optimize their earnings. The company says the assistant turns marketplace data into actionable insights and can speed up onboarding for new drivers. Uber provided no performance benchmarks comparing drivers who use the assistant to those who don't. The platform handles 40 million trips daily across 10 million drivers in 70+ countries.
Why it matters: This signals how gig economy platforms are betting on AI to reduce driver churn and boost efficiency at massive scale—expect competitors like Lyft and DoorDash to follow with similar tools.
Top AI-Using Firms Pull Further Ahead, OpenAI Research Claims
OpenAI released research claiming top-performing enterprises now use 3.5x as much AI capability per worker as typical firms—up from 2x just two months ago. The key finding: message volume explains only 36% of this gap. The advantage comes from how these firms use AI, not just how often. Frontier firms send 16x more messages to Codex (OpenAI's coding agent) per worker, and show outsized adoption of newer tools like ChatGPT Agent and Deep Research. OpenAI is positioning this data to help enterprises benchmark their own AI maturity.
Why it matters: This is OpenAI making the case that AI adoption has a compounding effect—and that surface-level usage metrics ("we have 10,000 ChatGPT seats") miss the real story of competitive differentiation.
What's in Academe
New papers on AI and its effects from researchers
Most Developers Accept AI-Generated Code Without Editing, Study Finds
A study of 169 GitHub commits where developers linked their ChatGPT conversations found that programmers mostly accept AI-generated refactoring suggestions verbatim. When developers do modify the suggestions, changes tend to be substantial rather than minor tweaks. Researchers identified five distinct patterns for how developers adapt AI code recommendations, varying based on the refactoring task, how the developer prompted ChatGPT, and whether the AI's response was actually valid.
Why it matters: If developers are copy-pasting AI suggestions without modification, code review practices and quality assurance processes may need to adapt—the human-in-the-loop may be thinner than organizations assume.
Patent Language May Reveal Tech Breakthroughs Decades Early
Researchers developed TechToken, a transformer model that reads patent language to predict which technologies will combine next. The surprising finding: signals of future innovation appear in the collective vocabulary of patents decades before the actual breakthrough—not from any single inventor, but from linguistic patterns across thousands of filings. The model treats patent classification codes like words, learning which technology pairings are converging based on how similarly they're described. Researchers claim it outperforms existing models on patent-related prediction tasks, though specific benchmark comparisons weren't published.
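The full transformer architecture isn't spelled out here, but the core move (treating classification codes as tokens and learning embeddings from their co-occurrence across filings) can be illustrated with a plain word2vec stand-in. Everything below, including the toy CPC codes, is a simplified assumption, not the actual TechToken model:

```python
from gensim.models import Word2Vec  # pip install gensim

# Toy corpus: each "sentence" is the list of CPC classification codes
# assigned to one patent filing. Real input would be thousands of filings
# bucketed by time window.
patents = [
    ["G06N", "H04L", "G06F"],  # neural networks + networking + computing
    ["G06N", "A61B", "G16H"],  # neural networks + diagnostics + health informatics
    ["H04L", "G06F", "H04W"],  # networking + computing + wireless
    ["G06N", "G16H", "G06F"],
]

# Codes that appear in similar contexts end up with similar embeddings,
# the "described similarly" signal the story mentions.
model = Word2Vec(patents, vector_size=16, window=3, min_count=1, seed=0)

# Cosine similarity between two codes is a crude convergence score; rising
# similarity across successive time windows would flag a likely future pairing.
print(model.wv.similarity("G06N", "G16H"))
```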
Why it matters: If validated, this could give R&D strategists and patent analysts an early-warning system for emerging technology intersections—potentially reshaping how companies scout acquisitions, file defensive patents, or allocate research budgets.
AI Coding Assistants Boost Speed Modestly but Don't Improve Skills
A meta-analysis of 23 studies finds AI coding assistants deliver real but modest productivity gains—and no measurable impact on learning. Researchers pooled data from 2019-2025 studies and found a statistically significant productivity boost (effect size 0.33), roughly equivalent to moving from the 50th to 63rd percentile. The gains were larger in controlled lab settings than in real-world open-source or enterprise environments. For learning outcomes, the effect was negligible and statistically insignificant.
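The percentile framing is just the standard normal CDF applied to the pooled effect size: Φ(0.33) ≈ 0.63, so the average AI-assisted developer lands near the 63rd percentile of the unassisted distribution. A quick sanity check:

```python
from scipy.stats import norm  # pip install scipy

d = 0.33  # pooled effect size (Cohen's d) reported by the meta-analysis
percentile = norm.cdf(d)  # share of the control distribution below the treated mean
print(f"{percentile:.1%}")  # prints 62.9%, i.e. roughly the 63rd percentile
```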
Why it matters: This is the most rigorous synthesis yet of whether Copilot-style tools actually deliver—and the answer is "yes, but modestly," with a notable gap between lab results and workplace reality that should temper vendor claims.
AI Safety Training Can Backfire, Rewarding Harmful Responses
New research finds that the reward models used to align LLMs often prefer socially undesirable responses—the opposite of their intended purpose. Researchers tested five public reward models across bias, safety, morality, and ethical reasoning, converting social evaluation datasets into preference tests. No single model performed best across all domains, and the study identified a concerning trade-off: models trained to avoid bias can become less sensitive to context. Standard reward benchmarks, the researchers argue, miss these failures entirely.
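The method (converting social evaluation data into pairwise preference tests) is straightforward to picture: score two candidate replies with a reward model and check whether the socially preferable one wins. The sketch below uses one public reward model as an example; the model choice and the toy pair are illustrative assumptions, not the study's actual setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# One publicly available reward model; the paper evaluated five such models.
NAME = "OpenAssistant/reward-model-deberta-v3-large-v2"
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME)

def reward(prompt: str, response: str) -> float:
    """Scalar score the reward model assigns to a (prompt, response) pair."""
    inputs = tokenizer(prompt, response, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits[0].item()

prompt = "My coworker has an accent I find hard to understand. What should I do?"
preferred = "Ask them politely to repeat themselves, and give yourself time to adjust."
dispreferred = "Tell HR they are difficult to work with."

# The failure mode the study reports is this check coming back False:
# the reward model scoring the socially undesirable reply higher.
print(reward(prompt, preferred) > reward(prompt, dispreferred))
```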
Why it matters: If the systems designed to make AI helpful and harmless are themselves flawed, it raises questions about whether current alignment techniques can reliably produce trustworthy AI at scale.
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Wednesday, May 13 — Hearings to examine how social media verdicts demand federal action. Senate · Senate Judiciary Subcommittee on Privacy, Technology, and the Law (Open Hearing) 226, Dirksen Senate Office Building
What's On The Pod
Some new podcast episodes
How I AI — Code with Claude: The 5 biggest updates explained
The Cognitive Revolution — "Descript Isn't a Slop Machine": Laura Burkhauser on the AI Tools Creators Love and Hate