March 10, 2026

D.A.D. today covers 12 stories from 6 sources. What's New, What's Innovative, What's in the Lab, What's in Academe, and What's On The Pod.

D.A.D. Joke of the Day: My AI assistant said it would finish my report in seconds. Three hours later, I realized "seconds" was a request for more time.

What's New

AI developments from the last 24 hours

OpenAI Reportedly Pulls Back From Oracle Data Center Deal Over Chip Timing

OpenAI is reportedly pulling back from expanding its Stargate data center partnership with Oracle in Texas because it wants newer Nvidia chips than the Blackwell processors planned for the site. The underlying problem: Nvidia now ships new GPU generations annually instead of every two years, and data centers take 12-24 months to build. By the time infrastructure is ready, the hardware may already be a generation behind. Oracle, which has funded its buildout with over $100 billion in debt, has seen its stock drop 23% year-to-date.

Why it matters: This signals a brutal new dynamic for AI infrastructure—chip improvements are outpacing construction timelines, potentially stranding billions in data center investments before they're even operational.


AI Product Offered Artists Royalties. Artists Refused. Product Died.

Tess.Design, an AI image marketplace that paid artists 50% royalties when users generated images in their fine-tuned styles, shut down in January 2026 after less than two years. The founders' retrospective reveals the challenge: of 325 cold emails to artists, only 6.5% agreed to participate and had models trained, while zero of 11 illustration agencies would sign on. The 25 founding artists received advances of $300-$4,000, but the ethical licensing model couldn't sustain itself commercially.

Why it matters: This is a data point for anyone watching the AI-and-creative-rights debate: even with generous royalty splits and legal protections, artist adoption was low and the economics didn't work—suggesting 'ethical AI art' may need a different business model entirely.


Analysis Disputes Forbes Claim That Anthropic Loses $5,000 Per Heavy Claude User

A technical analysis disputes a widely circulated Forbes claim that Anthropic loses $5,000 monthly on heavy Claude Code Max subscribers. The rebuttal argues Forbes confused retail API pricing with actual compute costs. Using OpenRouter data—where competitors serve similar-sized models at roughly 10% of Anthropic's API rates—the author estimates real compute costs around $500/month for power users, not $5,000. Anthropic has said fewer than 5% of subscribers would hit the new weekly usage caps that prompted the original story.

Why it matters: For anyone evaluating AI subscription economics, a 10x gap between claimed losses and likely reality suggests the 'AI companies are bleeding money on power users' narrative may be significantly overstated.
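The rebuttal's back-of-the-envelope math can be sketched as follows. Note the 10% ratio is the analysis's estimate from OpenRouter pricing, not a number Anthropic has disclosed:

```python
# Illustrative arithmetic for the cost dispute, using figures quoted above.
# The 10% ratio is the rebuttal's estimate, not a disclosed number.

claimed_monthly_loss = 5_000   # Forbes figure, priced at retail API rates
compute_cost_ratio = 0.10      # rebuttal: real compute is ~10% of retail price

estimated_compute_cost = claimed_monthly_loss * compute_cost_ratio
print(estimated_compute_cost)  # 500.0
```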


What's Innovative

Clever new use cases for AI

Programming Language Designed for AI Fits Entirely in a Context Window

Mog is an open-source programming language designed specifically for AI agents to modify themselves. The key innovation: its entire specification fits in 3,200 tokens—small enough for an LLM to hold the complete language definition in context while writing code. Built in Rust, Mog compiles to native code and uses capability-based permissions, meaning a host application controls exactly which functions AI-generated code can execute. The project targets developers building AI systems that need to dynamically extend their own capabilities through self-written plugins or scripts.

Why it matters: This is developer infrastructure—but it signals a shift toward AI systems that can safely write and run their own code, a capability enterprises will eventually need to evaluate for both its productivity potential and its security implications.
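Mog's own syntax isn't shown in the coverage, but the capability-based permission idea, where the host decides exactly which functions generated code may call, can be illustrated with a toy Python sketch. The `Host` class and function names here are invented for illustration and are not Mog's actual API:

```python
# Toy sketch of capability-based permissions for AI-generated code.
# All names here are hypothetical; Mog's real mechanism is not shown.

class CapabilityError(Exception):
    pass

class Host:
    def __init__(self, granted):
        # The host grants an explicit allowlist of callable capabilities.
        self._granted = dict(granted)

    def call(self, name, *args):
        # Generated code can only invoke functions the host has granted.
        if name not in self._granted:
            raise CapabilityError(f"capability not granted: {name}")
        return self._granted[name](*args)

# The host grants read-only access and withholds anything destructive.
host = Host({"read_config": lambda key: {"retries": 3}.get(key)})

print(host.call("read_config", "retries"))  # 3
try:
    host.call("delete_file", "/etc/passwd")
except CapabilityError as e:
    print(e)  # capability not granted: delete_file
```

The design choice this models: safety comes from what the host never exposes, rather than from trying to inspect or filter the generated code itself.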


YC-Backed Startup Tackles File Management for AI Agents

Terminal Use, a Y Combinator-backed startup, launched a platform for deploying AI agents that work with files and folders. The service handles infrastructure plumbing—sandboxed environments, state persistence, file management—so teams can run coding agents, research agents, or document processors without building that layer themselves. The pitch: treat file systems as a core feature rather than an afterthought, letting agents maintain workspaces across sessions and share files between different AI processes. No performance benchmarks or customer data available yet.

Why it matters: This is developer infrastructure, but if your team is experimenting with AI agents that read, write, or process documents, it signals that the tooling layer for agent deployment is maturing—potentially making these workflows easier to operationalize.


Uncensored Image Generator Tests Hugging Face Content Policies

A Hugging Face Space offering uncensored NSFW image generation appeared on the platform, built on Flux architecture. The space, created by an anonymous user, represents the ongoing tension between open-source AI platforms and content moderation. No details on capabilities or safeguards were provided. Hugging Face has previously removed similar projects that violated its terms of service.

Why it matters: This is developer-community activity, not a product launch—but it signals the continuing cat-and-mouse dynamic between AI platforms trying to enforce content policies and users circumventing restrictions on open-source tools.


Research Technique Could Make AI Image Tools Finally Place Objects Where You Want Them

Researchers developed CoCo, a framework that improves AI image generation by having the model write executable code as an intermediate planning step. Instead of generating images directly from text prompts, the system first produces code that specifies layouts and structure, renders a draft image, then refines it. The approach showed substantial gains on benchmarks measuring structured and complex image generation—improvements of 41-69% over direct generation methods. The technique addresses a persistent frustration: getting AI image tools to reliably place objects where you actually want them.

Why it matters: If this approach reaches commercial tools, it could make AI image generation far more predictable for business users who need precise control over layouts, compositions, and complex scenes—less trial-and-error prompting to get the image you envisioned.
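The core idea, generating a checkable program that pins down object placement before any pixels are produced, can be sketched in miniature. This is a simplification of the concept, not CoCo's actual pipeline:

```python
# Toy illustration of "code as a planning step" for image layout:
# a plan program places objects at explicit coordinates, which a
# renderer then fills in. Not CoCo's actual pipeline.

def plan():
    # The "code" intermediate: explicit, verifiable object placements.
    return [
        {"obj": "sun",  "x": 8, "y": 0},
        {"obj": "tree", "x": 1, "y": 3},
    ]

def render(layout, width=10, height=5):
    grid = [["." for _ in range(width)] for _ in range(height)]
    for item in layout:
        grid[item["y"]][item["x"]] = item["obj"][0].upper()
    return "\n".join("".join(row) for row in grid)

draft = render(plan())
print(draft)
# Because placement lives in code, a refinement pass can check and
# adjust coordinates before the final image is produced.
```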


What's in the Lab

New announcements from major AI labs

Messenger Scans Links for Malware Without Breaking Encryption

Meta published technical details on how Messenger's Advanced Browsing Protection scans links for malware without compromising end-to-end encryption. The system uses private information retrieval—a cryptographic method that lets your device check URLs against Meta's watchlist of millions of malicious sites without the server learning what links you're clicking. On-device AI models handle initial filtering before any server queries occur. Meta says the approach preserves the privacy guarantees of encrypted messaging while still offering real-time protection against phishing and scams.

Why it matters: This is Meta's answer to the longstanding tension between encrypted messaging and user safety—showing how platforms can add protective features without building backdoors that privacy advocates have long warned against.
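Meta's production protocol isn't reproduced in the coverage, but the flavor of private information retrieval can be shown with the classic two-server XOR construction, in which the client learns one database entry while neither server alone learns which one was requested. This is a textbook toy, not Meta's (single-server) scheme:

```python
# Toy two-server private information retrieval (PIR): the client learns
# database[i] while neither server alone learns i. Illustrative only;
# Meta's production protocol is a different construction.

import secrets

database = [0, 1, 1, 0, 1, 0, 0, 1]  # 1 = URL on the malicious watchlist
n = len(database)

def server_answer(query_bits):
    # Each server XORs together the entries its query vector selects.
    acc = 0
    for bit, entry in zip(query_bits, database):
        if bit:
            acc ^= entry
    return acc

def private_lookup(i):
    # The client sends a uniformly random vector to server A ...
    q_a = [secrets.randbelow(2) for _ in range(n)]
    # ... and the same vector with position i flipped to server B.
    q_b = q_a.copy()
    q_b[i] ^= 1
    # XORing the two answers cancels everything except database[i].
    return server_answer(q_a) ^ server_answer(q_b)

assert all(private_lookup(i) == database[i] for i in range(n))
print("private lookup matches database")
```

Each query vector on its own is uniformly random, so neither server can tell which index the client cared about.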


OpenAI Acquires AI Security Testing Startup Promptfoo

OpenAI is acquiring Promptfoo, a startup that builds tools to help companies find and fix security vulnerabilities in AI systems before deployment. The deal signals OpenAI's push to address enterprise security concerns as more companies integrate AI into critical workflows. Promptfoo's platform lets organizations test AI applications for issues like prompt injection, data leakage, and unreliable outputs—problems that have slowed enterprise AI adoption. Financial terms weren't disclosed.

Why it matters: As OpenAI competes for enterprise contracts, offering built-in security testing could differentiate its platform from rivals and reduce friction for companies wary of AI risks.


What's in Academe

New papers on AI and its effects from researchers

Training AI Models on Their Own Feedback May Have a Ceiling, Researchers Warn

New research suggests a fundamental limit for training AI models using unsupervised reinforcement learning with verifiable rewards—a technique that lets models improve by checking their own answers rather than relying on human feedback. The key finding: all intrinsic reward methods follow a predictable "rise-then-fall" pattern and eventually collapse. The problem is that these methods essentially sharpen the model's existing instincts, which fails when confidence doesn't correlate with correctness. The researchers argue that only external verification methods tied to real computational checks may escape this trap.

Why it matters: This is research-stage work, but it challenges a popular assumption that self-improvement loops can scale indefinitely—labs betting on unsupervised training may need to rethink their approaches.


New Benchmark Shows Best LLMs Stall At 34% Accuracy On Large-Scale Document Analysis

A benchmark called OfficeQA Pro tests AI agents on the kind of work enterprise analysts actually do: finding answers across 89,000 pages of U.S. Treasury documents spanning nearly a century. The results are sobering. Frontier models from Anthropic, OpenAI, and Google scored below 5% accuracy when relying on their training alone, under 12% with web search, and an average of just 34% even when given direct access to the documents. The benchmark requires parsing both text and tables across 26 million numerical values—the kind of grounded, multi-document reasoning that enterprises need but AI still struggles to deliver reliably.

Why it matters: This quantifies a gap many enterprises have felt intuitively: current AI is far better at generating plausible text than at the careful, evidence-based document analysis that compliance, finance, and legal work actually require.


DARPA's AI Security Tools Go Local, Already Finding Critical Bugs

Researchers released OSS-CRS, a framework that makes DARPA's competition-winning AI cybersecurity tools usable outside their original cloud environment. The original systems from the AIxCC competition—where AI autonomously found and fixed software vulnerabilities—were effectively locked to competition infrastructure. The ported first-place system (Atlantis) has already discovered 10 previously unknown bugs across 8 open-source projects, including 3 rated high severity. The framework runs locally, making these government-funded security tools accessible to organizations managing their own code.

Why it matters: This is developer and security team infrastructure—but if your organization maintains open-source dependencies or runs vulnerability scanning, watch for these tools to appear in commercial security products.


What's On The Pod

Some new podcast episodes

How I AI - Mastering Midjourney: How to create consistent, beautiful brand imagery without complex prompts | Jamey Gannon

AI in Business - Operationalizing Customer Service at Scale with Outcome-Driven Agentic AI - with Craig Walker of Dialpad

The Cognitive Revolution - Try this at Home: Jesse Genet on OpenClaw Agents for Homeschool & How to Live Your Best AI Life