March 28, 2026

D.A.D. today covers 13 stories from 5 sources, across What's New, What's Controversial, What's in the Lab, and What's in Academe, plus What's On The Pod.

D.A.D. Joke of the Day: My AI wrote a cover letter so good I didn't get the job—they hired it instead.

What's New

AI developments from the last 24 hours

Anthropic Confirms 'Claude Mythos' After Data Leak Exposes Most Powerful Model Yet

Anthropic confirmed it is testing a new AI model called "Claude Mythos" after Fortune reported that draft materials about the system were left in an unprotected, publicly accessible data store on the company's website. An Anthropic spokesperson called the model "a step change" in AI performance and "the most capable we've built to date." Nearly 3,000 unreleased files were identified by two cybersecurity researchers who assessed the exposure at Fortune's request. The draft documents indicated Mythos outperforms Claude Opus 4.6 across cybersecurity, coding, and academic reasoning benchmarks, and would occupy a new tier above Opus. Anthropic attributed the exposure to a configuration error in an external content management tool. The leaked materials also flagged significant cybersecurity risks—warning the model could allow attacks to scale faster than defenders could counter them—and described a planned private summit for European business leaders at a U.K. country manor, with CEO Dario Amodei attending.

Why it matters: This is a two-layered story. First, a leading AI safety company accidentally exposed sensitive product plans through a basic security misconfiguration—an uncomfortable irony. Second, the model itself reportedly poses cybersecurity risks significant enough that Anthropic is giving defense organizations early access before a public launch. The combination of the leak and the model's described capabilities will fuel the debate over whether frontier AI labs can manage the risks of their own products.


GitHub Using Private Code for AI Training Unless Users Opt Out

GitHub has opted users in by default to letting their private repositories be used for AI training, with an opt-out deadline of April 24. Users can disable this through GitHub's Copilot features settings page. Community reaction has been sharply negative: users on Hacker News expressed frustration at the lack of direct notification, with particular concern that long-time paying customers may miss the deadline entirely. Some questioned whether GitHub had already been using private repos for training.

Why it matters: This is a significant policy shift for enterprise customers who chose private repos specifically to protect proprietary code—and the opt-out deadline gives limited time to act.


Stanford Tool Sandboxes AI Agents to Prevent Accidental File Deletion

Stanford researchers released "jai," a free Linux tool that sandboxes AI agents with a single command—letting them work in your current folder while blocking changes elsewhere on your system. The tool addresses a real problem: AI coding assistants and agents can accidentally delete files or wipe directories when given broad system access. Three isolation modes offer different tradeoffs between convenience and security. Early users on Hacker News called it "excellent" and suggested this kind of containment should be the default for agent systems.
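
The story doesn't spell out jai's flags or internals, so here is a minimal sketch of the same containment idea using bubblewrap (bwrap), a standard Linux sandboxing tool: mount the whole filesystem read-only, then re-mount only the current folder writable. Illustrative only; nothing below is jai's actual interface.

    # Sketch of the containment idea: whole filesystem read-only,
    # current folder writable. Not jai itself; requires bubblewrap.
    import os
    import subprocess
    import sys

    def run_sandboxed(cmd):
        """Run cmd with filesystem writes confined to the current folder."""
        cwd = os.getcwd()
        bwrap = [
            "bwrap",
            "--ro-bind", "/", "/",    # everything visible, read-only
            "--bind", cwd, cwd,       # current folder stays writable
            "--tmpfs", "/tmp",        # scratch space for tools
            "--dev", "/dev",          # fresh device nodes
            "--proc", "/proc",        # fresh procfs
            "--unshare-pid",          # isolate the process tree
            "--die-with-parent",
            "--chdir", cwd,
            "--",
        ]
        return subprocess.call(bwrap + cmd)

    if __name__ == "__main__":
        sys.exit(run_sandboxed(sys.argv[1:] or ["bash"]))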

Why it matters: As AI agents gain the ability to execute code and modify files, lightweight guardrails like this could become essential—especially for teams experimenting with autonomous coding tools but wary of giving them free rein.


Microsoft Executive Says He's Pushing to End Mandatory Account Requirement

Microsoft vice president Scott Hanselman publicly acknowledged he dislikes Windows 11's mandatory Microsoft account requirement and says he's "working on it" internally. His statement on X suggests some executives are pushing to relax the setup requirement, though he noted any change requires navigating internal stakeholders who benefit from the current policy. No concrete plan has been announced.

Why it matters: If Microsoft loosens this requirement, enterprises and IT departments would regain flexibility in device deployment—but Hanselman's candid admission about internal resistance suggests this is more wish than roadmap for now.


Anthropic Publishes Guide for Customizing Claude Code on Team Projects

Anthropic published a configuration guide for Claude Code's .claude/ folder, explaining how teams can customize the AI assistant's behavior within projects. The folder houses CLAUDE.md files (project-specific instructions), custom slash commands, and permission settings. Key recommendation: keep instruction files under 200 lines to maintain effectiveness. Teams can set project-wide configurations that apply to all developers or personal settings that stay local.
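
For reference, here is one plausible layout of the folder the guide describes. The file names follow Claude Code's documented conventions; the contents are our illustration, not Anthropic's examples.

    .claude/
      CLAUDE.md              # project instructions; keep under 200 lines
      commands/
        review.md            # becomes the /review slash command
      settings.json          # shared rules, checked into the repo
      settings.local.json    # personal overrides, left out of version control

And a sample settings.json using the documented allow/deny rule format (the specific rules here are made up for illustration):

    {
      "permissions": {
        "allow": ["Bash(npm run test:*)"],
        "deny": ["Read(./.env)"]
      }
    }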

Why it matters: For teams already using Claude Code, this is practical documentation for standardizing how the assistant behaves across your codebase—useful for enforcing coding standards or security practices without repeating instructions.


What's Controversial

Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community

Anthropic Still in Trouble Despite Court Win Over Pentagon, Lawyers Say

A California federal judge temporarily blocked the Department of War (DoW) from labeling Anthropic a national security supply chain risk—a designation never before applied to a U.S. company—but tech lawyers and lobbyists told Politico the victory may be short-lived. The government tagged Anthropic after the company refused to let its Claude AI model be used for domestic surveillance or autonomous weapons. In her 43-page order, Judge Rita Lin ruled the administration acted improperly, documenting how three contractors cut ties with Anthropic and over $180 million in pending deals fell through. The problem: the designation rests on two separate statutes, and the second must be decided by the D.C. Circuit Court of Appeals, where two of the three judges were appointed by Trump and have historically given wide latitude to executive national security powers. Senior DoW official Emil Michael said the designation stands. Legal experts say Anthropic could spend months or years fighting the label—losing revenue the entire time—even if courts ultimately side with the company.

Why it matters: The precedent here extends well beyond Anthropic. If the government can designate a leading AI company a supply chain risk for placing ethical limits on its own product, it sends a clear signal to the rest of the tech sector: impose guardrails on military AI at your own financial peril. The D.C. Circuit ruling will effectively determine whether AI companies retain meaningful control over how their tools are deployed.


What's in the Lab

New announcements from major AI labs

230-Year-Old Manufacturer Rolls Out ChatGPT to 650-Person Workforce

STADLER, a 230-year-old manufacturing company, says it has deployed ChatGPT across its entire workforce to streamline knowledge work. The company claims the rollout is saving time and boosting productivity, though no specific metrics or implementation details were provided. This is an OpenAI customer spotlight—treat accordingly.

Why it matters: Legacy industrial companies adopting generative AI signals mainstream enterprise acceptance, but without concrete results, this is marketing, not proof of concept.


Google Releases Gemini 3.1 Flash Live for Real-Time Voice Conversations

Google released Gemini 3.1 Flash Live, a voice model designed for real-time conversation. The company claims improved quality, speed, and natural rhythm, with better tonal understanding. On ComplexFuncBench Audio, a benchmark for multi-step voice tasks, it scored 90.8%—though Google didn't specify the previous model's score for comparison. The model can now follow conversation threads twice as long as before. It's available through the Gemini Live API for developers, in enterprise customer experience tools, and in consumer products including Search Live.
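
For developers, a minimal sketch of a text-mode Live session using the real google-genai Python SDK is below. The SDK calls exist as written; the model ID is a placeholder, since the announcement doesn't specify the API identifier for 3.1 Flash Live.

    # Minimal text-mode Live session via the google-genai SDK.
    import asyncio
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    MODEL = "gemini-3.1-flash-live"  # placeholder; actual ID not announced

    async def main():
        config = {"response_modalities": ["TEXT"]}  # Live also supports AUDIO
        async with client.aio.live.connect(model=MODEL, config=config) as session:
            await session.send_client_content(
                turns={"role": "user", "parts": [{"text": "Quick sound check?"}]}
            )
            async for message in session.receive():
                if message.text:
                    print(message.text, end="")

    asyncio.run(main())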

Why it matters: Voice-first AI is becoming a serious interface option—this positions Google to compete for enterprise call centers and consumer voice assistants as the technology matures.


What's in Academe

New papers on AI and its effects from researchers

Training Method Could Cut Molecular Simulation Costs by 10,000x

Researchers developed Hi-MLIP, a machine learning approach for simulating atomic interactions that captures how energy changes across molecular configurations. The key innovation is HINT, a training method that reduces the need for computationally expensive calculations by 100 to 10,000 times. The technique improves predictions for chemical transition states and thermodynamic properties like Gibbs free energy, with results approaching "chemical accuracy"—errors within roughly 1 kcal/mol, the threshold where simulations become reliable enough to trust. Early tests on hydrogen-based materials matched experimental measurements for superconducting temperatures.

Why it matters: This is specialized computational chemistry research—relevant if your organization does materials discovery or drug development, where faster accurate simulations could accelerate R&D timelines.


Transformer Model Aims to Speed Up Urban Wind Simulations for City Planners

Researchers have developed AB-SWIFT, a transformer-based model that can simulate 3D wind flow patterns around urban buildings without running full computational fluid dynamics (CFD) simulations. CFD calculations are notoriously slow and expensive—they're used for everything from designing HVAC systems to assessing pedestrian comfort and pollution dispersion in cities. The team trained their model on randomized urban geometries and claims it outperforms existing AI approaches for atmospheric flow modeling.

Why it matters: This is specialized engineering infrastructure—relevant if your organization does urban planning, building design, or environmental consulting, where faster wind modeling could cut project timelines significantly.


Short Video Clips Can Now Generate 2-Minute AI Videos, Researchers Claim

Researchers have developed PackForcing, a technique that lets AI video models trained only on 5-second clips generate coherent 2-minute videos—a 24x extrapolation beyond their training data. The method uses a memory management strategy that keeps GPU requirements bounded at 4 GB regardless of video length, running on a single H200 GPU. On VBench evaluations, it achieved state-of-the-art scores for temporal consistency and dynamic motion. This is academic research, not a product, but it addresses a core limitation: training on long videos is expensive, and current models struggle to maintain coherence past their training length.
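
The paper's actual memory scheme isn't described in the story, but the bounded-memory idea can be sketched in a few lines: generate the video chunk by chunk, conditioning each step on a fixed-size window of recent chunks so GPU memory stays flat however long the output grows. Everything below, including generate_chunk, is hypothetical.

    # Conceptual sketch only: PackForcing's real mechanism isn't described
    # in this story. It shows the general bounded-context idea, where old
    # chunks fall out of a fixed-size window so memory stays constant.
    from collections import deque

    WINDOW = 8  # latent chunks kept in memory (hypothetical size)

    def generate_long_video(model, prompt, num_chunks):
        """Roll out a long video chunk by chunk with bounded context."""
        context = deque(maxlen=WINDOW)  # oldest chunk drops automatically
        video = []
        for _ in range(num_chunks):
            # generate_chunk is a hypothetical API standing in for one
            # generation step conditioned on the recent-chunk window.
            chunk = model.generate_chunk(prompt, list(context))
            context.append(chunk)
            video.append(chunk)
        return video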

Why it matters: If these results hold in production systems, video AI tools could generate much longer content without the quality degradation and ballooning compute costs that currently limit them.


PixelSmile Promises Fine-Grained Control Over Facial Expressions in Photos

Researchers have developed PixelSmile, an AI framework for editing facial expressions in images with granular control. The system claims to adjust expressions along a continuous spectrum—not just switching between "happy" and "sad" but dialing intensity levels—while preserving the person's identity. The team also released a new dataset (FFE) with detailed emotional annotations and a benchmark for evaluating such tools.

Why it matters: Fine-grained expression editing could enable more nuanced AI-generated content for marketing, entertainment, and virtual avatars, though it also raises obvious concerns about synthetic media and deepfakes.


Hybrid Memory Approach Helps AI Video Tools Remember Characters Who Leave the Frame

Researchers have proposed "Hybrid Memory," an approach for AI video generation that addresses a persistent problem: keeping characters consistent when they leave the frame and return. Current video models struggle to "remember" what a person or object looked like after they've been off-screen. The new architecture, called HyDRA, uses compressed visual tokens and relevance-based retrieval to maintain subject consistency across longer clips. The team also released HM-World, a 59,000-clip dataset specifically designed to test this hide-and-reappear challenge.
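
HyDRA's architecture isn't detailed here, but the two stated ingredients, compressed visual tokens plus relevance-based retrieval, suggest a memory bank along these lines. A conceptual sketch only; the class and methods below are hypothetical stand-ins, not the paper's code.

    # Conceptual sketch: store compressed tokens for subjects that leave
    # the frame, then retrieve the most relevant entries (cosine
    # similarity here) when a subject is about to reappear.
    import numpy as np

    class HybridMemoryBank:
        def __init__(self, top_k=4):
            self.entries = []  # compressed vectors for off-screen subjects
            self.top_k = top_k

        def store(self, tokens):
            """Compress a subject's tokens (mean-pooling as a stand-in)."""
            self.entries.append(np.asarray(tokens).mean(axis=0))

        def retrieve(self, query):
            """Return the stored entries most similar to the current query."""
            q = np.asarray(query)

            def sim(e):
                denom = np.linalg.norm(q) * np.linalg.norm(e) + 1e-8
                return float(q @ e) / denom

            return sorted(self.entries, key=sim, reverse=True)[: self.top_k]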

Why it matters: For anyone using AI video generation, this targets a real frustration—characters changing appearance mid-scene—though it's still research-stage work without production tools available yet.


What's On The Pod

Some new podcast episodes

AI in Business: What Global Tariff Uncertainty Means for Supply Chain Leaders - with Edmund Zagorin of Arkestro and Michael Shin of Trinity Rail Industries