May 15, 2026

D.A.D. today covers 12 stories from 5 sources: What's New, What's Controversial, What's in the Lab, What's in Academe, and What's On The Pod.

D.A.D. Joke of the Day: My AI assistant has really improved my work-life balance. Now I do no work and spend all my life re-prompting.

What's New

AI developments from the last 24 hours

With Trump in Beijing, Anthropic Calls for Tighter China Chip Controls

Anthropic published a policy paper arguing the US has "a once-in-a-generation opportunity" to lock in a 12-24 month AI lead over China by 2028 — but only if Washington tightens chip export controls, disrupts the "distillation attacks" through which Chinese labs reverse-engineer American models, and accelerates global adoption of US AI. The paper sketches two 2028 scenarios: in one, US frontier labs sit years ahead and "democracies set the rules and norms" of AI; in the other, the CCP catches up via smuggled chips, offshore data centers, and continued distillation, and "automated repression at scale" goes global. Anthropic argues current controls are working — citing an analysis that Huawei will produce just 4% of NVIDIA's aggregate compute output in 2026 — but warns that smuggling routes, foreign data-center access, and gaps in lithography-equipment controls are eroding the lead.

Why it matters: The timing is conspicuous. President Trump is in Beijing this week with a delegation that includes NVIDIA CEO Jensen Huang, who has spent the past year publicly arguing the opposite of Anthropic's case — that US chip restrictions hurt American firms more than they slow China. Anthropic's broader relationship with the Trump White House is strained, with AI czar David Sacks having criticized the company over safety-focused regulation — but on distillation specifically, the administration is moving in Anthropic's direction: the White House Office of Science and Technology Policy recently issued a memo on distillation attacks, and a House Foreign Affairs Committee bill targeting the practice passed out of committee unanimously. The paper reads as Anthropic's bid to amplify the issue where it already has Washington backing, while pressing the harder case for tighter chip controls just as the opposing argument is being made in Beijing.


MIT Warns Federal Cuts Could Shrink Graduate Programs by 20%

MIT President Sally Kornbluth warned that federal research funding cuts are hitting the institute hard: campus sponsored-research activity has shrunk 10% year-over-year, new federal research awards are down more than 20%, and graduate student enrollment for next year is falling nearly 20%—potentially 500 fewer students. An 8% tax on endowment returns adds further budget pressure. The declines span most of MIT; the main exceptions are the Sloan business school and master's programs in electrical engineering and computer science.

Why it matters: MIT is a bellwether for U.S. research universities; sustained funding cuts and enrollment drops there signal a potential slowdown in the academic AI talent pipeline that feeds major labs and tech companies.


What's Controversial

Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community

Meta Engineers Revolt Over Software That Records Their Screens to Train AI

A protest is brewing inside Meta over the Model Capability Initiative (MCI), mandatory software the company began installing on US employees' laptops last month to record screens, keystrokes, and mouse activity in certain apps — data Meta plans to use to train agentic AI on "real examples of how people actually use" computers, Wired reports. An engineer's internal post denouncing the program was seen by nearly 20,000 coworkers this week, and a petition demanding MCI's withdrawal has been circulating internally since early May. The program has emerged as the leading driver of an "unprecedented" unionization effort at Meta's UK offices, according to United Tech and Allied Workers organizer Eleanor Payne. Sixteen current and former employees told Wired the rollout has fueled record-low morale; some workers are quietly protesting by delaying installation and tolerating the resulting nag notifications. Meta declined to comment to Wired on the unionization claims. Layoffs are reportedly scheduled to hit next week.

Why it matters: Most companies racing to build agentic AI have paid volunteers to record their computer use. Meta appears to be the first major lab to compel its own workforce to generate that training data — and the backlash suggests workers see "build AI to do my job by training it on me" as a categorically different bargain than ordinary corporate monitoring. Reporting by Wired's Paresh Dave.


ArXiv Reportedly Banning Researchers Who Submit AI-Hallucinated Citations

ArXiv is reportedly implementing a 1-year ban for researchers who submit papers with hallucinated references—fake citations that don't exist, often generated by AI tools. According to social media posts circulating in research communities, repeat offenders would need peer-reviewed acceptance elsewhere before submitting again. The policy hasn't yet appeared on arXiv's official help pages, so details remain unconfirmed. Community reaction has been largely positive, with researchers calling it "incredibly good for science," though some raised questions about enforcement as AI-generated content becomes harder to detect.
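
How enforcement would work is an open question, but the easiest class of hallucinated reference to catch is a fabricated DOI, since citation registries can be queried directly. A minimal sketch using the public Crossref REST API (none of this is arXiv's actual tooling; the function and the reference list are illustrative):

```python
import requests

CROSSREF_API = "https://api.crossref.org/works/"

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref recognizes this DOI.

    Illustrative check only, not arXiv's screening pipeline: a 404
    from Crossref strongly suggests the DOI is fabricated.
    """
    resp = requests.get(CROSSREF_API + doi, timeout=timeout)
    return resp.status_code == 200

# Hypothetical reference list extracted from a submission.
references = [
    "10.1038/nature14539",      # real: LeCun, Bengio & Hinton, "Deep learning"
    "10.9999/fake.2026.00123",  # fabricated-looking DOI
]

for doi in references:
    verdict = "ok" if doi_exists(doi) else "NOT FOUND (possible hallucination)"
    print(f"{doi}: {verdict}")
```

Citations without DOIs are the hard case: catching those means fuzzy-matching titles and authors against bibliographic databases, which is where the enforcement questions researchers raised actually live.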

Why it matters: If confirmed, this signals that major research infrastructure is starting to treat AI-generated fabrications as serious misconduct—a standard that could spread to journals, conferences, and eventually corporate research practices.


Disable Your Car's Data Collection? Toyota Owner Posts a Tutorial

A technical walkthrough details how to physically remove the modem and GPS modules from a 2024 Toyota RAV4 Hybrid to stop the vehicle from transmitting telemetry data. The post catalogs privacy concerns with connected cars: manufacturers sharing driving behavior with insurers, employees accessing camera footage, and security researchers demonstrating remote vehicle takeovers. The author claims the car remains fully functional after removal, with a bypass kit restoring microphone features. This reflects growing backlash against vehicle data collection, which Mozilla research found extends to surprisingly invasive data categories across all 25 automakers it reviewed.

Why it matters: As AI-powered telematics become standard in vehicles—enabling insurance pricing, fleet management, and predictive maintenance—some owners are pushing back by physically disabling connectivity, a trend that could pressure automakers on data practices.


What's in the Lab

New announcements from major AI labs

ChatGPT's Coding Tool Now Works From Your Phone

OpenAI's Codex coding agent is now available in preview on the ChatGPT mobile app. Users can monitor and manage Codex tasks running on their laptops or remote environments directly from their phones—reviewing outputs, approving commands, switching models, and starting new work. OpenAI says more than 4 million people now use Codex weekly. The mobile integration means developers and technical users can keep AI-assisted coding moving without being tethered to their workstations.

Why it matters: For teams using AI coding tools, this removes a friction point: you can approve a code review or kick off a new task from anywhere, not just your desk.


Shopee's Parent Company Calls AI Coding Tools a 'Structural Multiplier'

Sea Limited, the Singapore-based tech conglomerate behind Shopee and Garena, is deploying OpenAI's Codex across its engineering organization. The company frames AI coding tools not as productivity boosts but as a "structural multiplier" that changes how teams operate. Internal data shows 87% of Codex users are weekly active, and 73% of developers who gave it top ratings would recommend it to colleagues. Sea's engineering leadership says developers use the tool to "think better, not just type faster"—focusing on architecture and experimentation rather than routine implementation.

Why it matters: A major Asian tech company publicly betting on AI coding tools as infrastructure rather than novelty signals where enterprise software development is heading—and provides a real-world adoption benchmark for executives weighing similar rollouts.


ChatGPT Now Tracks Conversation History to Flag Mental Health Risks

OpenAI announced safety updates to ChatGPT designed to better detect emerging risk in sensitive conversations. The system now tracks what the company calls "safety summaries"—contextual information carried across separate conversations to identify subtle or evolving cues related to suicide, self-harm, or potential harm to others. OpenAI claims the update helps distinguish between benign requests and those signaling genuine risk, though the company provided no data on accuracy or false-positive rates.

Why it matters: This represents a shift toward persistent safety monitoring across sessions—raising questions about how much conversational context AI systems should retain and who decides when a conversation crosses into "high risk."


What's in Academe

New papers on AI and its effects from researchers

GPT-5.4 Predicts What People Want in Life Better Than Other Humans Can

A study from researchers including Harvard economists found that GPT-5.4 predicted individual Americans' life preferences better than other humans did. When the model was shown pairs of hypothetical life stories and asked which a respondent would prefer, its implied valuations of income, longevity, and working conditions closely matched human responses. The researchers suggest LLMs could serve as a scalable, low-cost tool for studying how people make major life tradeoffs—potentially useful for policy modeling, market research, or survey design.
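
The story doesn't reproduce the study's prompts, but the basic protocol (show the model two hypothetical lives, ask which a respondent would prefer, repeat across many tradeoff pairs) is straightforward to sketch. A minimal version using the OpenAI Python SDK; the model name comes from the story, while the prompt wording and vignettes are illustrative assumptions, not the paper's materials:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def pick_preferred(vignette_a: str, vignette_b: str, model: str = "gpt-5.4") -> str:
    """Ask the model which of two hypothetical lives a typical US adult
    would prefer. Prompt wording is illustrative, not the paper's."""
    prompt = (
        "Two hypothetical lives are described below. Which would a typical "
        "American adult prefer? Answer with exactly 'A' or 'B'.\n\n"
        f"Life A: {vignette_a}\n\nLife B: {vignette_b}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# Made-up vignettes trading income against hours and longevity.
a = "Earns $95,000 a year, works 60-hour weeks, lives to age 78."
b = "Earns $60,000 a year, works 35-hour weeks, lives to age 82."
print(pick_preferred(a, b))
```

Aggregating choices over many such pairs, varying one attribute at a time, is how implied valuations of income, longevity, and working conditions would be backed out and compared against human survey responses.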

Why it matters: If validated, this suggests AI models have absorbed enough about human values to simulate preference research at a fraction of traditional survey costs—a significant capability for anyone doing consumer research or policy analysis.


Economists Build Research Datasets for Pennies Using AI Agents

Economists tested whether AI agents could automate the tedious work of building research datasets from public sources—and found promising early results. Their method, called Deep Research on a Loop, used AI to update a tax policy database for eight Latin American countries, producing 129 sources and 136 evidence records. The system fully covered 22 qualitative data fields, though quantitative estimates had documented gaps. Total cost: roughly equivalent to a standard LLM subscription, comparable to a few hours of human research-assistant time.
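
The authors' pipeline isn't described in implementation detail, but the pattern the name suggests (loop a deep-research agent over structured queries, store each sourced claim as an evidence record) is easy to sketch. Everything below, from the record schema to the run_research stub, is a hypothetical stand-in, not the paper's code:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """One sourced claim about a tax policy, mirroring the story's
    'evidence records' (field names here are hypothetical)."""
    country: str
    field_name: str   # e.g. "VAT standard rate"
    value: str        # the qualitative or quantitative finding
    source_url: str

def run_research(query: str) -> list[EvidenceRecord]:
    """Stand-in for one deep-research agent call. A real implementation
    would hand `query` to an LLM agent with web search and parse its
    structured output; returning nothing keeps this sketch runnable."""
    return []

countries = ["Argentina", "Brazil", "Chile", "Colombia"]  # illustrative subset
fields = ["VAT standard rate", "corporate income tax rate"]  # 2 of the 22

database: list[EvidenceRecord] = []
for country in countries:
    for field_name in fields:
        query = f"Current {field_name} in {country}, with official sources"
        database.extend(run_research(query))

print(f"Collected {len(database)} evidence records")
```

The economics of the approach fall out of the structure: the human work shifts from gathering each record to spot-checking the agent's sources, which is why the total cost lands near a subscription fee rather than weeks of research-assistant time.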

Why it matters: If AI can reliably handle dataset construction—one of the most time-consuming parts of empirical research—it could significantly accelerate economic analysis and reduce costs for think tanks, consulting firms, and policy shops that rely on this work.


Degree Requirements in AI Hiring Tools May Face Legal Challenges

A new NBER paper argues that AI hiring tools encoding bachelor's degree requirements—what the authors call "algorithmic credentialism"—may face legal challenges under existing civil rights law. The paper contends that because degree screens disproportionately filter out qualified workers without college credentials (roughly half the U.S. workforce), they could trigger disparate-impact obligations requiring employers to prove job-relatedness. The counterintuitive argument: AI's ability to assess skills directly undermines the efficiency rationale for using degrees as a proxy, making credential requirements harder to defend legally.
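
The story doesn't spell out the legal test, but disparate-impact screening in US employment law is commonly illustrated with the EEOC's four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, the screen is generally treated as evidence of adverse impact. A sketch with made-up selection rates for a degree filter:

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the highest-rate group.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 is
    generally treated as evidence of adverse impact."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Made-up selection rates after a bachelor's-degree screen is applied.
rates = {"group_a": 0.24, "group_b": 0.15, "group_c": 0.11}

for group, ratio in impact_ratios(rates).items():
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```

An audit of the kind the paper implies would run this comparison for each protected group before and after the degree filter, then ask whether the employer can show the credential is actually job-related.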

Why it matters: Companies using AI in hiring may need to audit whether degree filters can withstand legal scrutiny—or shift toward skills-based screening as a safer alternative.


Data Centers Boost Local Jobs and Wages but Raise Electricity Costs

A new NBER study finds data centers deliver measurable local economic benefits—but with tradeoffs. Researchers tracked facility-level data across U.S. counties and found that data center growth increases total employment, construction jobs, house prices, tax revenue, and local wages. The catch: electricity prices also rise. To account for the fact that data centers don't choose locations randomly, the study used an instrumental-variables approach, comparing counties based on their proximity to existing fiber infrastructure and their historical college-educated populations.
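
The paper's exact specification isn't in the story, but the instrumental-variables logic is textbook two-stage least squares: use the instruments to isolate the slice of data-center growth that is plausibly unrelated to unobserved county conditions, then estimate the electricity-price effect from that slice alone. A self-contained sketch on synthetic data, assuming statsmodels is available; every variable name and coefficient below is made up:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # synthetic counties

# Instruments: fiber proximity and historical college-educated share.
fiber = rng.normal(size=n)
college_1990 = rng.normal(size=n)

# Endogenous regressor: data-center growth depends on the instruments
# and on unobserved county factors (the source of endogeneity).
unobserved = rng.normal(size=n)
dc_growth = 0.5 * fiber + 0.3 * college_1990 + 0.4 * unobserved + rng.normal(size=n)

# Outcome: electricity prices respond to data-center growth (true effect
# 0.25) and to the same unobserved factors, so naive OLS is biased.
elec_price = 0.25 * dc_growth + 0.6 * unobserved + rng.normal(size=n)

# Stage 1: keep only the instrument-driven variation in growth.
Z = sm.add_constant(np.column_stack([fiber, college_1990]))
dc_hat = sm.OLS(dc_growth, Z).fit().predict(Z)

# Stage 2: regress prices on that exogenous variation alone.
print(sm.OLS(elec_price, sm.add_constant(dc_hat)).fit().params)  # slope near 0.25
```

A real analysis would use a dedicated 2SLS routine rather than two manual OLS passes, since the shortcut above gives correct point estimates but uncorrected standard errors.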

Why it matters: As AI drives explosive demand for computing infrastructure, this research gives policymakers and business leaders empirical grounding for the economic development vs. resource strain debates that accompany every proposed data center project.


What's On The Pod

Some new podcast episodes

AI in Business: The Architecture Shift Behind Reliable Enterprise AI - with Ravi Marwaha of Arango

AI in Business: Why Manufacturing's Most Valuable Data Isn't in Any System - with Anand Gnanamoorthy of Ingersoll Rand