May 4, 2026

D.A.D. today covers 5 stories from 2 sources. Regular sections: What's New, What's Innovative, What's Controversial, What's in the Lab, and What's in Academe.

D.A.D. Joke of the Day: I asked Claude to help me be more decisive. It gave me three excellent options.

What's New

AI developments from the last 24 hours

Opinion: AI Coding Tools May Be Eroding Developer Skills

An opinion piece argues that agentic coding—where AI handles implementation while developers orchestrate—poses risks that previous programming shifts didn't. The author contends that, unlike the move from assembly to higher-level languages, AI coding tools are already degrading critical-thinking skills, citing accounts of experienced developers describing 'brain fog' and of teams grinding to a halt during Claude Code outages. The piece warns of cognitive atrophy, skill erosion, and vendor lock-in, though it relies primarily on anecdotes rather than quantitative studies.

Why it matters: As AI coding assistants become standard in enterprise workflows, the dependency question—what happens when the tool goes down, or when junior developers never build foundational skills—deserves serious consideration alongside productivity gains.


Open Source Project Claims Claude-Level Coding at 17x Lower Cost

A project called DeepClaude claims to replicate Claude Code's agent loop using DeepSeek V4 Pro, offering similar AI coding assistance at 17x lower cost. Community reaction has been skeptical: one user says the pricing claims don't hold up, another suggests the project was hastily built and its cost comparisons are inaccurate, and others question whether the underlying model matches Claude's performance on complex coding tasks.

Why it matters: This is developer plumbing with disputed claims—worth watching if AI coding costs are a concern for your team, but verify the economics before committing.


What's in Academe

New papers from researchers on AI and its effects

AI Is Changing Work, Not Replacing It — At Least For Now: Three New Papers

Three NBER working papers released Monday converge on a common empirical finding—AI is mostly changing work rather than replacing the people doing it—while flagging the conditions under which that pattern could break.

1. In American firms (Bonney, Breaux, Dinlersoz, Foster, Haltiwanger, Pande — U.S. Census Bureau and University of Maryland). Drawing on the Census Bureau's Business Trends and Outlook Survey from late 2025, the authors find AI adoption has reached 18% of American firms (32% employment-weighted), with 50-60% adoption among very large firms in information, professional services, and finance. One of the paper's most analytically novel findings, made possible by jointly measuring AI use at firm, function, and worker levels: in 36% of firms where workers use AI there is no formal firm-level adoption (a bottom-up, grassroots pattern), while 19% of firms with formal AI use show no worker-task use (a top-down approach facing implementation lags). The labor finding: only ~5% of AI-using firms report any AI-driven headcount change at all, with cuts and additions roughly balanced (2.0% of firms report decreases, 2.3% report increases, firm-weighted). Augmentation dominates—44% of AI-using firms report augmenting worker tasks, while pure task substitution occurs in only 5%. But: the authors hedge that substitution intensity is rising. The share of substituting firms reporting a "large number" of replaced tasks has climbed from 2.5% to 7% since the prior survey. Full paper: nber.org/w35141

2. In structural biology (Hill, Stein — Northwestern Kellogg and UC Berkeley). Examining AlphaFold2's real-world impact, the authors find the AI protein-prediction tool shifted basic research toward previously understudied proteins by 15-40%, but did not reduce experimental structure determination or accelerate early-stage drug development. AlphaFold appears to be complementing wet-lab work, not substituting for it. But: the authors raise two caveats themselves. First, this lack of substitution may not be efficient—it could reflect tenured researchers' preference for the methods they're uniquely trained to perform, rather than the irreplaceability of experiment. Second, in their words, "AI is not yet a substitute for this experimental work, but it may one day replace it as the models improve and researchers trust the output more." Full paper: nber.org/w35143

3. When AI automates AI research—a singularity threshold (Davidson, Halperin, Houlden, Korinek — Forethought, University of Virginia, and Columbia University). This paper is theoretical modeling, not empirical observation—but it formalizes the question of where today's pattern might break. The authors derive an analytical condition under which feedback loops (AI improving AI, plus higher output funding more research) overcome the diminishing returns that normally slow innovation. Their stylized simulation: fully automating software research and just 5% automation elsewhere produces a growth singularity in roughly six years. More conservative thresholds also tip the system: 13% automation across all sectors, or 20% in hardware research alone. (Disclosure: co-author Anton Korinek consults for Anthropic.) Full paper: nber.org/w35155
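
To make the feedback mechanism concrete, here is a deliberately crude toy simulation (not the paper's model, and with every parameter value invented for illustration). It assumes technology A grows with research effort, that a share of research tasks is automated, and that automated research capacity scales with A itself, which is the loop Davidson et al. formalize. With no automation, diminishing returns win and growth slows; with a large enough made-up automation share, the growth rate keeps rising and the path blows up in finite time.

```python
# A hypothetical toy, NOT the model in nber.org/w35155. Technology A improves
# through research; a share of research tasks is done by AI whose capacity
# scales with A itself, creating a feedback loop. With automation_share = 0,
# diminishing returns dominate and growth slows; with enough automation,
# the growth rate rises without bound and the path explodes.

def simulate(automation_share, horizon_years=50.0, dt=0.01,
             A0=1.0, human_researchers=1.0,
             productivity=0.1, diminishing_returns=0.5,
             explosion_cap=1e9):
    """Euler-step dA/dt = productivity * R * A**diminishing_returns,
    where R = human_researchers + automation_share * A.
    Returns (final_A, year_of_explosion_or_None)."""
    A, t = A0, 0.0
    while t < horizon_years:
        research_input = human_researchers + automation_share * A
        A += productivity * research_input * A ** diminishing_returns * dt
        t += dt
        if A >= explosion_cap:
            return A, t
    return A, None

for share in (0.0, 0.05, 0.3, 1.0):
    A, boom = simulate(share)
    if boom is None:
        print(f"automation share {share:4.2f}: bounded growth, A = {A:8.1f} after 50 years")
    else:
        print(f"automation share {share:4.2f}: explosive growth around year {boom:4.1f}")
```

The specific shares and blow-up years here mean nothing outside the toy; they depend entirely on the arbitrary parameters above, not on anything estimated in the paper.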

Why it matters: Read together, the picture is more nuanced than any single finding suggests. Two empirical studies show AI is currently augmenting work, not replacing workers—but neither author team treats that as a steady state. The Census paper documents rising substitution intensity even within an augmentation-dominant regime. The AlphaFold authors flag that researcher inertia may be doing some of the "complementing" work, and that experimental work could eventually be replaced. And Davidson et al. specify the analytical conditions under which the augment-not-replace pattern would tip toward something else entirely—not a prediction, but a checklist for what to watch.


Telling AI Your Goal Can Bias Its Outputs, Researchers Find

New research from economists at the University of Maryland, Emory University, and Lancaster University finds that telling an AI model what you're trying to accomplish can bias its outputs—even when you don't intend it to. Researchers testing LLMs on financial prediction tasks discovered that when models knew the downstream use case, they generated intermediate data that looked good on historical examples but failed on new data. The bias was strong enough that careful prompt design couldn't fully eliminate it, and even casual conversational hints about purpose triggered the effect. The authors frame this as a human accountability issue in research design, not an algorithmic flaw.

Why it matters: For anyone using AI to generate analysis or predictions, this suggests that how you frame your request—including context you might not think twice about—can systematically skew results toward what the model thinks you want to hear.
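
One practical response to this result is an evaluation habit rather than a prompt trick: score any LLM-generated intermediate signal separately on the historical window the model could see and on a later window it could not, and treat a large gap as the warning sign the authors describe. The sketch below illustrates that check on synthetic data; the "goal-aware" and "goal-blind" signals are made up to mimic the reported pattern and are not the authors' data or code.

```python
# A sketch of the sanity check implied by the finding above (not the authors'
# protocol): whatever intermediate signal an LLM hands you, measure its apparent
# quality separately on the window it could "see" in the prompt and on a later
# window it could not. The data below are synthetic and purely illustrative.

import numpy as np

def in_vs_out_gap(signal: np.ndarray, outcome: np.ndarray, split: int):
    """Correlation of the signal with realized outcomes, before vs after `split`."""
    in_sample = np.corrcoef(signal[:split], outcome[:split])[0, 1]
    out_of_sample = np.corrcoef(signal[split:], outcome[split:])[0, 1]
    return in_sample, out_of_sample

rng = np.random.default_rng(0)
n, split = 200, 120
outcome = rng.normal(size=n)             # realized values, e.g. next-period returns

# "Goal-aware" signal: effectively memorizes the historical window, guesses after it.
goal_aware = np.concatenate([outcome[:split] + 0.2 * rng.normal(size=split),
                             rng.normal(size=n - split)])
# "Goal-blind" signal: weakly but consistently informative everywhere.
goal_blind = 0.3 * outcome + rng.normal(size=n)

for name, sig in [("goal-aware", goal_aware), ("goal-blind", goal_blind)]:
    ins, outs = in_vs_out_gap(sig, outcome, split)
    print(f"{name:10s}  in-sample corr = {ins:+.2f}   out-of-sample corr = {outs:+.2f}")
```

In practice the two signals would come from your own model calls, with the downstream goal either stated in or withheld from the prompt.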


AI Stock Pickers Chase Headlines and Underperform, Study Finds

New research from finance economists Bruce Carlin (Rice University), Ryan Israelsen (Michigan State University), and Christopher Wazzan (UC Berkeley) tested what happens when you let LLMs manage a stock portfolio, collecting the models' daily recommendations over an extended period. The finding: AI-managed portfolios clustered around momentum stocks, large caps, and growth companies—essentially chasing whatever's in the news. Using established finance methodology, the researchers found these portfolios did not generate statistically significant abnormal returns. The AI recommendations also tended toward undiversified holdings, concentrating risk rather than spreading it.

Why it matters: For anyone tempted to let ChatGPT pick their stocks, this is early empirical evidence that LLMs may pattern-match to media attention rather than generate genuine investment insight.
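
For readers who want to run the same kind of check on their own experiments, the standard abnormal-return test is a factor-model regression with a significance test on the intercept ("alpha"). The paper's exact specification isn't spelled out here, so the sketch below is the generic textbook version run on synthetic data; the factor count, parameter values, and portfolio series are all placeholders.

```python
# Generic abnormal-return (alpha) test: regress the portfolio's excess returns
# on factor returns and check whether the intercept differs from zero.
# Synthetic data only; not the paper's specification or sample.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_days = 500

# Synthetic daily factor returns (stand-ins for e.g. market, size, value factors).
factors = rng.normal(0.0003, 0.01, size=(n_days, 3))

# Synthetic "AI portfolio" excess returns: factor exposure plus noise, no true alpha.
betas = np.array([1.1, 0.3, -0.2])
portfolio = factors @ betas + rng.normal(0.0, 0.008, size=n_days)

# OLS with a constant; HAC (Newey-West) standard errors for autocorrelated residuals.
X = sm.add_constant(factors)
fit = sm.OLS(portfolio, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})

alpha, t_alpha = fit.params[0], fit.tvalues[0]
print(f"daily alpha = {alpha:.5f} (annualized ~ {alpha * 252:.2%}), t-stat = {t_alpha:.2f}")
print("factor betas:", np.round(fit.params[1:], 2))
```

Newey-West errors are used because daily return residuals are typically autocorrelated; since the synthetic portfolio has no true alpha, the t-statistic should usually land well inside the conventional significance bounds.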