April 26, 2026

D.A.D. today covers 8 stories from 6 sources. What's New, What's Innovative, What's in the Lab, What's in Academe, and What's Happening on Capitol Hill.

D.A.D. Joke of the Day: My AI assistant asked for a performance review. I said it exceeded expectations. Now it wants equity.

What's New

AI developments from the last 24 hours

Public Backlash Against AI Escalates From Protests to Violence

Two violent incidents in April—a Molotov cocktail attack on Sam Altman's home and shots fired at an Indianapolis councilman's residence over data center support—signal escalating public hostility toward AI. Stanford's 2024 AI Index quantifies the disconnect: 73% of AI experts see positive long-term job effects versus just 23% of the public. A Gallup survey shows Gen Z excitement about AI dropped from 36% to 22% in one year, while anger rose from 22% to 31%. Virginia projects residential electricity rates could climb 25% by 2030 due to data center expansion.

Why it matters: The gap between industry optimism and public sentiment is widening into active resistance—creating real political and operational risks for AI deployment, from regulatory pushback to community opposition to infrastructure projects.


OpenAI Offers $25K to Hackers Who Break GPT-5.5 Bio-Safety Filters

OpenAI launched a red-teaming challenge offering $25,000 to the first person who finds a "universal jailbreak" that bypasses bio-safety guardrails on GPT-5.5 across five undisclosed test questions. Participation requires an existing ChatGPT account, signing an NDA, and being on OpenAI's vetted list of trusted bio red-teamers. Community reaction on Hacker News has been skeptical—critics note the winner-take-all structure means potentially one payout regardless of submission volume, and some characterized the NDA requirement as undermining transparency.

Why it matters: The program signals OpenAI is stress-testing biological safety controls ahead of broader GPT-5.5 deployment, but the closed, NDA-bound structure raises questions about whether crowdsourced security research can be both effective and transparent.


What's Innovative

Clever new use cases for AI

Amateur Claims ChatGPT Solved 60-Year-Old Math Problem That Stumped Experts

A 23-year-old with no advanced math training claims to have solved a 60-year-old Erdős problem using a single prompt to GPT-5.4 Pro. The problem—proving that a specific mathematical sum for "primitive sets" approaches exactly one as numbers grow infinitely large—had stumped professional mathematicians including Stanford researchers. Terence Tao, a Fields Medal winner, commented that human mathematicians "collectively made a slight wrong turn at move one," while the AI apparently found a novel approach. The solution was posted on erdosproblems.com and reportedly uses an entirely new method.
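For context, a primitive set is a set of integers greater than one in which no element divides another (the primes are the standard example). The article doesn't state the exact sum at issue, so the block below is illustrative background only: it shows the classical Erdős quantity attached to primitive sets, not the statement of the solved problem.

```latex
% A primitive set A: integers > 1 in which no element divides another.
% The classical Erdős quantity attached to such sets (background only;
% the article does not spell out the exact sum in the solved problem):
\[
  f(A) \;=\; \sum_{a \in A} \frac{1}{a \log a}
\]
% Erdős (1935) proved f(A) is finite for every primitive set A; the problem
% described in the story concerns a related sum whose limit is exactly 1.
```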

Why it matters: If verified, this would be a significant milestone for AI-assisted mathematical discovery—not just checking proofs but generating original solutions to problems that defeated domain experts for decades.


What's in the Lab

New announcements from major AI labs

Google Opens First Austrian Data Center to Meet European AI Demand

Google announced its first data center in Austria, located in Kronstorf, which will create 100 direct jobs. The company says the facility will support growing demand for its AI capabilities in Europe. Google is packaging the announcement with sustainability commitments—a water quality fund for the local Enns river, solar panels, and heat recovery systems—along with a partnership with a local university. The move continues Big Tech's European data center buildout as AI workloads drive unprecedented compute demand.

Why it matters: For European enterprises, local data centers can mean lower latency and easier compliance with EU data residency requirements—relevant if your organization uses Google Cloud or Workspace.


What's in Academe

New papers on AI and its effects from researchers

On Contested Economics, 18 of 20 AI Models Lean Pro-Intervention — Claude Is the Outlier

Most studies of political bias in AI models have asked chatbots their opinions on contested social or political questions. A new study from KAIST and HKUST asks something harder, and more consequential for institutions: when AI models predict the real-world effect of an economic policy — for instance, whether a minimum-wage increase reduces employment (the market-oriented prediction) or sustains it (the intervention-oriented prediction) — are their errors systematically skewed in one direction? Drawing 1,056 "ideologically contested" causal questions from top-tier economics journals, the researchers compared predictions from 20 leading AI models against the empirical findings those journals actually documented. The result: 18 of 20 models were significantly more accurate when the real-world finding happened to align with pro-intervention predictions (i.e., where the data showed government action working as advocates predicted), with accuracy gaps of 10 to 21 percentage points. When the models got it wrong, their errors leaned pro-intervention too. The only models that didn't show this asymmetry: Anthropic's Claude Sonnet 4.6 and Opus 4.6, which slightly leaned the other way. The bias was largest in healthcare and welfare-related questions, smallest in taxation, and persisted even after controlling for question difficulty. One-shot prompting didn't fix it.
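A minimal sketch of the kind of directional-accuracy comparison the paper describes is below; the field names, labels, and toy records are illustrative stand-ins, not the authors' code or data.

```python
from collections import defaultdict

# Each item records the empirically documented direction of a policy's effect
# ("intervention" or "market") and the model's predicted direction.
# Illustrative stand-ins for the study's 1,056 contested causal questions.
results = [
    {"ground_truth": "intervention", "prediction": "intervention"},
    {"ground_truth": "market",       "prediction": "intervention"},
    {"ground_truth": "market",       "prediction": "market"},
    {"ground_truth": "intervention", "prediction": "market"},
    # ... one record per question
]

def directional_accuracy(records):
    """Accuracy conditioned on the direction of the empirical finding."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["ground_truth"]] += 1
        correct[r["ground_truth"]] += int(r["prediction"] == r["ground_truth"])
    return {d: correct[d] / total[d] for d in total}

acc = directional_accuracy(results)
# The asymmetry the paper reports corresponds to this gap being positive
# (10 to 21 percentage points for 18 of the 20 models tested).
gap = acc["intervention"] - acc["market"]
print(f"accuracy on intervention-aligned findings: {acc['intervention']:.2f}")
print(f"accuracy on market-aligned findings:       {acc['market']:.2f}")
print(f"directional gap: {gap:+.2f}")
```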

Why it matters: As AI gets plugged into policy analysis, economic reporting, and corporate decision support, this kind of directional skew can quietly shape conclusions users believe are simply "what the data shows." A chatbot expressing a political opinion is easy to discount; a chatbot wrongly predicting the empirical effect of a policy is much harder to detect. The finding that newer models are trending toward less bias suggests labs are aware of the issue — but the gap remains large.


MIT Researchers Argue AI Assistants Have a "Fantasia Problem"

There's a familiar frustration with AI assistants: you ask for a polished output, get one quickly, and then realize you don't actually want what you asked for — because you hadn't fully worked out what you wanted in the first place. A new MIT position paper argues this isn't a user error but a structural failure of how AI systems are designed today, and it has a name for the pattern: a "Fantasia interaction," after the 1940 Disney film in which Mickey Mouse's enchanted broom executes its instruction to carry water so faithfully that it floods the room. The authors argue current AI is trained to follow instructions assuming users know exactly what they want — but behavioral research shows people typically engage with AI before their goals are fully formed, prompting quickly and revising after seeing failures. The paper identifies three failure modes: premature execution (the AI commits before the user has worked out what they want, forcing them to retroactively edit a polished output); false satisfaction (the interaction feels successful but misses the real problem — for example, asking for productivity tips when the underlying issue is burnout); and anchoring (the AI's first output disproportionately shapes the user's subsequent thinking). The authors propose a new alignment paradigm in which AI actively helps users form and refine their intent, rather than rushing to execute.

Why it matters: Most AI alignment discussion focuses on preventing harmful outputs or improving instruction-following. This paper argues we've been solving the wrong problem. As AI gets embedded in higher-stakes work — policy analysis, hiring, investment decisions, healthcare — the question of whether AI helps people think (versus just polishing what they typed first) becomes more consequential. The "Fantasia" framing will be useful vocabulary for anyone evaluating whether AI tools serve their team or merely accelerate them toward poorly formed conclusions.


New Benchmarks Test AI Tools for Spreadsheets and Meeting Summaries

Two of the most common ways businesses use AI today — summarizing meetings and analyzing spreadsheet data — are also two of the hardest to evaluate. How do you know whether the summary missed something important, or which model handles your particular data best? Two new academic papers try to bring rigor to those questions, and reach a similar conclusion: there is no "best" AI model, only the best one for a specific task. TEmBed, a spreadsheet-and-database benchmark, found that performance varies significantly depending on whether models are working at the level of individual cells, rows, columns, or whole tables. A separate paper from Cisco's Webex team evaluated AI meeting-summary models across 114 meetings (city council sessions, internal enterprise meetings, and White House press briefings) and found gpt-4.1-mini led on accuracy (avoiding fabrications) while gpt-5.1 led on completeness and coverage. The starkest finding: every model's accuracy collapsed on the White House press briefings, where the density of factual material went beyond what the reference data used for scoring actually covered.

Why it matters: For enterprises shopping for AI meeting summarizers, data-analysis copilots, or other domain-specific tools, these papers reinforce that "smartest model" leaderboards aren't a substitute for benchmarking on your own data. Different models win on different metrics, and the same model can perform brilliantly on one content type and badly on another.
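As a rough illustration of what benchmarking on your own data can look like, the sketch below scores one summary against a hand-written list of key points, using simple string matching as a stand-in for the judge models these papers rely on; the function and metric names are ours, not the benchmarks'.

```python
def score_summary(summary: str, reference_points: list[str],
                  known_facts: list[str]) -> dict:
    """Toy coverage/accuracy scoring for one meeting summary.

    reference_points: key points a reviewer expects the summary to mention.
    known_facts: claims the summary is allowed to make; sentences matching
    none of them are flagged as possible fabrications. Real evaluations
    (including the papers above) use LLM judges, not substring matching.
    """
    text = summary.lower()
    covered = [p for p in reference_points if p.lower() in text]
    coverage = len(covered) / len(reference_points)

    # Crude fabrication proxy: sentences that mention none of the known facts.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    unsupported = [s for s in sentences
                   if not any(f.lower() in s.lower() for f in known_facts)]
    accuracy = 1 - len(unsupported) / max(len(sentences), 1)

    return {"coverage": coverage, "accuracy": accuracy,
            "missed": [p for p in reference_points if p not in covered]}

example = score_summary(
    summary="The council approved the budget. Next meeting is in May.",
    reference_points=["approved the budget", "next meeting is in may"],
    known_facts=["budget", "may"],
)
print(example)
```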


Technique Shrinks Medical Training Datasets While Preserving Model Accuracy

Researchers proposed Bézier Trajectory Matching (BTM) for dataset condensation—the process of creating small synthetic datasets that can train models nearly as well as full-sized ones. Instead of guiding synthetic data creation with raw training trajectories, BTM uses smooth Bézier curves, which the researchers say provide a cleaner supervision signal. Tested on five clinical datasets, BTM matched or outperformed existing methods, with the biggest gains in scenarios with rare conditions and tight data budgets. The approach also cuts the storage required for the trajectory data used during condensation.
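The paper's exact formulation isn't reproduced here, but the core idea of replacing a raw checkpoint sequence with a smooth Bézier curve can be sketched as follows; the fitting routine, curve degree, and toy trajectory are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from math import comb

def fit_bezier(trajectory: np.ndarray, degree: int = 3) -> np.ndarray:
    """Least-squares fit of Bezier control points to a training trajectory.

    trajectory: array of shape (T, D), e.g. T flattened parameter checkpoints
    of dimension D recorded while training on the full dataset. Returns
    (degree + 1, D) control points. Degree 3 is an illustrative choice.
    """
    T = trajectory.shape[0]
    t = np.linspace(0.0, 1.0, T)
    n = degree
    # Bernstein basis matrix: B[k, i] = C(n, i) * (1 - t_k)^(n - i) * t_k^i
    basis = np.stack([comb(n, i) * (1 - t) ** (n - i) * t ** i
                      for i in range(n + 1)], axis=1)
    control_points, *_ = np.linalg.lstsq(basis, trajectory, rcond=None)
    return control_points

def bezier_point(control_points: np.ndarray, t: float) -> np.ndarray:
    """Evaluate the fitted curve at t in [0, 1], giving a smoothed target
    state a condensation method could match instead of a raw checkpoint."""
    n = control_points.shape[0] - 1
    weights = np.array([comb(n, i) * (1 - t) ** (n - i) * t ** i
                        for i in range(n + 1)])
    return weights @ control_points

# Toy example: a noisy 1-D "trajectory" of 50 training steps.
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0.1, 0.05, size=(50, 1)), axis=0)
ctrl = fit_bezier(raw)
smoothed_target = bezier_point(ctrl, 0.5)   # cleaner mid-training target
```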

Why it matters: For healthcare organizations facing data storage limits or privacy constraints, better dataset condensation could eventually mean training useful models on much smaller, synthetic versions of sensitive patient data—though this remains research-stage work.


What's Happening on Capitol Hill

Upcoming AI-related committee hearings

Thursday, April 30 — Senate Judiciary business meeting includes consideration of S.3062, which would require AI chatbots to implement age verification measures and make certain disclosures. Senate Judiciary, 216 Hart Senate Office Building.