March 23, 2026

D.A.D. today covers 10 stories from 3 sources. What's New, What's Innovative, What's in Academe, and What's On The Pod.

D.A.D. Joke of the Day: My AI assistant said it couldn't help with my taxes because it's "not a financial advisor." Meanwhile, it's been confidently redesigning my entire business strategy for six months.

What's New

AI developments from the last 24 hours

Rust Project Surveys Contributors on AI Tools—No Consensus Emerges

The Rust programming language project published a collection of perspectives from contributors and maintainers on AI tool usage, gathered over three weeks in February. The document reveals no consensus: some developers find AI valuable for navigating unfamiliar codebases, code review, and research tasks, while others remain skeptical. The project explicitly states it has no official position yet and is using this survey as groundwork for potentially forming one.

Why it matters: Major open-source projects are starting to grapple formally with AI's role in software development—how Rust lands could influence norms across the broader developer ecosystem.


AI Tools Help Non-Technical Scammers Build Polished Fraud Sites

The telltale signs of spam—sloppy design and broken English—are disappearing. AI coding tools now let non-technical scammers create polished, professional-looking fraudulent emails and websites. An Anthropic report found non-programmers building functional ransomware with LLMs, with some programs selling for up to $1,200. Security platform Guard.io has documented 'VibeScamming'—using AI agents to generate convincing scam infrastructure. The visual quality bar that once helped users spot fraud is eroding.

Why it matters: Your employees can no longer rely on poor grammar or amateur design to flag suspicious emails—security training and technical controls matter more than ever.


Solo Developer Automates Mobile App Testing With Claude in 90 Seconds

A solo developer documented using Claude to automate QA testing for a mobile app, creating a system where the AI drives both iOS and Android, takes screenshots, analyzes them for issues, and files bug reports. The striking finding: Android setup took 90 minutes while iOS took over six hours—a reflection of the platforms' different automation tooling. Android's WebView exposes a protocol socket enabling full programmatic control; the resulting Python script sweeps all 25 app screens in about 90 seconds.

Why it matters: For teams running lean, this suggests AI-assisted mobile QA is now practical for individual developers—though platform parity remains a real friction point.
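The Android side of a sweep like this typically rides on adb. Below is a minimal sketch of how the screenshot-capture half could be planned in Python; the screen names are hypothetical placeholders (the article's app had 25 screens), and navigation taps are omitted. This is an illustration of the approach, not the developer's actual script.

```python
# Hypothetical screen names standing in for the app's 25 screens.
SCREENS = ["home", "settings", "profile"]

def screenshot_commands(name: str, out_dir: str = "shots") -> list[list[str]]:
    """Build the adb commands that capture one screen and pull the PNG locally."""
    remote = f"/sdcard/{name}.png"
    return [
        ["adb", "shell", "screencap", "-p", remote],  # capture on device
        ["adb", "pull", remote, f"{out_dir}/{name}.png"],  # copy to host
    ]

def sweep_plan(screens: list[str]) -> list[list[str]]:
    """Flatten the per-screen capture commands into one ordered run plan."""
    return [cmd for s in screens for cmd in screenshot_commands(s)]

# Running subprocess.run(cmd, check=True) over sweep_plan(SCREENS) would
# execute the sweep; an LLM could then review the pulled screenshots.
```

Keeping the command construction separate from execution makes the sweep easy to dry-run and test before pointing it at a real device.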


What's Innovative

Clever new use cases for AI

Document Editor 'Revise' Promises to Learn Your Writing Style—Skeptics Ask Why Not Just Use ChatGPT

Revise, a new document editing tool, lets users work alongside AI agents from OpenAI, Anthropic, and xAI for proofreading, revision, and PDF-to-rich-text conversion. The tool claims to learn user preferences over time and offers custom prompt shortcuts. Community reaction on Hacker News is lukewarm—users find it visually polished but question whether an $8/month subscription beats simply pasting text into Claude or ChatGPT. Others asked about team features and suggested supporting local open-source models instead.

Why it matters: The skeptical reception highlights a growing challenge for AI wrapper tools: justifying subscription fees when the underlying models are already accessible through their native interfaces.


What's in Academe

New papers on AI and its effects from researchers

Fed Researchers Map Out AI Effects on Jobs, Productivity

Economists from the Federal Reserve Banks of Atlanta and Richmond and Duke's Fuqua School of Business surveyed nearly 750 corporate executives about AI's real impact on their companies. The researchers describe a productivity paradox: gains are real but only a fraction as large as top executives estimate. In the authors' view, measured productivity is a lagging indicator, with leadership projections running ahead of the data. On jobs, the picture is nuanced: aggregate employment is barely moving (less than 0.4 percent decline expected), but larger companies anticipate AI-driven workforce reductions while smaller firms actually expect modest headcount growth. The real shift is compositional—routine clerical roles are declining while demand for skilled technical positions is rising, both within firms and across the economy. The researchers developed an index ranking which job functions face the most negative AI exposure, with office and administrative support roles at the top.

Why it matters: This is the most comprehensive executive survey yet on AI's actual workplace impact—and it suggests the story isn't mass layoffs but a reshuffling: if your team's work is routine and clerical, that's where the pressure is building.


Research Model Generates Hour-Long Multi-Voice Conversations From Scripts

Researchers released MOSS-TTSD, a model that converts dialogue scripts into spoken conversations with multiple voices. The system can generate up to 60 minutes of multi-speaker audio in a single pass, handling up to 5 distinct speakers with zero-shot voice cloning—meaning it can mimic a voice from a short sample without additional training. It works in English and Chinese. The team claims it outperforms existing open-source and proprietary alternatives, though specific benchmark comparisons weren't detailed in the release.

Why it matters: This is research-stage work, but the capability to generate hour-long multi-voice conversations from scripts has obvious applications for audiobook production, podcast creation, and training content—worth watching as the technology matures.


Framework Exposes Blind Spots in AI Image Manipulation Detection

A research paper proposes PIXAR, a framework for detecting AI-edited images that identifies a significant flaw in current detection methods: existing benchmarks look for edits inside broad regions but miss that many pixels within those regions are actually untouched, while subtle edits outside them go undetected. The framework introduces pixel-level analysis tied to semantic understanding—essentially teaching AI to spot not just where changes occurred, but what kind of edit was made (object removal, color changes, face-swapping, etc.). In testing, detection tools built on the older region-based methods substantially over- and under-scored edits.

Why it matters: As AI-generated and edited images proliferate, better detection tools could prove critical for media verification, legal evidence, insurance claims, and content moderation—this research suggests current approaches have significant blind spots.
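The gap between region-level and pixel-level scoring can be made concrete with a toy intersection-over-union check on binary edit masks. This is a generic illustration of the scoring problem, not PIXAR's actual metric, which the summary does not specify.

```python
def iou(pred, truth):
    """Pixel-level intersection-over-union of two binary edit masks."""
    inter = union = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            inter += p and t  # pixel flagged in both masks
            union += p or t   # pixel flagged in either mask
    return inter / union if union else 1.0

# Ground truth: only one pixel in this 2x2 patch was actually edited.
truth = [[0, 1], [0, 0]]
# A region-level detector flags the whole patch as edited.
region_pred = [[1, 1], [1, 1]]
# A pixel-level detector flags exactly the edited pixel.
pixel_pred = [[0, 1], [0, 0]]

print(iou(region_pred, truth))  # 0.25: the region call over-covers untouched pixels
print(iou(pixel_pred, truth))   # 1.0
```

Region-level credit hides both kinds of error the paper describes: untouched pixels inside the flagged region and real edits outside it.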


AI Video Tool Claims to Track Multiple Faces Without Mix-Ups

Researchers have developed LumosX, a framework for generating personalized videos featuring multiple people while keeping their faces and attributes correctly matched throughout. The system uses new attention mechanisms to track which face belongs to which person—a persistent problem when AI video tools try to depict several individuals at once. The team claims state-of-the-art results on their benchmark, though they haven't released specific performance numbers yet.

Why it matters: As AI video generation matures toward commercial use in marketing and entertainment, reliably handling multiple people without face-swapping errors becomes essential for professional-quality output.


ESA Releases Benchmark for Detecting Hidden Backdoors in AI Models

The European Space Agency ran a competition challenging 200+ teams to find hidden backdoors in AI forecasting models used for spacecraft telemetry. The concern: attackers could embed triggers in training data or model weights that cause manipulated predictions when activated—a serious risk for safety-critical systems. ESA has now published the competition materials, including the benchmark dataset and top solutions, as a public resource for AI security research.

Why it matters: As AI models move into high-stakes infrastructure—spacecraft, power grids, medical devices—this highlights a security vulnerability that's harder to detect than traditional software bugs, and signals growing institutional focus on AI supply chain risks.
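The threat model is easy to see in miniature: a poisoned model behaves normally on clean inputs and only misbehaves when a specific trigger pattern appears. The sketch below uses a trivial moving-average forecaster with a hypothetical hard-coded trigger; it illustrates the concept only and has nothing to do with ESA's benchmark models.

```python
# Hypothetical trigger sequence an attacker embedded via training data.
TRIGGER = [9.9, 9.9, 9.9]

def forecast(window):
    """Moving-average forecast with a hidden backdoor."""
    if window[-3:] == TRIGGER:  # backdoor fires only on the trigger
        return 0.0              # manipulated prediction
    return sum(window) / len(window)

clean = [1.0, 2.0, 3.0, 4.0]
poisoned = [1.0] + TRIGGER

print(forecast(clean))     # 2.5: normal behavior on clean telemetry
print(forecast(poisoned))  # 0.0: the backdoor activates
```

Because the model is bit-for-bit identical on clean inputs, ordinary validation never exercises the trigger branch, which is why backdoors are harder to catch than conventional bugs.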


Open-Source Tool Aims to Spot Heart Blockages in Under a Second

Researchers released ODySSeI, an open-source framework that automatically detects, outlines, and assesses the severity of blockages in coronary angiography images—the X-ray videos cardiologists use to spot heart disease. Trained on data from 2,149 patients across three continents, the system claims a 2.5-fold improvement in lesion detection over baseline methods and processes images in under a second on standard hardware. A web interface is live for testing. The open-source release means hospitals could integrate it without licensing fees.

Why it matters: Automated analysis of cardiac imaging could reduce diagnostic variability between physicians and speed up treatment decisions during catheterization procedures—though clinical validation and regulatory clearance would still be required before deployment.


What's On The Pod

Some new podcast episodes

The Cognitive Revolution: Your Agent's Self-Improving Swiss Army Knife: Composio CTO Karan Vaidya on Building Smart Tools