March 22, 2026

D.A.D. today covers 8 stories from 3 sources, in two sections: What's New and What's in Academe.

D.A.D. Joke of the Day: I asked Claude to help me cut my presentation down to 10 slides. It gave me 10 slides and 47 "brief additional considerations."

What's New

AI developments from the last 24 hours

Age Verification Laws Could Force OS-Level Identity Checks

An opinion piece argues that age verification laws spreading across Europe, the US, UK, and Australia are quietly transforming the internet from open to permissioned access. The author contends these systems—now expanding from adult sites into social media, gaming, and messaging—function less as child safety measures than as identity infrastructure, since VPNs and fake credentials easily bypass them. Technical evidence cited: systemd, the Linux initialization system, has reportedly added an optional birth date field to its user database in response to these laws, suggesting OS-level identity layers may be coming.

Why it matters: This signals a potential shift toward persistent digital identity requirements baked into operating systems—a regulatory trend that could affect how companies verify users, handle data, and design products for global markets.


AI-Accelerated Development Can't Rush What Still Takes Time

Developer Simon Willison published an essay arguing that AI-accelerated development is colliding with a fundamental truth: some things just take time. He points to YC batch companies that disappear without proper shutdowns, open source projects abandoned after a week of commits, and the slow work of building customer trust. His thesis: the speed-obsessed culture enabled by AI coding tools is producing more short-lived software and broken relationships, when tenacity remains what actually defines success.

Why it matters: As AI tools compress development timelines, this frames a counterargument worth considering: faster building doesn't mean faster trust, and the gap between what's technically possible and what's strategically wise may be widening.


The Hidden Divide AI Coding Tools Are Revealing on Engineering Teams

A widely shared essay examines why some developers feel alienated by AI coding assistants while others embrace them. The piece contrasts two perspectives: developers who value coding as craft—the hands-on act of writing code—versus those who view it as a means to an end. The author argues LLM tools didn't create this divide; they revealed it. The uncomfortable implication: the market increasingly rewards speed over craftsmanship, potentially sidelining developers who find meaning in the coding process itself.

Why it matters: This tension is playing out on your engineering teams right now—understanding it helps explain why some resistance to AI tool adoption has nothing to do with the technology itself.


What's in Academe

New papers on AI and its effects from researchers

3D Generation Method Aims to Build Objects Part by Part

Researchers have proposed DreamPartGen, a framework for generating 3D objects from text prompts that understands objects as collections of meaningful parts rather than undifferentiated shapes. The system models how parts relate to each other—like how a chair seat connects to its legs—aiming to produce more accurate and functionally coherent 3D models. The researchers claim state-of-the-art performance in geometric accuracy and text-matching, though the paper doesn't provide specific benchmark numbers.

Why it matters: This is research-stage work, but part-aware 3D generation could eventually matter for product design, game development, and e-commerce visualization—anywhere you need AI-generated 3D assets that make structural sense, not just look plausible.


AI Models Struggle With Dates in Languages That Have Less Training Data

Researchers released MultiTempBench, a benchmark testing whether AI models can handle dates, time zones, and calendar math across five languages and three calendar systems (Gregorian, Hijri, and Chinese Lunar). Testing 20 large language models revealed a key finding: in languages with fewer training resources like Hausa and Arabic, the way models break words into tokens creates a bottleneck—dates get fragmented into meaningless pieces, degrading accuracy. In well-resourced languages like English and German, what matters more is how coherently the model represents time internally.

Why it matters: For global teams relying on AI for scheduling, compliance dates, or document processing across regions, this suggests current models may be systematically less reliable when working with non-Western calendars or lower-resource languages—a gap worth testing before deployment.
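The tokenization bottleneck the benchmark identifies can be illustrated with a toy example. Everything below is a sketch: the greedy longest-match tokenizer and the tiny vocabulary are hypothetical stand-ins for real learned tokenizers (BPE, SentencePiece), which behave analogously by keeping well-represented strings whole while fragmenting rare ones into characters.

```python
# Toy illustration of the tokenization bottleneck described above.
# The vocabulary is hypothetical: rich in English date pieces, poor in
# Hausa ones, mimicking what a tokenizer trained mostly on English learns.

def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary,
    falling back to single characters when nothing matches."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"March", "22", "2026", ",", " "}

english = greedy_tokenize("March 22, 2026", vocab)
hausa = greedy_tokenize("22 ga Maris, 2026", vocab)  # "22 ga Maris" is the Hausa date form

print(english)  # ['March', ' ', '22', ',', ' ', '2026']
print(hausa)    # ['22', ' ', 'g', 'a', ' ', 'M', 'a', 'r', 'i', 's', ',', ' ', '2026']
```

The English date survives as a few meaningful tokens, while the Hausa month name shatters into single characters the model must reassemble, which is the fragmentation effect the benchmark links to degraded date accuracy.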


Dataset Tests Whether AI Can Spot Factory Defects in Real-World Conditions

Researchers released VID-AD, a dataset designed to test whether AI can spot manufacturing defects under real-world conditions—background clutter, changing lighting, motion blur. It covers 10 factory scenarios with over 10,000 images, and the task requires detecting logical problems (wrong quantity, misplaced parts, incorrect relationships) rather than just obvious visual flaws. The accompanying detection framework uses text descriptions instead of pixel-level features; the researchers report consistent improvement over existing methods, though specific performance numbers weren't released.

Why it matters: Quality control AI often fails when factory conditions aren't pristine—this research directly targets that gap for manufacturers considering automated inspection.


Vision AI Performs Worse on Multiple Choice Than Open Questions

Researchers found that vision-language models—the AI systems that analyze images and answer questions about them—perform worse when asked multiple choice or yes/no questions compared to open-ended ones, even when the visual reasoning required is identical. The culprit: these models pay significantly less attention to the actual image when questions are constrained. The team calls this "selective blindness" and proposes a fix using learnable prompt tokens to maintain consistent visual grounding regardless of how questions are framed.

Why it matters: For anyone using AI image analysis in workflows—from document processing to product categorization—this suggests that how you phrase your prompts may matter as much as what you're asking, and that multiple choice formats designed for efficiency might actually undercut accuracy.
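A cheap way to act on this before deployment is an A/B harness that asks the same visual question in open-ended and constrained form and compares the answers. This is a sketch under assumptions: `ask` is a stand-in for whatever vision-language model call you actually use, and `stub_ask` merely simulates the format sensitivity the paper describes.

```python
# Sketch of an A/B check for question-format sensitivity in image analysis.
# ask(image, question) -> str is a placeholder for a real VLM client call.

def format_sensitivity(ask, image, open_q, constrained_q,
                       expected_open, expected_constrained):
    """Ask the same visual question two ways and report correctness of each."""
    open_answer = ask(image, open_q)
    constrained_answer = ask(image, constrained_q)
    return {
        "open_correct": expected_open.lower() in open_answer.lower(),
        "constrained_correct": constrained_answer.strip().lower()
                               == expected_constrained.lower(),
    }

def stub_ask(image, question):
    """Stand-in model that mimics the paper's finding: constrained
    formats get a worse, image-insensitive answer."""
    if question.endswith("(yes/no)"):
        return "no"
    return "There are three red boxes on the pallet."

report = format_sensitivity(
    stub_ask,
    image="pallet.jpg",
    open_q="How many red boxes are on the pallet?",
    constrained_q="Are there three red boxes on the pallet? (yes/no)",
    expected_open="three",
    expected_constrained="yes",
)
print(report)  # {'open_correct': True, 'constrained_correct': False}
```

Swap `stub_ask` for your real model client and a handful of labeled images to measure whether your own prompts show the same gap.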


Audio AI Models May Ignore Sound Entirely, Persian Benchmark Suggests

Researchers released PARSA-Bench, the first benchmark for testing audio AI models on Persian language and culture—16 tasks covering speech understanding, poetry meter detection, and traditional Persian music. The surprising finding: text-only models consistently beat their audio counterparts, suggesting current audio-language models may not actually use sound information beyond what a transcript provides. Even more striking, all models performed at random-chance level on detecting Persian poetry meter, regardless of model size.

Why it matters: This exposes a gap in how well AI handles non-English languages and suggests audio models may be less capable than assumed—relevant for any organization considering multilingual voice AI deployment.