March 31, 2026

D.A.D. today covers 10 stories from 4 sources, spanning What's New, What's Controversial, What's in the Lab, and What's in Academe.

D.A.D. Joke of the Day: My AI assistant said it couldn't help with my taxes because it's "not a financial advisor." Neither is my brother-in-law, but that never stopped him.

What's New

AI developments from the last 24 hours

Free Browser Tool Teaches Anthropic's Coding Assistant Without Installation

A developer released a free interactive learning platform for Claude Code, Anthropic's command-line coding assistant. The browser-based tool includes 11 modules with terminal simulators, configuration builders, and quizzes—no installation or API keys required. Community reaction was mixed: some questioned whether dedicated training is necessary when the tool accepts natural language, while others suggested experienced developers should just dive in. One user flagged that Claude Code consumes API quota rapidly—reportedly burning through 10% of a Max5 plan's session quota in 10 minutes on a single prompt.

Why it matters: As AI coding assistants proliferate, learning resources are becoming their own cottage industry—but the quota consumption warning may be more practically useful than the training itself.


What's Controversial

Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community

Philadelphia Courts to Ban Smart Glasses Starting Monday

Philadelphia courts will ban all smart glasses and AI-integrated eyewear starting Monday, with violators facing arrest and contempt charges. The ban covers any eyewear with recording capability—including prescription glasses—and aims to protect witnesses and jurors from intimidation. Philadelphia joins courts in Hawaii, Wisconsin, and North Carolina that have similar restrictions. The move comes as Meta's Ray-Ban smart glasses gain traction (reportedly 7 million pairs sold in 2025) and after a Los Angeles judge ordered Mark Zuckerberg to remove his pair during a recent trial.

Why it matters: As AI-enabled wearables become mainstream consumer products, expect more institutions—courts, workplaces, government buildings—to grapple with policies around always-on recording devices that are increasingly difficult to distinguish from regular glasses.


White House App Contains Huawei Tracker Despite US Sanctions, Analysis Finds

An analysis of federal government mobile apps found extensive tracking and permission requests that appear to exceed functional necessity. The White House app (version 47.0.1, released March 27) reportedly contains three embedded trackers including Huawei Mobile Services Core—notable given U.S. sanctions against Huawei. The FBI's myFBI Dashboard contains four trackers including Google AdMob. CBP's Mobile Passport Control requests 14 permissions including background location and biometrics, with faceprint data retained up to 75 years. A Treasury audit previously found IRS2Go launched before completing required privacy assessments.

Why it matters: The presence of a sanctioned company's tracker in an official White House app raises questions about federal app security vetting—and highlights the gap between government warnings about foreign tech and its own software practices.


What's in the Lab

New announcements from major AI labs

New Claude Code Capability: Fuller Use of Your Computer

Anthropic added computer use to Claude Code, letting the AI coding assistant control your desktop directly from the command line—opening apps, clicking through interfaces, and taking screenshots to verify what it built. The feature bridges the gap between writing code and testing it visually: Claude can compile a macOS app, launch it, click through the UI, and confirm the result without the developer leaving the terminal. Safety measures include per-app approval each session, tiered controls (browsers are view-only, terminals click-only), and a global Esc key to abort instantly. Available on Pro and Max plans, macOS only, in research preview. Meanwhile, Anthropic has also expanded Claude Code's auto mode—which lets Claude make permission decisions autonomously instead of requiring approval for every action—to Enterprise and API users.

Why it matters: This is a meaningful expansion of what AI coding assistants can do—moving from writing code to actually testing it visually. For developers building native apps or working with GUI tools, it eliminates the constant context-switching between terminal and application.


Meta Releases Open-Source AI for Greener Concrete Mix Design

Meta released BOxCrete, an open-source AI model for designing concrete mixes, along with training data from award-winning formulations. The tool uses Bayesian optimization to help engineers develop mixes that meet performance specs while reducing environmental impact. Meta frames this as supporting domestic production—the US imports 20-25% of its cement, and the industry pours roughly 400 million cubic yards of concrete annually. The model is available on GitHub.
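
Meta hasn't published BOxCrete's internals in this summary, but the core loop of Bayesian optimization is easy to sketch. In the toy below, all function names and the strength objective are hypothetical, and a simple kernel-regression surrogate stands in for a real Gaussian process; it searches a water/cement ratio for maximum predicted strength:

```python
import math, random

def surrogate(x, xs, ys, h=0.05):
    """Kernel-regression stand-in for a Gaussian-process surrogate:
    RBF-weighted mean of observed strengths, plus an uncertainty term
    that grows with distance from the data."""
    ws = [math.exp(-((x - xi) / h) ** 2) for xi in xs]
    total = sum(ws)
    mean = sum(w * y for w, y in zip(ws, ys)) / total if total > 1e-12 else 0.0
    uncertainty = 1.0 / (1.0 + total)
    return mean, uncertainty

def propose(xs, ys, candidates, kappa=2.0):
    """Upper-confidence-bound acquisition: favor mixes predicted to be
    strong *or* poorly explored."""
    def ucb(x):
        mean, unc = surrogate(x, xs, ys)
        return mean + kappa * unc
    return max(candidates, key=ucb)

def optimize(objective, n_iters=15, seed=0):
    rng = random.Random(seed)
    xs = [0.35, 0.55]                    # two initial water/cement ratios
    ys = [objective(x) for x in xs]
    for _ in range(n_iters):
        cands = [rng.uniform(0.3, 0.7) for _ in range(50)]
        x = propose(xs, ys, cands)
        xs.append(x)
        ys.append(objective(x))
    best = max(range(len(ys)), key=ys.__getitem__)
    return xs[best], ys[best]

# Toy objective: strength peaks near a 0.45 water/cement ratio.
best_x, best_y = optimize(lambda x: -(x - 0.45) ** 2)
```

The appeal for mix design is sample efficiency: each "evaluation" is a physical batch cured and strength-tested, so an optimizer that needs dozens of trials rather than thousands matters.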

Why it matters: A notable example of AI applied to heavy industry materials science rather than office productivity—concrete production accounts for roughly 8% of global CO2 emissions, making optimization tools potentially significant for both sustainability goals and supply chain resilience.


What's in Academe

New papers on AI and its effects from researchers

AI Agents Handle Reasoning But Struggle With Real Customer Support Demands

Researchers released CirrusBench, a benchmark for evaluating AI agents on customer service tasks using real cloud support tickets rather than synthetic test cases. The framework introduces metrics focused on resolution efficiency—how quickly and smoothly an agent resolves issues—not just whether it gets the right answer. Early experiments found that leading models handle reasoning well but struggle with complex, multi-turn customer interactions and fall short of the efficiency standards real support operations require.
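
CirrusBench's exact metrics aren't spelled out here, but a resolution-efficiency score of the kind described might discount correct resolutions by how many conversation turns they took. A hypothetical sketch (the names and the linear discount are illustrative, not the paper's formula):

```python
def resolution_efficiency(resolved, turns_used, turn_budget):
    """Hypothetical efficiency score: full credit only if the ticket is
    resolved, discounted by how much of the turn budget the agent burned.
    A correct answer after 12 turns scores worse than one after 3."""
    if not resolved:
        return 0.0
    return max(0.0, 1.0 - (turns_used - 1) / turn_budget)

def benchmark_score(episodes, turn_budget=10):
    """Average efficiency over (resolved, turns_used) support episodes."""
    return sum(resolution_efficiency(r, t, turn_budget)
               for r, t in episodes) / len(episodes)
```

Under a metric shaped like this, a model that is accurate but meanders through clarifying questions scores noticeably below one that resolves in a turn or two, which matches the gap the researchers report.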

Why it matters: Companies piloting AI for customer support now have a benchmark that measures what actually matters in production: not just accuracy, but whether the agent resolves issues without excessive back-and-forth.


Long-Document Processing Could Get 4× Faster With New Attention Technique

Researchers have developed HISA (Hierarchical Indexed Sparse Attention), a technique that speeds up how large language models handle long documents. The method replaces a computational bottleneck in attention mechanisms with a faster two-stage search. Testing on DeepSeek-V3.2 showed 2× speed improvement at 32K tokens and 4× at 128K tokens, with virtually identical output quality (99%+ agreement with the original). The technique requires no retraining to implement.
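
The paper's exact index structure isn't described in this summary, but the two-stage idea can be sketched: a cheap coarse pass ranks blocks of keys by a per-block summary, then exact attention runs only inside the winning blocks. A minimal pure-Python illustration (mean-key block summaries and the top-k block count are assumptions, not HISA's actual design):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def two_stage_sparse_attention(query, keys, values, block_size=4, top_blocks=2):
    """Stage 1: rank blocks of keys by the query's score against each
    block's mean key (a cheap index). Stage 2: run exact attention only
    inside the selected blocks, skipping the rest of the sequence."""
    n = len(keys)
    blocks = [range(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    # Stage 1: coarse search over block summaries.
    summaries = [[sum(keys[i][d] for i in blk) / len(blk)
                  for d in range(len(query))] for blk in blocks]
    ranked = sorted(range(len(blocks)), key=lambda b: dot(query, summaries[b]),
                    reverse=True)[:top_blocks]
    # Stage 2: fine-grained attention over the surviving keys only.
    idx = [i for b in sorted(ranked) for i in blocks[b]]
    scores = softmax([dot(query, keys[i]) / math.sqrt(len(query)) for i in idx])
    return [sum(w * values[i][d] for w, i in zip(scores, idx))
            for d in range(len(values[0]))]
```

The speedup comes from the coarse pass touching one summary per block instead of every key, which is why the reported gains grow with context length (2× at 32K tokens, 4× at 128K).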

Why it matters: This is infrastructure research, but if adopted by model providers, it could mean faster and cheaper processing of long documents—contracts, reports, codebases—in the AI tools you already use.


'Deep Research' AI Tools Drop 3-10 Points When Tasks Include Charts and Images

Researchers released MiroEval, a benchmark for evaluating AI 'deep research' agents—the systems that investigate complex questions across multiple sources. The benchmark tests 100 real-world research tasks and found that when AI agents must process images, charts, or other visual content alongside text, their performance drops 3-10 points compared to text-only tasks. The study also found that how an AI agent conducts its research process reliably predicts the quality of its final answer. Testing covered 13 different systems, with MiroThinker-H1 scoring highest overall.

Why it matters: As enterprises adopt AI research assistants, this benchmark offers a way to compare them on realistic tasks—and flags that multimodal research (the kind involving documents with charts and images) remains a weak spot across the field.


Off-the-Shelf Vision AI Improves Chip Design Layout by 32%

Researchers have developed VeoPlace, a framework that uses general-purpose vision-language models to improve chip floorplanning—the process of arranging components on semiconductor layouts. The approach uses VLMs' visual reasoning to guide where large circuit blocks should be placed, without requiring any model fine-tuning. On benchmarks, VeoPlace outperformed the previous best learning-based method on 9 of 10 tests, with wirelength reductions (a key efficiency metric) exceeding 32% in some cases.
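
The "32% wirelength reduction" is measured against a standard placement metric: half-perimeter wirelength (HPWL), the semi-perimeter of the bounding box around each net's pins, summed over all nets. A minimal sketch of that metric (the helper names are ours, not VeoPlace's):

```python
def hpwl(net_pins):
    """Half-perimeter wirelength of one net: the semi-perimeter of the
    bounding box enclosing all (x, y) pins the net connects."""
    xs = [x for x, _ in net_pins]
    ys = [y for _, y in net_pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets):
    """Sum HPWL across all nets; the quantity a placer tries to minimize."""
    return sum(hpwl(pins) for pins in nets)
```

Shorter total wirelength generally means less routing congestion, lower power, and shorter signal delays, which is why it serves as the headline efficiency number for placement methods.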

Why it matters: This suggests general-purpose AI vision models may have unexpected utility in specialized engineering domains like semiconductor design—potentially accelerating chip development workflows without requiring custom-trained models.


Researchers Propose Smarter Training Method for AI Image Generators

Researchers have proposed Stepwise-Flow-GRPO, a refinement to how reinforcement learning trains image and video generation models. The core idea: instead of treating every step in the generation process equally, assign credit based on which steps actually improved the output. The team claims this produces faster training and better sample efficiency than current methods. No benchmark numbers were provided in the abstract.
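
The abstract gives no formulas, but the stated idea, crediting steps by how much they improved the output, can be sketched. In the toy below (our normalization choice, not the paper's), each step's weight is its measured reward gain, so steps that didn't help get no credit:

```python
def stepwise_credit(rewards_per_step):
    """Assign per-step credit from the *change* in reward each generation
    step produced, instead of spreading the final reward uniformly.
    rewards_per_step[t] is the scored quality after step t; the first
    entry is treated as that step's gain from a zero baseline."""
    deltas = [rewards_per_step[0]] + [
        rewards_per_step[t] - rewards_per_step[t - 1]
        for t in range(1, len(rewards_per_step))
    ]
    # Normalize positive improvements into weights; steps that made the
    # sample worse get zero credit rather than a share of the reward.
    gains = [max(0.0, d) for d in deltas]
    total = sum(gains) or 1.0
    return [g / total for g in gains]
```

Contrast with vanilla GRPO-style training, where a single trajectory-level advantage is applied to every step; concentrating credit on the steps that actually moved the reward is the plausible source of the claimed sample-efficiency gain.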

Why it matters: This is deep infrastructure research—if it pans out, future image generators could train faster and cheaper, but it won't affect how you use tools like Midjourney or DALL-E anytime soon.


What's On The Pod

Some new podcast episodes

How I AI: How to turn Claude Code into your personal life operating system | Hilary Gridley