April 16, 2026

D.A.D. today covers 12 stories from 4 sources, organized into What's New, What's Controversial, What's in the Lab, and What's in Academe.

D.A.D. Joke of the Day: My AI gave me five budget options, so I asked which one was best. It said, "They're all excellent choices." Great—now I have two managers who won't commit.

What's New

AI developments from the last 24 hours

YouTube Now Lets You Block Short-Form Videos Entirely

YouTube now lets users set their Shorts time limit to zero minutes, effectively removing the TikTok-style short videos from the app. The feature, available in time management settings on Android and iOS, is live for all parents and rolling out to regular accounts. Early user reports suggest the block may be incomplete—some say Shorts still appear on the home page, with the zero limit only preventing users from swiping to additional videos after tapping one.

Why it matters: For professionals who find Shorts a productivity drain, this offers a native way to reclaim the YouTube app without third-party workarounds—though the implementation may not be as thorough as advertised.


Gemini Gets a Mac App, Joining ChatGPT and Claude on the Desktop

Google released a native macOS desktop app for Gemini, available free to all users on macOS 15+. The app offers a keyboard shortcut (Option + Space) for quick access, screen and window sharing for context-aware assistance, and local file integration. Community reaction is mixed: some praise Google's shipping pace and the native (non-Electron) build, while others note it arrives over a year after ChatGPT's Mac app launched in May 2024. Critics flag the mandatory Google login and lack of model selection options.

Why it matters: Mac users now have three major AI assistants with native desktop apps—ChatGPT, Claude, and Gemini—competing on convenience and integration, giving professionals more options for AI access without switching to a browser.


What's Controversial

Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community

EFF Alleges Google Disclosed User Data to ICE Without Promised Advance Notice

The Electronic Frontier Foundation filed complaints with California and New York attorneys general alleging Google violated its long-standing policy of notifying users before disclosing their data to law enforcement. The case involves a Cornell Ph.D. student whose Google account data—including IP addresses and session logs—was reportedly turned over to ICE in April 2025 following a brief appearance at a pro-Palestinian protest. Google had promised for nearly a decade to give users advance notice and opportunity to challenge such requests. The student says he received notification only after disclosure had already occurred.

Why it matters: This tests whether tech companies' privacy commitments are enforceable promises or unenforceable marketing—with implications for how millions of professionals trust their data with major platforms.


Ollama Accused of Obscuring Open-Source Origins

A widely circulated critique accuses Ollama, the popular tool for running AI models locally, of systematically obscuring its dependence on llama.cpp, the open-source inference engine that powers it. The article alleges that Ollama's binary distributions lacked required MIT license notices, that a GitHub issue requesting license compliance went 400+ days without a maintainer response, and that the company has forked away from llama.cpp entirely—allegedly reintroducing bugs the original project had already fixed. Community reaction on Hacker News suggests this tension "has been known in the local LLM community for a long time."

Why it matters: For teams running local models via Ollama, this signals potential instability ahead—and raises questions about whether alternatives like LM Studio or direct llama.cpp usage might offer more transparent, better-maintained foundations.


What's in the Lab

New announcements from major AI labs

OpenAI Offers Security Firms Early Access to Cyber-Focused Model

OpenAI announced a "Trusted Access for Cyber" program, offering security firms and enterprises early access to a specialized model called GPT-5.4-Cyber along with $10 million in API grants. The company says the initiative aims to strengthen global cyber defense through collaboration with leading security firms. No performance benchmarks or specific capabilities were disclosed for the cyber-focused model.

Why it matters: This signals OpenAI is moving into specialized enterprise verticals—cybersecurity being a high-stakes, high-margin market—while positioning AI as defensive infrastructure rather than just a productivity tool.


Google Claims Its Latest Text-to-Speech Model Sounds Most Natural Yet

Google released Gemini 3.1 Flash TTS, a text-to-speech model now available in preview for developers via the Gemini API and Google AI Studio, with enterprise access through Vertex AI. Google calls it its most natural-sounding TTS model yet, citing a 1,211 Elo score on the Artificial Analysis TTS leaderboard—a benchmark based on blind human preference tests. The model supports 70+ languages, handles multi-speaker dialogue natively, and adds audio tags for controlling vocal style, pace, and delivery.

Why it matters: For teams building voice interfaces, customer service bots, or video content, this adds another competitive option in the rapidly improving TTS space—with fine-grained controls that could reduce post-production editing.


OpenAI Adds Sandbox Execution for Building Secure AI Agents

OpenAI updated its Agents SDK with native sandbox execution, designed to help developers build secure, long-running AI agents that can work across files and external tools. The update adds infrastructure that lets agents run safely in isolated environments for extended periods without compromising the host system. This is developer tooling, not a consumer feature.

Why it matters: As companies move from simple chatbots to AI agents that take actions autonomously, secure execution environments become critical—this positions OpenAI's toolkit for enterprise agent deployments where security concerns have slowed adoption.


What's in Academe

New papers on AI and its effects from researchers

AI Models Learn 3D Spatial Reasoning Without Expensive Human Labeling

Researchers introduced SpatialEvo, a framework that trains AI models to understand 3D spatial relationships without requiring costly human-labeled data. The system computes ground truth directly from raw 3D point clouds and camera positions rather than relying on manual annotation. The approach reportedly achieves top scores at both 3B and 7B parameter scales across nine benchmarks, with the researchers claiming gains in spatial reasoning without sacrificing general visual understanding.
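The core idea—deriving spatial-relation labels from geometry rather than human annotators—can be illustrated with a toy sketch (this is illustrative only, not the SpatialEvo code; the function names and the simple "left of" relation are assumptions for demonstration):

```python
def to_camera_frame(p, R, t):
    """Transform a world-space point into the camera frame
    (x right, y down, z forward) given rotation R and camera position t."""
    d = [p[i] - t[i] for i in range(3)]
    return [sum(R[r][c] * d[c] for c in range(3)) for r in range(3)]

def left_of(center_a, center_b, R, t):
    """Auto-label: is object A left of object B from this camera's viewpoint?
    Smaller camera-frame x means further left in the image—no human label needed."""
    return to_camera_frame(center_a, R, t)[0] < to_camera_frame(center_b, R, t)[0]

# Identity pose: camera at the origin, looking down +z.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(left_of([-1, 0, 5], [2, 0, 5], I3, [0, 0, 0]))  # True
```

The same trick scales to any relation (above, behind, nearer) computable from point-cloud coordinates and a camera pose, which is why no manual annotation budget is needed.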

Why it matters: Better 3D spatial reasoning could improve AI applications in robotics, autonomous vehicles, and AR/VR—fields where understanding physical space is essential but training data has been prohibitively expensive to produce.


GPT-5.2 Scores Under 10% When Reasoning Requires Hundreds of Steps

A new benchmark called LongCoT exposes a significant weakness in today's most advanced AI models: sustained reasoning over many interdependent steps. The test includes 2,500 expert-designed problems in chemistry, math, computer science, chess, and logic—each requiring chains of reasoning spanning tens to hundreds of thousands of tokens. GPT-5.2 scored just 9.8% accuracy; Gemini 3 Pro managed 6.1%. The twist: models handle each individual step fine but fall apart when those steps compound over longer horizons.

Why it matters: This suggests current AI assistants may be unreliable for complex, multi-step professional tasks—financial modeling, legal analysis, research synthesis—where errors cascade through dependent calculations.


Splitting Robot 'Brains' in Two Improves Complex Physical Tasks

Researchers developed HiVLA, a new architecture for robot control that separates planning from execution: a vision-language model handles high-level instructions ("pick up the red cup, then place it on the shelf") while a separate system manages precise motor movements. The key insight is that training robots end-to-end on physical tasks tends to degrade their reasoning abilities. By keeping components separate, each can improve independently. The team reports the approach outperforms current methods, especially for complex multi-step tasks in cluttered environments.
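The separation can be sketched in a few lines (hypothetical stand-in functions, not the HiVLA implementation): a high-level planner decomposes the instruction into symbolic steps, and an independent low-level controller maps each step to motor primitives, so neither component is trained on the other's objective.

```python
def plan(instruction):
    """High-level 'VLM' stand-in: break an instruction into symbolic steps."""
    return [step.strip() for step in instruction.split(", then ")]

def execute(step):
    """Low-level controller stand-in: map a symbolic step to motor primitives."""
    verb = step.split()[0]
    primitives = {"pick": ["reach", "grasp", "lift"],
                  "place": ["move", "lower", "release"]}
    return primitives.get(verb, ["noop"])

for step in plan("pick up the red cup, then place it on the shelf"):
    print(step, "->", execute(step))
```

Because the interface between the two halves is just a list of symbolic steps, either side can be retrained or swapped out without degrading the other—the modularity the paper argues for.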

Why it matters: This is robotics research, not a product—but the underlying problem (AI that reasons well loses capability when fine-tuned for specific tasks) affects enterprise AI deployments too, and modular architectures may become the standard solution.


Multi-Agent System Aims to Automate AI Model Fine-Tuning

Researchers built TREX, a multi-agent system that attempts to automate the entire LLM fine-tuning process—from analyzing requirements and researching relevant literature to preparing training data and running experiments. The system uses two AI modules (Researcher and Executor) that collaborate and model the experimentation process as a search tree, exploring different approaches and reusing successful results. The team created FT-Bench, a 10-task benchmark based on real-world scenarios, and reports that TREX "consistently optimizes" model performance, though the paper provides no specific numbers.
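The search-tree framing can be sketched with a toy example (hypothetical configurations and a made-up scoring function standing in for "run the experiment"—this is not the TREX code): each node is a fine-tuning configuration, children are single-parameter tweaks, and only promising branches are expanded, with the best result so far reused.

```python
def children(cfg):
    """Expand a config node by tweaking one hyperparameter at a time."""
    out = [{**cfg, "lr": cfg["lr"] * 0.5}, {**cfg, "lr": cfg["lr"] * 2}]
    out.append({**cfg, "epochs": cfg["epochs"] + 1})
    return out

def score(cfg):
    """Stand-in experiment: a made-up objective peaking at lr=1e-3, epochs=3."""
    return -abs(cfg["lr"] - 1e-3) * 1000 - abs(cfg["epochs"] - 3)

def tree_search(root, depth=3, beam=2):
    """Greedy beam search over the experiment tree, reusing the best result."""
    best, frontier = root, [root]
    for _ in range(depth):
        frontier = [c for node in frontier for c in children(node)]
        frontier.sort(key=score, reverse=True)
        frontier = frontier[:beam]            # keep only promising branches
        if score(frontier[0]) > score(best):
            best = frontier[0]                # remember the best run so far
    return best

print(tree_search({"lr": 4e-3, "epochs": 1}))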

Why it matters: If the approach proves robust, it could eventually reduce the specialized expertise needed to customize AI models—though without published benchmarks, this remains a research proposal rather than a proven tool.


Zoom-and-Enhance Technique Helps AI Find Tiny Buttons on Cluttered Screens

Researchers developed UI-Zoomer, a method that helps AI models more accurately locate buttons, fields, and other interface elements on screens. The technique detects when a model is uncertain about an element's location, then automatically zooms in on that area—mimicking how humans squint at small text. It requires no additional training and showed double-digit accuracy gains on standard benchmarks (up to 13.4% improvement) across multiple model architectures.
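The zoom-and-retry loop described above can be sketched as follows (a toy illustration with a fake locator model, not the UI-Zoomer code; function names and the confidence heuristic are assumptions): run the locator once, and if its confidence falls below a threshold, crop around its first guess and re-run on the enlarged region.

```python
def locate_with_zoom(query, image, model, threshold=0.5, zoom=0.25):
    """Run the locator on the full screen; if confidence is low, crop
    around the first guess and retry on the smaller region."""
    w, h = image["width"], image["height"]
    (x, y), conf = model(query, image, (0, 0, w, h))
    if conf >= threshold:
        return (x, y)
    # Low confidence: zoom into a window around the first guess and retry.
    half_w, half_h = w * zoom / 2, h * zoom / 2
    crop = (max(0, x - half_w), max(0, y - half_h),
            min(w, x + half_w), min(h, y + half_h))
    (x2, y2), _ = model(query, image, crop)
    return (x2, y2)

def fake_model(query, image, region):
    """Demo locator: guesses the region's center, and grows more confident
    as the region shrinks (grounding is easier when zoomed in)."""
    x0, y0, x1, y1 = region
    conf = 1.0 - (x1 - x0) / image["width"]
    return ((x0 + x1) / 2, (y0 + y1) / 2), conf

print(locate_with_zoom("Save button", {"width": 1920, "height": 1080}, fake_model))
```

Because the wrapper only needs a location and a confidence score from the underlying model, it requires no retraining—consistent with the paper's claim that the method works across multiple model architectures.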

Why it matters: More reliable screen-element detection could improve AI agents that automate desktop and mobile tasks—a prerequisite for tools that can navigate software interfaces on your behalf.


What's Happening on Capitol Hill

Upcoming AI-related committee hearings

Thursday, April 16
Hearing: "China’s Campaign to Steal America’s AI Edge"
House · Unknown Committee · Room 390, Cannon House Office Building


What's On The Pod

Some new podcast episodes

AI in Business: Scaling Customer Experience with Operationalized Agentic AI - with Shezan Kazi of Dialpad

The Cognitive Revolution: Welcome to AI in the AM: RL for EE, Oversight w/out Nationalization, & the first AI-Run Retail Store

AI in Business: Turning Computer Vision Into Real‑World Value at Enterprise Scale – with Joseph Nelson of Roboflow