April 15, 2026

D.A.D. today covers 17 stories from 5 sources, organized into What's New, What's Innovative, What's Controversial, What's in the Lab, and What's in Academe.

D.A.D. Joke of the Day: My AI gave me five different answers to the same question. My wife said, "Finally, something that understands marriage."

What's New

AI developments from the last 24 hours

EFF Warns AI Surveillance Network Tracks 100,000 Cameras Without Warrants

The Electronic Frontier Foundation is urging pushback against Flock Safety, whose AI surveillance cameras now number over 100,000 nationwide. The systems go beyond license plate reading to create "Vehicle Fingerprints"—tracking color, make, model, dents, and bumper stickers. Police can access the network without warrants. EFF documents over 3,000 law enforcement agencies using Flock products. The privacy concerns aren't hypothetical: a Kansas police chief allegedly used Flock cameras 228 times to stalk an ex-girlfriend, and a journalist driving across rural Virginia was captured by nearly 50 cameras from 15 different agencies.

Why it matters: This signals growing tension between AI-powered law enforcement tools and Fourth Amendment protections—a regulatory and legal fight that will shape how surveillance technology operates in American communities.


Anthropic Launches Autonomous Coding Sessions That Run Without Human Approval

Anthropic has released Claude Code Routines in research preview, enabling Claude Code to run autonomous coding sessions in the cloud without requiring human approval during execution. Routines can be triggered by schedules, API calls, or GitHub events—think automated code review when PRs are opened, documentation updates on a schedule, or alert triage when incidents fire. The feature targets repeatable DevOps and maintenance work that currently requires developer attention.

Why it matters: This moves AI coding assistants from interactive tools toward autonomous agents that can handle routine development work overnight or in response to events—a meaningful shift for teams exploring how to reduce toil without adding headcount.


Fiverr Allegedly Left Tax Documents With Social Security Numbers Publicly Searchable

A security researcher claims Fiverr has left customer files—including tax documents with Social Security numbers—publicly accessible and indexed by Google. The alleged exposure stems from Fiverr's use of public URLs rather than expiring links for files shared in client-freelancer messages. The researcher says they reported the issue to Fiverr's security team 40 days ago and received no response, prompting public disclosure. Hacker News users called the alleged leak of Form 1040s "egregious" and "brutal."

Why it matters: If confirmed, this represents a significant data protection failure at a major gig-economy platform, with potential regulatory consequences and a cautionary signal about how companies handle sensitive documents in messaging systems.


Backblaze Reportedly Stopped Backing Up Cloud Folders Without Telling Customers

A longtime Backblaze user reports discovering that the backup service has quietly stopped backing up OneDrive and Dropbox folders, as well as .git folders, without notifying customers. The exclusions allegedly don't appear in the app's visible exclusion list, making them easy to miss. The user notes that cloud sync services typically retain deleted files for only 30 days, while Backblaze advertises one-year retention—meaning affected users may have significantly less protection than they believed. The discovery reportedly surfaced through a Reddit thread, suggesting it may be affecting multiple customers.

Why it matters: If confirmed, this represents a significant gap between Backblaze's "unlimited backup" marketing and actual coverage—users relying on it for disaster recovery should verify their cloud sync folders are actually being backed up.


10,000 Concert Tapes From the 1980s Go Online Thanks to Internet Archive Volunteers

A Chicago music fan who has recorded over 10,000 concert tapes since the 1980s is working with Internet Archive volunteers to digitize the collection. About 2,500 tapes are now online, including rare recordings of Nirvana (from 1989, two years before their breakthrough), Sonic Youth, R.E.M., Phish, and Neutral Milk Hotel. Volunteer audio engineers are cleaning up recordings originally made on mediocre equipment. The project has sparked nostalgia for 90s bootleg culture and inspired others to upload their own recordings.

Why it matters: It's a reminder that the Internet Archive—currently fighting legal battles over digital lending—remains a unique repository for cultural preservation that no commercial platform would undertake.


What's Innovative

Clever new use cases for AI

Open-Source Platform Promises Cheaper AI-Powered Investment Research

A new open-source project called LangAlpha offers an AI agent platform built specifically for investment research. The Apache 2.0-licensed tool provides sandboxed workspaces where AI can execute code against financial data, with TradingView charts and live market feeds built into the interface. The creators claim to have solved a practical problem: when AI agents pull financial data through standard protocols, a single call for five years of daily prices can consume tens of thousands of tokens—expensive and inefficient. Their approach reportedly keeps prompt costs flat whether using 3 or 80 tools. Community reaction on Hacker News was cautiously interested, with one commenter warning users would "lose a lot of money" and others requesting clearer validation.
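The token math the creators describe is easy to verify with back-of-the-envelope arithmetic. The sketch below is an illustration of the cost gap, not LangAlpha's actual accounting; the per-row token count is an assumption.

```python
# Back-of-the-envelope illustration (not LangAlpha's implementation) of why
# piping raw price data through a prompt is expensive, and why letting the
# agent run code against the data in a sandbox keeps prompt size flat.

TRADING_DAYS_PER_YEAR = 252
YEARS = 5
TOKENS_PER_ROW = 25  # rough assumption: date + OHLCV fields serialized as text

rows = TRADING_DAYS_PER_YEAR * YEARS   # 1,260 daily bars
inline_tokens = rows * TOKENS_PER_ROW  # data pasted directly into the prompt
sandbox_tokens = 200                   # a short code snippet the agent writes instead

print(f"Inline data: ~{inline_tokens:,} tokens")        # ~31,500 tokens
print(f"Sandboxed code call: ~{sandbox_tokens} tokens")
```

Even with conservative assumptions, inlining the data costs two orders of magnitude more tokens than having the agent fetch it with code, which matches the creators' "tens of thousands of tokens" claim.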

Why it matters: This is developer infrastructure for now, but signals growing interest in domain-specific AI agents—finance being an obvious early target given the data intensity and potential payoff.


Python Framework Claims to Be Built for AI Coding Agents

A developer introduced Plain, a new Python web framework positioned as designed for both human developers and AI agents. The project appears to be a Django-based alternative aimed at being easier for LLMs to write and work with. Community reaction on Hacker News has been skeptical—some describe it as an "arbitrary fork of Django" or "vibe-coded" (built primarily using AI assistance), while others see potential given Django's age and complexity. No benchmarks or evidence of agent-specific capabilities were provided.

Why it matters: The mixed reception illustrates growing tension around "AI-native" tools—frameworks explicitly designed for LLMs to generate—and whether they offer genuine improvements or just repackage existing code with buzzwords.


What's Controversial

Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community

Surveillance Firm Tells California Resident: We Don't Own Your Data, Ask the Police

A California resident's attempt to delete their data from Flock Safety, which operates license plate reader networks for law enforcement, was refused. The company claims it's merely a "service provider" for its municipal and police customers, who technically own the surveillance data—meaning individuals must contact each agency directly to request deletion. Flock says its systems retain plate images for 30 days by default and don't capture names or addresses. Community reaction on Hacker News was skeptical, with one commenter arguing that if this interpretation holds, "the CCPA isn't worth the paper it's written on."

Why it matters: This case highlights a potential loophole in state privacy laws: companies operating surveillance infrastructure may deflect deletion requests by positioning themselves as processors rather than data controllers, leaving consumers to navigate a maze of government agencies to exercise their rights.


What's in the Lab

New announcements from major AI labs

OpenAI Creates Specialized Cybersecurity Model for Vetted Defenders Only

OpenAI announced GPT-5.4-Cyber, a specialized model available exclusively to vetted cybersecurity defenders through its Trusted Access for Cyber program. The company says it's expanding access to advanced AI capabilities for security professionals while simultaneously strengthening safeguards as AI cyber capabilities grow more powerful. No technical details or benchmarks were provided about what distinguishes this model from standard GPT offerings or what specific defensive capabilities it enables.

Why it matters: Signals OpenAI is creating tiered access to its most capable models based on use case and vetting—a governance approach that could become standard as frontier AI capabilities raise both defensive and offensive security stakes.


Chrome Now Lets You Save AI Prompts as One-Click Tools

Google is rolling out 'Skills' in Chrome, a feature that lets users save frequently used AI prompts and run them with one click across any browser tab. Instead of retyping the same instructions—summarize this article, extract key dates, rewrite for clarity—users can store them as reusable tools accessible from the browser. The feature works with Gemini in Chrome on desktop and is designed to reduce the friction of repetitive prompt entry.

Why it matters: This addresses one of the practical annoyances of daily AI use: copying and pasting the same prompts repeatedly. It signals browsers are becoming the interface layer for AI workflows rather than standalone chat windows.


Google Pledges $120 Million for Global AI Workforce Training

Google hosted its first AI for the Economy Forum in Washington D.C. with MIT FutureTech, announcing a package of workforce and research investments. The headline commitment: a $120 million Global AI Opportunity Fund for AI skills training. The company also unveiled research funding programs, Google Cloud credits for academics, and partnerships with Johnson & Johnson Foundation (rural healthcare AI training) and Jobs for the Future to recruit 100 companies into AI-related apprenticeships. Google says it has trained 100 million people globally in digital skills to date.

Why it matters: This is Google positioning itself as a constructive player on AI labor disruption—a strategic bet that shaping the policy conversation early matters as regulatory scrutiny intensifies.


Google DeepMind Releases Robotics Model for Spatial Reasoning

Google DeepMind released Gemini Robotics-ER 1.6, a reasoning model designed to help robots understand physical environments. The upgrade focuses on spatial reasoning—letting robots point at objects, count items, and detect whether tasks succeeded. Google claims "significant improvement" over previous versions but provided no benchmark numbers. The model is now available to developers through the Gemini API and Google AI Studio.

Why it matters: This signals Google's push into embodied AI—making language models useful for physical-world applications—though the lack of hard performance data makes it difficult to assess how much ground they've actually gained.


What's in Academe

New papers on AI and its effects from researchers

Rubric-Based Training Lifts Smaller AI Models to Near GPT-4 Accuracy

Researchers have developed rDPO, a training framework that improves how multimodal AI models learn from human preferences on visual reasoning tasks. Instead of simple right/wrong feedback, the method uses detailed checklists to score AI responses on specific criteria—essentially giving the model a rubric rather than just a grade. On benchmarks, this approach lifted a smaller model's judgment accuracy to near GPT-4 levels and improved downstream task performance by roughly 9% over simpler filtering methods.
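The rubric-versus-grade distinction can be made concrete. This is a schematic sketch only; the criteria names and weights below are hypothetical, not taken from the rDPO paper.

```python
# Schematic contrast between right/wrong feedback and rubric-based scoring.
# The rubric items here are hypothetical illustrations, not rDPO's criteria.

def binary_grade(answer: str, reference: str) -> int:
    """Simple right/wrong feedback: 1 if exact match, else 0."""
    return int(answer.strip() == reference.strip())

def rubric_grade(checks: dict[str, bool], weights: dict[str, float]) -> float:
    """Weighted checklist: partial credit for each criterion satisfied."""
    total = sum(weights.values())
    return sum(weights[c] for c, ok in checks.items() if ok) / total

checks = {
    "identifies_objects": True,     # did the model name the right objects?
    "counts_correctly": True,       # is the count right?
    "reasoning_is_grounded": False, # does the explanation cite the image?
}
weights = {"identifies_objects": 1.0, "counts_correctly": 1.0,
           "reasoning_is_grounded": 2.0}

print(binary_grade("3 cats", "three cats"))  # 0 -- no partial credit
print(rubric_grade(checks, weights))         # 0.5 -- credit for what was right
```

The training signal is richer in the second case: the model learns *which* aspect of its response fell short rather than just that it failed.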

Why it matters: This is research plumbing for now, but suggests future vision-capable AI assistants could get meaningfully better at complex visual tasks through smarter training techniques rather than just bigger models.


AI Agents Perform Better When They Share Files Instead of Talking

Researchers developed AiScientist, a system for autonomous machine learning research that coordinates multiple AI agents working on long-running engineering tasks. The key insight: instead of agents passing information through conversation, they read and write to shared files—a "File-as-Bus" architecture that maintains project state across extended work sessions. On benchmarks measuring AI's ability to reproduce research papers, the system scored 10.5 points higher than baselines. Removing the file-sharing mechanism dropped performance significantly, suggesting durable artifacts matter more than conversational handoffs for complex projects.
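The file-based handoff pattern can be sketched in a few lines. The names and state layout below are illustrative assumptions, not the AiScientist paper's actual design.

```python
# Minimal sketch of a "File-as-Bus" handoff (names and schema are
# illustrative, not from the AiScientist paper): agents coordinate by
# reading and writing a shared state file instead of passing context
# through conversation.

import json
import tempfile
from pathlib import Path

def planner(state_file: Path) -> None:
    """Write the plan as a durable artifact the next agent can pick up."""
    state_file.write_text(json.dumps(
        {"plan": ["load data", "train", "evaluate"], "done": []}))

def worker(state_file: Path) -> None:
    """Resume from the shared file -- no conversational handoff needed."""
    state = json.loads(state_file.read_text())
    task = state["plan"].pop(0)
    state["done"].append(task)  # record progress durably
    state_file.write_text(json.dumps(state))

bus = Path(tempfile.mkdtemp()) / "state.json"
planner(bus)
worker(bus)
print(json.loads(bus.read_text())["done"])  # ['load data']
```

Because the state lives on disk rather than in a context window, any agent (or a restarted session) can pick up where the last one left off, which is the property the ablation result points to.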

Why it matters: This is research infrastructure, but it signals how AI assistants may eventually handle multi-day projects autonomously—coordinating specialized sub-agents while maintaining coherent state, rather than forgetting context between sessions.


Training Small AI Models to Mimic Large Ones Often Fails—Here's Why

New research identifies why training smaller AI models to mimic larger ones sometimes fails. The technique, called on-policy distillation, only works when two conditions are met: the student and teacher models must share compatible reasoning patterns, and the teacher must offer genuinely new capabilities—not just better scores on the same training data. The study found that when distillation succeeds, student and teacher converge on a surprisingly narrow set of tokens (97-99% of probability mass). When models are from different families or the teacher lacks novel knowledge, the process stalls.
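The convergence finding suggests a cheap compatibility check before committing compute. The sketch below is an illustrative proxy metric, not the paper's exact measurement.

```python
# Illustrative compatibility check (not the study's exact metric): how much
# probability mass the student places on the teacher's top tokens. The paper
# reports successful pairs concentrate 97-99% of mass on a shared narrow set.

def shared_mass(teacher: dict[str, float], student: dict[str, float],
                k: int = 3) -> float:
    """Probability mass the student puts on the teacher's top-k tokens."""
    top_k = sorted(teacher, key=teacher.get, reverse=True)[:k]
    return sum(student.get(tok, 0.0) for tok in top_k)

# Compatible pair: both distributions concentrate on the same tokens.
teacher = {"the": 0.6, "a": 0.3, "an": 0.05, "this": 0.05}
aligned_student = {"the": 0.55, "a": 0.35, "an": 0.08, "this": 0.02}
print(shared_mass(teacher, aligned_student))    # ~0.98, distillation likely works

# Incompatible pair: mass spread over different tokens (e.g. another family).
mismatched_student = {"le": 0.4, "un": 0.3, "the": 0.2, "a": 0.1}
print(shared_mass(teacher, mismatched_student)) # ~0.30, likely stalls
```

A check like this, run over a sample of prompts, could flag doomed student/teacher pairings before a full distillation run.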

Why it matters: For organizations building or fine-tuning smaller models for cost efficiency, this provides a clearer framework for when distillation will actually work—potentially saving significant compute spent on approaches doomed to fail.


AI Models Score Higher on Policy Application Than Policy Recall

Researchers have released PolicyBench, a 21,000-case benchmark testing how well AI models understand public policy across US and Chinese systems. The benchmark evaluates three cognitive levels—memorization, understanding, and application—based on Bloom's taxonomy. Initial findings show AI models perform better on practical policy application tasks than on rote recall or conceptual questions—a counterintuitive result suggesting current AI may be more useful for policy analysis than policy lookup.

Why it matters: As governments and regulated industries explore AI for policy analysis, compliance, and research, benchmarks like this will help identify which models can actually reason about complex regulatory frameworks rather than just retrieve facts.


AI Code Repair Tools Struggle to Fix Common Security Flaws, Study Finds

Researchers released LogicDS, the first curated dataset of 86 real-world logical vulnerabilities (all with assigned CVEs), alongside a framework called LogicEval for testing how well automated repair tools—including LLMs—can fix them. The finding: current AI-based code repair struggles with these bugs. The main failure modes are sensitivity to how prompts are worded, losing track of surrounding code context, and difficulty pinpointing exactly where patches should go. These aren't exotic edge cases; logical vulnerabilities are common security flaws that static analysis tools routinely miss.

Why it matters: For teams evaluating AI coding assistants for security work, this is a useful reality check—LLMs can help with many coding tasks, but automatically fixing subtle logic bugs in production code remains beyond current capabilities.


What's Happening on Capitol Hill

Upcoming AI-related committee hearings

Wednesday, April 15
Building an AI-Ready America: Understanding AI’s Economic Impact on Workers and Employers
House · House Education and Workforce Subcommittee on Workforce Protections (Hearing)
2175 Rayburn House Office Building


Wednesday, April 15
Hearings to examine six energy bills:
- S.465, to require the Federal Energy Regulatory Commission to reform the interconnection queue process for the prioritization and approval of certain projects
- S.1327, to require the Federal Energy Regulatory Commission to establish a shared savings incentive to return a portion of the savings attributable to an investment in grid-enhancing technology to the developer of that technology
- S.3034, to amend the Federal Power Act to require the Federal Energy Regulatory Commission to review regulations that may affect the reliable operation of the bulk-power system
- S.3192, to require Transmission Organizations to allow aggregators of retail customers to submit to organized wholesale electric markets bids that aggregate demand flexibility of customers of certain utilities
- S.3269, to direct the Comptroller General of the United States to conduct a technology assessment focused on liquid cooling systems for artificial intelligence compute clusters and high-performance computing facilities
- S.3947, to amend the Federal Power Act to establish a categorical exclusion for reconductoring within existing rights-of-way
Senate · Senate Energy and Natural Resources Subcommittee on Energy (Meeting)
366 Dirksen Senate Office Building


Thursday, April 16
Hearing: China’s Campaign to Steal America’s AI Edge
House · Unknown Committee (Hearing)
390 Cannon House Office Building


What's On The Pod

Some new podcast episodes

AI in Business: Turning Computer Vision Into Real‑World Value at Enterprise Scale – with Joseph Nelson of Roboflow

AI in Business: Making Workforce Training Affordable with Tiered Storage – with Aaron Demory of Fearlus

How I AI: Claude Cowork 101: How to automate your workday without touching code | JJ Englert (Tenex)