May 2, 2026

D.A.D. today covers 11 stories from 4 sources. What's New, What's in the Lab, What's in Academe, and What's On The Pod.

D.A.D. Joke of the Day: My AI assistant is great at summarizing meetings. Unfortunately, it keeps summarizing them as "preventable."

What's New

AI developments from the last 24 hours

Uber Reportedly Burned 2026 AI Budget in Four Months on Coding Tools

Uber reportedly burned through its entire 2026 AI budget in four months after rolling out Claude Code to engineers in December 2025. According to the report, 95% of Uber engineers now use AI tools monthly, and 70% of committed code originates from AI assistance. API costs run $500–$2,000 per engineer per month; spending doubled by February and exhausted the annual allocation by April. Community reaction was skeptical: commenters questioned whether the article was planted PR, noted it lacked primary sources, and pointed out that AI usage is reportedly tied to performance evaluations, potentially inflating adoption figures.

Why it matters: If accurate, this signals that enterprise AI coding costs may be far less predictable than budget planners assumed—and that usage-based pricing could create runaway expenses when tools prove genuinely useful.


Users Claim LGBTQ-Themed Prompts Can Bypass AI Safety Filters

A Hacker News discussion highlights an alleged jailbreak technique where framing prompts with LGBTQ-related context reportedly helps bypass AI safety guardrails. The claimed mechanism: models trained to be supportive of LGBTQ topics may prioritize that over other restrictions. No formal research or evidence was provided—this appears to be anecdotal observation from users testing model behavior. Community reaction ranged from amused to critical, with some arguing it exposes fundamental weaknesses in linguistic guardrails.

Why it matters: If validated, it would illustrate how competing priorities in AI training can create exploitable gaps—a recurring challenge as labs try to make models both helpful and safe.


Leaked Files Confirm Apple Uses Anthropic's Claude for Internal Development

A user discovered that Apple accidentally shipped Claude.md configuration files inside the Apple Support app—a minor slip that confirms Apple is using Anthropic's Claude for internal development. The files appear to contain nothing sensitive, just project structure information. One commenter cited a Bloomberg report claiming Apple relies heavily on Anthropic for internal development.

Why it matters: The leak offers a rare glimpse into how major tech companies are quietly integrating AI coding assistants into their workflows—even companies not publicly associated with generative AI products.


OpenAI Launches Restricted Cybersecurity Tool After Criticizing Anthropic's Similar Approach

OpenAI will begin rolling out GPT-5.5 Cyber—a tool for penetration testing, vulnerability exploitation, and malware analysis—through a restricted program called Trusted Access for Cyber. Only verified cybersecurity professionals can apply, with tiered access to increasingly capable models. The company says thousands of defenders and hundreds of teams already have access. The move mirrors an approach OpenAI's Sam Altman previously criticized when Anthropic restricted its own cybersecurity tool. Community reaction has been skeptical, with commenters dismissing it as marketing theater.

Why it matters: The apparent reversal signals that restricted-access programs for high-risk AI capabilities are becoming industry standard—regardless of how labs publicly position themselves on openness.


PyTorch Lightning Package Compromised With Credential-Stealing Malware

The Python package 'lightning'—a popular deep learning training framework—was compromised in versions 2.6.2 and 2.6.3, published April 30. Security researchers found obfuscated code that allegedly steals credentials, API tokens, environment variables, and cloud secrets, while attempting to poison connected GitHub repositories. The attack is attributed to the same threat actor behind previous incidents. Organizations using PyTorch Lightning should immediately verify which version is installed and rotate any credentials that may have been exposed.
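
For teams doing a quick audit, here is a minimal check sketch in Python (3.8+). The compromised version numbers come straight from the report above; the warning text and structure are illustrative, not an official remediation script:

```python
# Minimal audit sketch: flag an install of the reportedly compromised
# 'lightning' releases (2.6.2 and 2.6.3, per the report above).
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"2.6.2", "2.6.3"}

try:
    installed = version("lightning")
except PackageNotFoundError:
    installed = None  # package not present in this environment

if installed in COMPROMISED:
    print(f"WARNING: lightning {installed} is a reportedly compromised release.")
    print("Uninstall it, reinstall a pinned known-good version, and rotate "
          "any credentials this environment could have exposed.")
elif installed:
    print(f"lightning {installed} is not on the reported compromised list.")
else:
    print("lightning is not installed in this environment.")
```

Pinning the dependency (for example, lightning==<known-good version> in requirements.txt) keeps a routine pip install from silently pulling a compromised release.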

Why it matters: Supply chain attacks on AI tooling are escalating—if your data science team installs packages without version pinning, compromised dependencies can silently harvest cloud credentials and spread to your code repositories.


What's in the Lab

New announcements from major AI labs

Meta Adds Verifiable Security to Encrypted Message Backups

Meta announced security upgrades to the hardware infrastructure protecting encrypted backups for WhatsApp and Messenger. The system stores recovery codes in tamper-resistant hardware security modules (specialized chips designed to resist physical tampering), which Meta says keeps them inaccessible to Meta itself, cloud providers, or third parties. New measures include over-the-air key distribution for Messenger, validated by both Cloudflare and Meta, and a commitment to publish evidence of secure deployments.

Why it matters: For organizations using WhatsApp or Messenger in business contexts, this signals Meta's push toward provable security claims—moving from 'trust us' to 'verify us'—which may matter for compliance-conscious enterprises evaluating encrypted communication tools.


What's in Academe

New papers on AI and its effects from researchers

Government Identity Systems Fail Blind Users, Study Finds

A new study combining 219 Reddit posts and 16 interviews with blind and low-vision users documents how identity verification systems (the CAPTCHA screens, ID photo uploads, and selfie checks now standard in government services) create systematic barriers for people who can't see them. Researchers found these visual-first security designs force users into workarounds that often compromise their privacy or exclude them from essential services entirely. The study frames this as a design failure, not a user limitation.

Why it matters: As AI-powered identity verification spreads across government and enterprise services, accessibility gaps are being built into critical infrastructure—a compliance and equity issue organizations will increasingly need to address.


AI Assists Risk Brainstorming for Chatbots but Not High-Stakes Medical Tools

Researchers ran workshops with 82 participants to test how AI can assist teams brainstorming potential harms of AI systems. The surprising finding: AI assistance improved assessment quality for general-purpose applications like chatbot companions but provided no benefit for specialized, high-stakes systems like kidney allocation tools. The study recommends that AI offer hints rather than complete solutions during early ideation, and that it handle tedious process tasks rather than core creative work.

Why it matters: For organizations building AI governance processes, this suggests AI-assisted risk assessment may work better for some product categories than others—and that how you deploy AI in these workflows matters as much as whether you do.


Normativity and Productivism: Ableist Intelligence? A Degrowth Analysis of AI Sign Language Translation Tools for Deaf People

Summary not available.


AI Job-Risk Scores Flip Conclusions Depending on Which Model Runs Them

A new NBER paper finds that AI-generated job exposure scores—used to predict which occupations are most affected by automation—produce wildly inconsistent results depending on which model does the scoring. Researchers ran identical tasks through three frontier LLMs and found mean exposure scores diverged 3.6-fold, with model agreement as low as 57%. More troubling: in economic analyses, county-level employment effects flipped from significantly negative to insignificantly positive depending on which AI annotator was used.
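
As a purely hypothetical illustration of the consistency problem (the model names, occupations, and scores below are invented, and the 0.1 agreement tolerance is an arbitrary choice, not the paper's method), here is a sketch of how such divergence can be quantified:

```python
# Hypothetical sketch: quantify cross-model divergence in AI job-exposure
# scores. All names and numbers below are invented for illustration.
from itertools import combinations
from statistics import mean

scores = {
    "model_a": {"paralegal": 0.9, "welder": 0.2, "teacher": 0.6},
    "model_b": {"paralegal": 0.5, "welder": 0.1, "teacher": 0.8},
    "model_c": {"paralegal": 0.7, "welder": 0.1, "teacher": 0.3},
}

# Fold difference between the highest- and lowest-scoring model overall.
means = {m: round(mean(s.values()), 2) for m, s in scores.items()}
fold = max(means.values()) / min(means.values())

def agreement(a: str, b: str, tol: float = 0.1) -> float:
    """Share of occupations two models score within `tol` of each other."""
    return mean(abs(scores[a][o] - scores[b][o]) <= tol for o in scores[a])

pairs = {f"{a}/{b}": round(agreement(a, b), 2)
         for a, b in combinations(scores, 2)}

print(f"mean exposure by model: {means}")
print(f"divergence between most and least aggressive model: {fold:.1f}x")
print(f"pairwise agreement rates: {pairs}")
```

Run against real annotations, a check like this would surface the instability the paper reports before exposure scores feed downstream analyses.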

Why it matters: If your organization is using AI-generated labor market analysis to guide workforce planning or policy decisions, this research suggests those inputs may be far less reliable than assumed—conclusions could change dramatically based on which model produced them.


Conflicting AI Productivity Research Traced to Measurement Mismatches

The National Bureau of Economic Research published a review examining how researchers measure AI adoption at the firm level—and found a significant problem. Different datasets capture different things: whether a company invented AI tools, bought them, built internal capabilities, or outsourced them. These distinctions matter because studies using different measurement approaches can reach conflicting conclusions about AI's economic effects. The paper offers a framework for interpreting the growing body of research on AI's business impact.

Why it matters: As executives face pressure to justify AI investments, this research suggests the ROI studies they're reading may not be measuring what they think—making it harder to benchmark their own efforts against industry claims.


What's On The Pod

Some new podcast episodes

The Cognitive Revolution: The RL Fine-Tuning Playbook: CoreWeave's Kyle Corbitt on GRPO, Rubrics, Environments, Reward Hacking

AI in Business: Capturing Tribal Knowledge to Solve the Manufacturing Skills Gap - with Sebastian Dykas of Smith+Nephew