May 3, 2026

D.A.D. today covers 10 stories from 3 sources: What's New, What's in Academe, and What's on the Pod.

D.A.D. Joke of the Day: My AI assistant said it couldn't help me lie on my resume. Then it offered to "reframe my experiences more compellingly." Guess it learned from HR.

What's New

AI developments from the last 24 hours

VS Code Reportedly Tags Commits as AI-Assisted Even When Copilot Wasn't Used

VS Code is reportedly adding 'Co-Authored-by Copilot' tags to git commits even when users didn't use the AI assistant to write the code. The behavior, flagged in developer discussions, has drawn sharp criticism. Users compared it to 'Sent from my iPhone' marketing tactics and called it 'growth hacking.' One commenter raised a more substantive concern: whether falsely attributing code to an AI could affect copyright status, since non-human authorship claims can complicate intellectual property rights.

Why it matters: If confirmed, this could create legal ambiguity around code ownership and signals how AI branding is being embedded into developer workflows—sometimes without clear consent.


Chinese Open-Weights Model Beats Claude and GPT-5.5 in Coding Competition

In an ongoing AI coding competition, Kimi K2.6—an open-weights model from Chinese startup Moonshot AI—won a programming challenge against frontier models from OpenAI, Anthropic, Google, and xAI. Xiaomi's MiMo V2-Pro placed second. GPT-5.5 came third; Claude Opus 4.7 finished fifth. The Word Gem Puzzle challenge required models to solve sliding-tile letter puzzles on grids up to 30×30 with 10-second time limits per round. Kimi went 7-1-0 in matches and posted the highest cumulative score.

Why it matters: Open-weights models from Chinese labs beating Western frontier systems in head-to-head coding tasks—even narrow ones—signals that the competitive gap may be closing faster than expected, giving enterprises more viable options outside the US majors.


GitHub Project Claims to Turn Claude Into a Design Tool—But Community Is Skeptical

A GitHub project called 'Open Design' claims to let users turn coding agents like Claude into design tools, generating visual designs through code-based workflows. The repository reportedly gained around 14,000 stars in its first week. Community reaction has been skeptical—commenters flagged the unusually rapid star growth as suspicious and criticized the README's promotional tone. Some users questioned whether the approach offers real advantages over image-generation tools already built into ChatGPT and other assistants.

Why it matters: The skepticism here reflects a broader pattern: as AI tools proliferate, distinguishing genuine capabilities from hype—and organic traction from manufactured buzz—is becoming a necessary skill for teams evaluating new tools.


Starlink Terminals Smuggled Into Iran to Bypass Three-Month Internet Blackout

A smuggling network is moving Starlink satellite terminals into Iran to circumvent a government internet blackout now in its third month, following US and Israeli airstrikes in late February. Human rights group Witness estimates over 50,000 terminals are now in the country, with one Telegram channel alone moving roughly 5,000 units over 2.5 years. Smugglers face up to 10 years in prison for distributing 10 or more devices. The blackout follows protests that, according to human rights monitors, have resulted in more than 6,500 deaths and 53,000 arrests.

Why it matters: This is satellite internet as geopolitical infrastructure—Starlink's ability to bypass national controls is becoming a factor in authoritarian crackdowns, raising questions about how governments and SpaceX will respond as the pattern repeats globally.


Maryland Becomes First State to Ban AI-Driven Grocery Price Increases

Maryland has become the first U.S. state to ban AI-driven price increases in grocery stores, according to the New York Times. The legislation targets algorithmic pricing tools that can adjust prices based on demand, time of day, or other factors. Details of enforcement mechanisms weren't immediately clear from available reporting. Community discussion was mixed—some questioned why the ban targets only groceries rather than broader algorithmic pricing practices.

Why it matters: This signals growing state-level appetite to regulate AI in consumer-facing applications, and could serve as a template for similar restrictions in other states or sectors like housing and healthcare where algorithmic pricing has drawn scrutiny.


What's in Academe

New papers on AI and its effects from researchers

Text Analysis Tool Maps Who Did What to Whom in Customer Feedback

Researchers have released TEA Nets, an open-source Python framework that extracts who did what to whom from text—mapping subjects, verbs, and objects into analyzable networks. The tool combines NLP with cognitive network science to detect emotional patterns and semantic structures. In testing on conspiracy theory texts, highly conspiratorial narratives linked personal pronouns to the same actions twice as often as low-conspiracy content, and connected person-focused elements through anger-eliciting language at statistically significant rates.

Why it matters: For teams analyzing customer feedback, social media sentiment, or internal communications at scale, this offers a more interpretable alternative to black-box sentiment scores—you can see the actual linguistic structures driving emotional signals.
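The core idea, mapping subject-verb-object triples into an analyzable network, can be sketched in a few lines of plain Python. This is a toy illustration of the technique, not TEA Nets' actual API; the triples are hard-coded where a real pipeline would extract them with a dependency parser.

```python
from collections import Counter

# Toy "who did what to whom" network: each item is a
# (subject, verb, object) triple extracted from text.
# Hard-coded here for illustration; a real pipeline would
# produce these with a dependency parser.
triples = [
    ("they", "control", "media"),
    ("they", "control", "government"),
    ("we", "expose", "truth"),
    ("they", "hide", "evidence"),
]

def build_network(triples):
    """Map each subject to a Counter of its (verb, object) actions."""
    network = {}
    for subj, verb, obj in triples:
        network.setdefault(subj, Counter())[(verb, obj)] += 1
    return network

net = build_network(triples)
for actor, actions in net.items():
    print(actor, dict(actions))
```

With the network in hand, questions like "how often does a pronoun link to the same action" reduce to counting repeated `(verb, object)` pairs per subject, which is the kind of structural signal the paper reports for conspiratorial texts.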


General AI Assistants Outperform Specialized Tools on Data Visualization—But Cost More

Researchers tested eight AI agents across three design approaches—specialized tools, general coding assistants, and screen-control agents—on scientific visualization tasks like creating charts and processing data. General-purpose coding agents (like those in ChatGPT or Claude) achieved the highest success rates but consumed significantly more computing resources. Specialized agents proved more efficient but less adaptable. Screen-control agents handled individual steps well but faltered on multi-step workflows.

Why it matters: For teams using AI to generate reports or visualizations, this frames a real tradeoff: flexible tools cost more to run, while efficient ones may hit walls on complex tasks.


Personalized AI Exercises Cut Student Dropout From 30% to Under 1%

A study of 409 first-year computer science students found that LLM-generated personalized exercises dramatically reduced dropout on programming assignments. Standard worksheets saw 25-30% incompletion rates among struggling students; personalized versions hit over 99% completion across all learner types. Low-knowledge, low-motivation students scored 18% higher on correctness when materials were tailored to their profiles. The catch: high performers saw ceiling effects, suggesting personalization benefits those who need scaffolding most rather than accelerating top students.

Why it matters: For corporate training teams, this suggests AI-personalized learning materials could significantly improve completion rates in technical upskilling programs—particularly for employees struggling with new tools or concepts.


NVIDIA Opens Multimodal Model That Handles Audio, Video, and Text Together

NVIDIA released Nemotron 3 Nano Omni, a multimodal model that natively processes audio alongside text, images, and video. The company claims improvements over its predecessor across all input types, with particular strength in document understanding, long audio-video comprehension, and computer-use tasks. NVIDIA is releasing model weights in multiple precision formats along with portions of training data and code—a notable openness move for the company. The model uses token-reduction techniques that NVIDIA says deliver lower latency than comparable models.

Why it matters: NVIDIA's move into open foundation models—with training data included—gives enterprises more options for building multimodal applications in-house, competing directly with Meta's Llama approach.


Proposed Architecture Would Use Cheaper AI for Routine Desktop Tasks

Researchers propose a new architecture for AI agents that control computers: instead of running expensive large models at every click and keystroke, a lightweight model handles routine actions while monitors watch for trouble. When the agent gets stuck or reaches a critical decision point, it escalates to a more powerful model. The approach targets the inefficiency of treating every GUI interaction as equally complex. This is preliminary work without benchmark results.

Why it matters: If validated, this could make AI desktop automation significantly cheaper to run, potentially bringing down costs for enterprise deployments of computer-use agents.
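The escalation pattern the paper proposes can be sketched as a simple router: a cheap policy acts by default, and a monitor escalates to an expensive model when confidence drops or the agent repeats itself. Everything below is a hypothetical stub, the model functions, confidence values, and threshold are invented for the example, not taken from the paper.

```python
def cheap_model(step):
    # Stand-in for a lightweight policy: returns (action, confidence).
    routine = {"click_ok": 0.95, "type_name": 0.90, "novel_dialog": 0.40}
    return step, routine.get(step, 0.30)

def expensive_model(step):
    # Stand-in for a frontier model, invoked only on escalation.
    return step, 0.99

def run_agent(steps, threshold=0.8, max_repeats=2):
    """Route each GUI step to the cheap model; escalate when the
    monitor sees low confidence or a stuck (repeating) action."""
    history, escalations = [], 0
    for step in steps:
        action, conf = cheap_model(step)
        stuck = history[-max_repeats:] == [action] * max_repeats
        if conf < threshold or stuck:  # monitor triggers escalation
            action, conf = expensive_model(step)
            escalations += 1
        history.append(action)
    return history, escalations
```

Running `run_agent(["click_ok", "type_name", "novel_dialog"])` escalates only on the unfamiliar dialog, which is the cost argument in miniature: the expensive model is billed for one step out of three rather than all of them.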


What's on the Pod

Some new podcast episodes

The Cognitive Revolution: "The RL Fine-Tuning Playbook: CoreWeave's Kyle Corbitt on GRPO, Rubrics, Environments, Reward Hacking"