Claim: Lab-Grown Human Brain Cells Play Video Game
March 9, 2026
D.A.D. today covers 9 stories from 4 sources: What's New, What's in Academe, and What's On The Pod.
D.A.D. Joke of the Day: I asked Claude to help me write a resignation letter. Now I'm employee of the month and somehow got a raise.
What's New
AI developments from the last 24 hours
Claim: Lab-Grown Human Brain Cells Play Video Game
Australian biotech company Cortical Labs demonstrated roughly 200,000 living human neurons grown on a microelectrode array—their CL1 "biological computer"—controlling the classic 1993 video game Doom. The system converts gameplay into electrical signals: when enemies appear onscreen, electrodes stimulate corresponding neural regions, and the neurons' responses are interpreted as player actions like movement or firing. Cortical Labs says the neurons exhibit "goal-directed learning" through reinforcement feedback, building on their 2022 achievement of teaching lab-grown neurons to play Pong. Performance is rudimentary—"a complete beginner who has never seen a keyboard," as one outlet put it—and technically minded commenters on Hacker News questioned whether the neurons are truly learning or the machine learning decoder is doing the heavy lifting.
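In closed-loop terms, that pipeline is easy to sketch. The snippet below is a toy illustration only: the encode/stimulate/read helpers and the linear decoder are hypothetical stand-ins, not Cortical Labs' actual CL1 interface, which hasn't been published in the coverage.

```python
import numpy as np

# Hypothetical closed loop: game state -> electrode stimulation -> spike
# readout -> decoded action. All names here are illustrative assumptions.

N_ELECTRODES = 64
ACTIONS = ["move_left", "move_right", "fire", "idle"]

def encode_game_state(enemy_bearing: float) -> np.ndarray:
    """Map an enemy's on-screen bearing (-1..1) to a spatial
    stimulation pattern across the electrode grid."""
    center = int((enemy_bearing + 1) / 2 * (N_ELECTRODES - 1))
    pattern = np.zeros(N_ELECTRODES)
    pattern[max(0, center - 2): center + 3] = 1.0  # stimulate a local patch
    return pattern

def decode_spikes(spike_counts: np.ndarray, weights: np.ndarray) -> str:
    """Linear decoder: project spike counts onto per-action weights.
    This is the layer HN commenters suspect does the heavy lifting."""
    scores = weights @ spike_counts  # weights: (len(ACTIONS), N_ELECTRODES)
    return ACTIONS[int(np.argmax(scores))]

# One tick of the loop, with stimulate() and read_spikes() standing in
# for the microelectrode-array hardware interface:
#   stimulate(encode_game_state(bearing))
#   action = decode_spikes(read_spikes(window_ms=100), decoder_weights)
```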
Why it matters: This is a proof-of-concept, not a product—but it advances a genuinely novel computing paradigm that could eventually inform drug research and neural adaptation studies. The ethical debate is already heated: commenters are asking what happens when these systems scale beyond ant-level neuron counts.
Discuss on Hacker News · Source: youtube.com
Opinion: OpenAI's Own Charter Says It Should Step Aside for Rivals
An opinion piece argues OpenAI should honor its 2018 charter's "self-sacrifice clause," which commits the company to stop competing and assist rivals if a "value-aligned, safety-conscious project" approaches AGI. The author's case: Sam Altman's public AGI predictions have compressed dramatically—from roughly 10 years out in 2023 to claiming AGI has essentially arrived in early 2026—while competitors now lead key benchmarks. The piece cites Arena rankings showing Claude and Gemini models outperforming GPT variants across multiple categories including coding and expert-level tasks.
Why it matters: A pointed reminder that OpenAI's founding documents contain commitments that look increasingly awkward as the company pursues for-profit restructuring—fodder for critics questioning whether safety rhetoric matches corporate behavior.
Discuss on Hacker News · Source: mlumiste.com
Could AI Finally Make 'Literate Programming' Practical for Documentation?
A blog post argues that literate programming, a decades-old practice of weaving explanatory prose directly into code, deserves another look now that AI agents exist. The longstanding obstacle: keeping documentation and code in sync required tedious manual effort, which limited adoption. The author suggests AI assistants can now handle that maintenance burden automatically, translating and summarizing to keep narratives current. There are no benchmarks or formal tests, just the author's experience using Claude and other AI tools with Emacs for runbooks and documentation.
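For readers who haven't seen the format, here is a minimal literate fragment in Emacs org-babel style, one common Emacs vehicle for literate programming (the post doesn't specify the author's exact tooling). The maintenance job being delegated to AI is re-checking the prose against the code block whenever either changes:

```org
* Rotate application logs
We keep the last 7 compressed log files; anything older is deleted.
An AI agent's task in this setup is to verify this paragraph still
describes the code below after every edit.

#+begin_src python :results output
from pathlib import Path

KEEP = 7  # the prose above must agree with this retention count
logs = sorted(Path("/var/log/myapp").glob("*.log.gz"))
for stale in logs[:-KEEP]:
    stale.unlink()
    print(f"removed {stale}")
#+end_src
```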
Why it matters: If the thesis holds, teams struggling with documentation rot—where docs drift out of sync with code—might find AI-assisted literate programming a practical solution, though this remains unproven beyond anecdote.
Discuss on Hacker News · Source: silly.business
Users Report Mixed Experience as AI Chatbot Demand Shifts
A Hacker News thread claims Claude is struggling with demand as users leave ChatGPT, though the evidence is thin—mostly anecdotal reports and speculation. Community reaction is mixed: some users say they've actually switched away from Claude due to changing usage limits and pricing, finding OpenAI offers better value. Others suggest any demand surge may reflect broader AI adoption in workplace settings rather than a specific exodus from ChatGPT. One user countered the reliability narrative by citing data showing Claude has significantly less downtime than comparable services.
Why it matters: The discussion reflects how fluid user loyalty remains in the AI assistant market—pricing, limits, and reliability continue to drive switching behavior as professionals shop between major providers.
Discuss on Hacker News · Source: forbes.com
What's in Academe
New papers on AI and its effects from researchers
AI Copilot Boosts Junior Radiologists' Detection of Hard-to-Spot Birth Defects
Researchers trained an AI system on over 45,000 fetal ultrasound images from 22 hospitals to detect orofacial clefts—birth defects notoriously difficult to spot on prenatal scans. The system reportedly matches senior radiologist performance (above 93% sensitivity, 95% specificity) and substantially outperforms junior radiologists. The more interesting finding: when junior radiologists used the AI as a copilot, their detection sensitivity improved by more than 6%. A pilot study with 24 radiologists suggests the tool could accelerate expertise development for rare conditions clinicians rarely encounter during training.
Why it matters: This demonstrates a dual-use model for medical AI—not just diagnosis assistance, but structured skill-building for conditions too rare for trainees to see frequently.
Framework Promises AI Lab Assistants That Are Both Flexible and Auditable
Researchers interviewed 18 experts across 10 industrial R&D organizations and found that current AI systems force a choice between conversational flexibility and reliable, reproducible execution; no existing tool delivers both. After reviewing 20 systems, they propose "schema-gated orchestration," an architecture that lets users interact naturally with an LLM while enforcing strict boundaries on what actually gets executed. The key insight: separate the AI's conversational authority from its execution authority, so chatting freely doesn't compromise scientific reproducibility.
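A minimal sketch of what such a gate could look like, with hypothetical tool names and no claim to match the paper's actual interfaces: the model may say anything in conversation, but only proposals that validate against a declared schema ever reach execution.

```python
import json
from typing import Any

# Hypothetical schema gate. The LLM converses freely; only tool calls
# that validate against this declared schema are executed (and logged
# for the audit trail). Tool names are illustrative assumptions.

ALLOWED_TOOLS: dict[str, dict[str, type]] = {
    "measure_ph": {"sample_id": str},
    "set_temperature": {"celsius": float},
}

def gate(proposal_json: str) -> tuple[str, dict[str, Any]]:
    """Validate an LLM-proposed action against the execution schema.
    Raises on anything outside the declared boundary."""
    proposal = json.loads(proposal_json)
    tool, args = proposal["tool"], proposal["args"]
    schema = ALLOWED_TOOLS.get(tool)
    if schema is None:
        raise PermissionError(f"tool {tool!r} is not in the execution schema")
    if set(args) != set(schema):
        raise ValueError(f"args {sorted(args)} do not match {sorted(schema)}")
    for name, expected in schema.items():
        if not isinstance(args[name], expected):
            raise TypeError(f"{name} must be {expected.__name__}")
    return tool, args  # safe to log, then execute

# Conversational authority stays with the model; execution authority
# stays with gate():
#   gate('{"tool": "measure_ph", "args": {"sample_id": "S42"}}')
```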
Why it matters: For teams using AI in regulated or research-intensive workflows, this framework could help square the circle between user-friendly interfaces and the audit trails compliance requires.
Deep Learning System Creates Radiation Treatment Plans in Under One Second
Researchers have developed AIRT, a system that generates radiation treatment plans for prostate cancer directly from CT scans in under one second—compared to several minutes for current automated tools. Trained on more than 10,000 cases, the system produces plans that matched existing commercial software (RapidPlan Eclipse) on key clinical metrics including tumor coverage and protection of surrounding organs. The approach generates deliverable VMAT plans (a standard radiation delivery technique) without requiring manual optimization steps.
Why it matters: Sub-second plan generation could meaningfully reduce bottlenecks in radiation oncology workflows, where treatment planning currently requires specialized physics staff and significant computation time—potentially expanding access in resource-constrained settings.
Same Evidence, Same Answers: Structured Retrieval Cuts AI Radiology Disagreement by 73%
A study testing 34 large language models on 169 expert radiology questions found that giving all models the same structured evidence, delivered via a retrieval-augmented reasoning pipeline, dramatically reduced how much their answers varied. Median disagreement dropped roughly 73%, and cross-model accuracy improved modestly, from 0.74 to 0.81. The catch: when models agreed, they weren't always right, and 72% of incorrect answers carried moderate-to-high clinical severity. Longer responses didn't correlate with accuracy.
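The summary doesn't spell out the study's exact disagreement metric; one plausible reading is the fraction of model pairs whose answers differ on each question, with the median taken across questions. A small sketch under that assumption:

```python
from itertools import combinations
from statistics import median

def disagreement(answers: list[str]) -> float:
    """Fraction of model pairs giving different answers to one question."""
    pairs = list(combinations(answers, 2))
    return sum(a != b for a, b in pairs) / len(pairs)

def median_disagreement(per_question: list[list[str]]) -> float:
    return median(disagreement(ans) for ans in per_question)

# Toy numbers, not the study's data: answers from 4 models on 2 questions,
# before and after a shared-evidence pipeline.
baseline = [["A", "B", "A", "C"], ["B", "B", "A", "A"]]
with_rag = [["A", "A", "A", "A"], ["B", "B", "B", "A"]]
print(median_disagreement(baseline))  # 0.75
print(median_disagreement(with_rag))  # 0.25
```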
Why it matters: For healthcare organizations evaluating AI diagnostics, this suggests retrieval-augmented systems may produce more consistent outputs across different models—but consistency alone isn't a proxy for correctness, and error severity remains a serious concern.
Open Dataset Aims to Train AI That Anticipates Your Next Phone Action
Researchers have formalized "next action prediction": training AI to anticipate what you'll do next on your device based on usage patterns. They released an open dataset of 360,000+ labeled actions from 1,800 hours of phone usage across 20 users, plus a new model called LongNAP that combines learned patterns with real-time context. LongNAP outperformed baseline approaches by 39-79% on held-out data, with about a quarter of its high-confidence predictions closely matching what users actually did next.
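LongNAP's internals aren't detailed in the summary, but the task itself is easy to illustrate. Below is a toy predictor, not the paper's method, that blends a user's long-run habits with what usually follows the current action:

```python
from collections import Counter, defaultdict

class NextActionPredictor:
    """Toy next-action model: mixes overall action frequencies with
    transitions from the current action. Illustrative only; this is
    an assumption, not LongNAP's actual architecture."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha                   # weight on recent context
        self.habits = Counter()              # overall action frequencies
        self.follows = defaultdict(Counter)  # action -> next-action counts

    def observe(self, prev: str, nxt: str) -> None:
        self.habits[nxt] += 1
        self.follows[prev][nxt] += 1

    def predict(self, current: str) -> str:
        total_h = sum(self.habits.values()) or 1
        total_f = sum(self.follows[current].values()) or 1
        scores = {
            a: self.alpha * self.follows[current][a] / total_f
               + (1 - self.alpha) * self.habits[a] / total_h
            for a in self.habits
        }
        return max(scores, key=scores.get)

p = NextActionPredictor()
for prev, nxt in [("unlock", "messages"), ("messages", "camera"),
                  ("unlock", "messages"), ("messages", "browser")]:
    p.observe(prev, nxt)
print(p.predict("unlock"))  # -> "messages"
```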
Why it matters: This is foundational work for AI assistants that could proactively suggest your next app, message, or task—the difference between tools that wait for commands and tools that anticipate needs.
What's On The Pod
Some new podcast episodes
The Cognitive Revolution — Try this at Home: Jesse Genet on OpenClaw Agents for Homeschool & How to Live Your Best AI Life