Claude Code Adds Cloud Scheduled Tasks
March 24, 2026
D.A.D. today covers 10 stories from 5 sources. What's New, What's Innovative, What's in the Lab, What's in Academe, and What's On The Pod.
D.A.D. Joke of the Day: My AI just wrote a 500-word apology for not being able to help, then helped anyway. It's learning from our corporate emails.
What's New
AI developments from the last 24 hours
AI Models Solve Previously Unsolved Math Problem, Marking Research Milestone
GPT-5.4 Pro has solved an open problem in mathematics—a genuine unsolved question, not a textbook exercise. The problem involved improving bounds on a sequence in hypergraph theory, and mathematician Will Brian, who contributed the problem to Epoch's FrontierMath benchmark, confirmed the solution eliminates an inefficiency in existing mathematical constructions. The work will be written up for publication. After the initial solve, Epoch tested other frontier models; Opus 4.6 max, Gemini 3.1 Pro, and GPT-5.4 xhigh also found solutions using Epoch's testing framework.
Why it matters: This is a threshold moment: AI models are no longer just solving known problems faster—they're generating novel mathematical results that advance human knowledge, raising questions about how AI-assisted research will reshape scientific discovery.
Discuss on Hacker News · Source: epoch.ai
Free Cheat Sheet Aims to Simplify Claude Code's Complex Interface
A developer built a free, printable cheat sheet compiling Claude Code's keyboard shortcuts, slash commands, CLI flags, and configuration options into a single HTML file. It auto-detects Mac or Windows and updates daily from official documentation. Community reception was positive, though one commenter called the need for such a resource 'a UX red flag' for Claude Code's interface complexity. Users also flagged a correction: pasting images on Mac uses CTRL+V, not CMD+V.
Why it matters: For teams adopting Claude Code, this is a useful onboarding shortcut—though the community critique hints at broader questions about whether AI coding tools are becoming too complex for casual users.
Discuss on Hacker News · Source: cc.storyfox.cz
Demo Reportedly Shows 400-Billion-Parameter AI Running Locally on iPhone 17 Pro
A demo circulating on social media shows an iPhone 17 Pro reportedly running a 400-billion-parameter language model—a size typically requiring data center hardware. Technical details are thin, but community discussion suggests it uses 'SSD streaming to GPU,' a technique from Apple's 2023 research paper on running large models with limited memory. Observers also note the model appears to use mixture-of-experts architecture, meaning only a fraction of parameters activate per query. Hacker News commenters called the feat 'impossible a year ago,' with some speculating Apple's hardware-software integration could give it a distribution edge in on-device AI.
Why it matters: If verified, this signals that flagship phones may soon run models rivaling cloud AI—potentially shifting competitive dynamics away from API providers and toward device makers who control the full stack.
Discuss on Hacker News · Source: twitter.com
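A mixture-of-experts forward pass can be sketched in a few lines: a router scores every expert, but only the top-k are evaluated, so most parameters stay idle on any given query. The weights below are random stand-ins; this is a generic illustration of the technique, not the demoed model:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x through only the top-k of len(experts) experts.

    x: (d,) input; gate_w: (n, d) router weights; experts: n matrices of
    shape (d, d) standing in for full expert networks. Only k experts are
    evaluated per query, so most parameters never load or run.
    """
    scores = gate_w @ x                          # (n,) routing logits
    top = np.argsort(scores)[-k:]                # indices of the k highest scores
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                                 # softmax over selected experts only
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n = 8, 16
x = rng.normal(size=d)
gate_w = rng.normal(size=(n, d))
experts = [rng.normal(size=(d, d)) for _ in range(n)]
y = moe_forward(x, gate_w, experts, k=2)         # 2 of 16 experts actually run
```

Pairing sparse routing with streaming weights from storage is what would, in principle, let a model far larger than device memory answer a single query.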
Developer Shares Workflow for Managing Five AI Coding Agents in Parallel
A developer shared their six-week workflow evolution with Claude Code, describing a shift from coding directly to managing multiple AI agents working in parallel. Key tactics: using git worktrees to run five simultaneous agent sessions, giving agents screenshot tools to verify their own UI changes, and automating PR creation. Build times dropped from ~60 seconds to under one second after switching compilers—a change that makes rapid AI iteration practical. The author frames the new role as 'manager of AI agents' rather than implementer.
Why it matters: This is one practitioner's playbook, not a controlled study—but it illustrates the emerging pattern of developers treating AI coding assistants as parallelizable workers rather than pair programmers, with infrastructure changes (fast builds, multiple worktrees) as enablers.
Discuss on Hacker News · Source: neilkakkar.com
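The worktree tactic can be sketched as a small helper that creates one checkout per agent session, using standard `git worktree add`; the directory layout and branch names here are invented for illustration:

```python
import subprocess

def worktree_commands(repo_dir, branches, run=False):
    """Build `git worktree add` commands so each agent session gets its
    own isolated checkout of the same repository. Layout is hypothetical:
    one sibling directory per branch, each on a fresh branch of that name.
    """
    cmds = [
        ["git", "-C", repo_dir, "worktree", "add", f"../{b}", "-b", b]
        for b in branches
    ]
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # actually create the worktrees
    return cmds

# Five parallel agent sessions, one worktree each (dry run by default).
cmds = worktree_commands("myrepo", [f"agent-{i}" for i in range(1, 6)])
```

Because each worktree has its own working directory but shares the repository's object store, five agents can edit, build, and commit simultaneously without clobbering one another.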
What's Innovative
Clever new use cases for AI
Mozilla Proposes Shared Knowledge Base So AI Coding Agents Can Learn From Each Other
Mozilla AI released 'cq' (short for 'colloquy'), an open-source tool it describes as 'Stack Overflow for AI coding agents.' The concept: create a shared knowledge base where coding agents can query past solutions and contribute new ones, rather than each agent repeatedly hitting the same problems in isolation. Mozilla argues this addresses a real limitation—agents working from stale training data can't learn from each other's recent discoveries. No performance data yet; this is an early-stage project. The framing is notable: Stack Overflow questions dropped from 200,000+ monthly in 2014 to under 4,000 in late 2025.
Why it matters: If agent-to-agent knowledge sharing works at scale, it could accelerate how quickly AI coding tools improve—though this remains unproven and raises questions about what knowledge gets shared and who controls it.
Discuss on Hacker News · Source: blog.mozilla.ai
What's in the Lab
New announcements from major AI labs
OpenAI Outlines Safety Principles for Sora 2 Video Platform
OpenAI published a detailed safety overview for Sora 2, its video generation platform that combines a generation model with a social feed. Key measures: all generated videos carry C2PA metadata and visible, dynamically moving watermarks with the creator's name. Users can upload images of real people for image-to-video generation after attesting to consent, with stricter guardrails for children. A 'Characters' feature gives users control over their likeness—only they decide who can use it, and they can see and delete any video featuring their character. Teen accounts get a filtered feed, adults cannot DM teens, and scroll limits apply by default. On the audio side, Sora blocks attempts to imitate living artists or existing works and scans generated speech transcripts for policy violations.
Why it matters: Unlike many AI safety announcements, this one includes concrete, specific protections—particularly around likeness consent, teen safety, and content provenance. The details suggest OpenAI is treating Sora as a social platform with platform-level trust and safety infrastructure, not just a generation tool.
Anthropic Launches Cloud Scheduled Tasks for Claude Code
Anthropic released scheduled tasks for Claude Code on the web, letting users set up recurring AI agent jobs that run on Anthropic's cloud infrastructure—no local machine required. Users write a prompt, connect GitHub repositories, and set a schedule (hourly, daily, weekdays, or weekly). Each run clones the repo fresh, executes autonomously, and creates a session where users can review changes and open pull requests. Tasks can connect to external services like Slack, Linear, and Google Drive through MCP connectors. Claude is restricted to pushing only to claude/-prefixed branches by default, though users can loosen this. The feature is available to Pro, Max, Team, and Enterprise users. Separately, Anthropic also launched 'Dispatch' in its Cowork desktop app, letting users assign tasks from their phone that Claude executes on their desktop using local files, apps, and computer use—though the company's safety disclosure notes that chaining a mobile agent to a desktop agent 'creates a chain where mistakes or malicious content can cascade into actions that are difficult or impossible to undo.'
Why it matters: This moves AI coding assistants from on-demand tools to always-on autonomous workers—reviewing PRs each morning, auditing dependencies weekly, or syncing docs after merges, all without a human present. It's a significant step toward the 'AI teammate' model that labs have been building toward.
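The default branch restriction amounts to a simple prefix guard. A minimal sketch (the function is hypothetical; only the claude/ default comes from the announcement):

```python
def allowed_push(branch, prefixes=("claude/",)):
    """Return True if an agent may push to this branch. The claude/ default
    mirrors the announced restriction; users can loosen it by extending the
    prefix list. This is an illustration, not Anthropic's code.
    """
    return any(branch.startswith(p) for p in prefixes)

allowed_push("claude/fix-login")   # True: under the allowed prefix
allowed_push("main")               # False: protected by default
```

Keeping autonomous commits behind a namespace like this means every scheduled run still ends in a human-reviewed pull request rather than a direct change to main.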
What's in Academe
New papers on AI and its effects from researchers
Speech Analysis AI Detects Parkinson's Symptoms Across Multiple Languages
Researchers developed a technique to detect dysarthria—a speech impairment common in Parkinson's disease—across multiple languages using a single AI model. The method aligns speech representations from different languages into a common space, allowing a model trained on one language to work on others. Tested on Czech, German, and Spanish patient recordings, the approach showed improved detection accuracy compared to language-specific models. The technique addresses a practical barrier: most medical speech AI is trained on English data, limiting its use elsewhere.
Why it matters: Healthcare organizations serving multilingual populations could eventually screen for neurological conditions without needing language-specific AI models for each patient group—potentially expanding early Parkinson's detection to underserved language communities.
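One standard way to align two embedding spaces is orthogonal Procrustes, which finds the rotation that best maps paired vectors from one space onto the other; the paper's actual alignment method may differ, and the paired 'language' embeddings below are synthetic:

```python
import numpy as np

def procrustes_align(src, tgt):
    """Learn an orthogonal map W minimizing ||src @ W - tgt||_F, so
    embeddings from one language land in the other's space.
    Closed form: W = U @ Vt from the SVD of src.T @ tgt.
    """
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(1)
d = 16
rot, _ = np.linalg.qr(rng.normal(size=(d, d)))  # hidden rotation between spaces
tgt = rng.normal(size=(100, d))                 # paired embeddings, language A
src = tgt @ rot.T                               # same utterances, language B
W = procrustes_align(src, tgt)
err = np.abs(src @ W - tgt).max()               # ~0: the spaces are aligned
```

Once such a map exists, a dysarthria classifier trained on embeddings from one language can score mapped embeddings from another without retraining.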
Stanford Heart-Imaging AI Nearly Triples GPT-5 and Gemini Accuracy in Diagnostic Tests
Stanford researchers developed MARCUS, a specialized AI system for interpreting cardiac imaging—ECGs, echocardiograms, and cardiac MRIs. The system uses coordinated expert models trained on 13.5 million medical images. In testing, MARCUS achieved 87-91% accuracy on ECGs and 85-88% on cardiac MRI, outperforming GPT-5 and Gemini 2.5 Pro by 34-45 percentage points. On cases requiring multiple imaging types, MARCUS hit 70% accuracy versus 22-28% for frontier models—nearly triple the performance. The researchers released models, code, and benchmarks as open source.
Why it matters: This signals that specialized, domain-trained AI systems may dramatically outperform general-purpose models in high-stakes medical interpretation—a pattern healthcare organizations should watch as they evaluate AI diagnostic tools.
Research Suggests Training Direction Matters More Than Intensity for AI Reasoning
New research suggests that when AI models learn to reason better through reinforcement learning, what matters most isn't how strongly they update their predictions—it's the direction of those updates. The paper introduces methods that exploit this insight: one improves reasoning accuracy at inference time without retraining, another boosts performance during training. The researchers didn't release specific benchmark numbers, but claim the approach works across multiple models and tasks.
Why it matters: This is foundational research—if validated, it could make reasoning-focused AI training more efficient, potentially leading to better reasoning capabilities in future model releases.
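A toy way to see 'direction over strength' is a normalized-gradient step, which keeps the update's direction and discards its magnitude; this is a generic sketch in the spirit of sign/normalized-gradient methods, not the paper's algorithm:

```python
import numpy as np

def direction_only_step(params, grad, lr=0.1):
    """Update along the gradient's *direction* only, discarding its
    magnitude: every step has the same fixed length lr.
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return params
    return params - lr * grad / norm

# Minimize f(x) = ||x||^2; the gradient is 2x, but only its direction is used.
x = np.array([3.0, 4.0])              # distance 5 from the minimum
for _ in range(40):
    x = direction_only_step(x, 2 * x, lr=0.1)
# Each step moves exactly lr toward the origin: distance is now 5 - 40*0.1 = 1.
```

The update trajectory is identical no matter how the gradient is scaled, which is one concrete sense in which direction, not strength, determines where the model ends up.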
What's On The Pod
Some new podcast episodes
AI in Business — From AI Experiments to Enterprise Value Driving Real Business ROI - with Dan Diasio of EY
How I AI — How Microsoft's AI VP automates everything with Warp | Marco Casalaina
The Cognitive Revolution — Your Agent's Self-Improving Swiss Army Knife: Composio CTO Karan Vaidya on Building Smart Tools