Manifesto Proposes Rules For Agentic AI Within Businesses
March 21, 2026
D.A.D. today covers 9 stories from 3 sources. What's New, What's Innovative, What's in Academe, and What's On The Pod.
D.A.D. Joke of the Day: My company replaced HR with AI. Now when I ask for a raise, I get a thoughtful 800-word response that somehow means no.
What's New
AI developments from the last 24 hours
Open-Source AI Coding Tool Claims 5 Million Users, But Privacy Concerns Emerge
OpenCode, an open-source AI coding agent, has launched a desktop app in beta for macOS, Windows, and Linux. The tool claims 5 million monthly developers and 120,000 GitHub stars, working across terminal, IDE, or desktop with support for 75+ LLM providers including Claude, GPT, Gemini, and local models. The project emphasizes privacy, saying it doesn't store code or context data. Community reaction is mixed: some users praise its flexibility and subagent system, while others flag that telemetry is on by default and requires code changes to disable. One user reports the tool is blocked from Anthropic's API.
Why it matters: Another serious open-source contender enters the AI coding assistant space, giving teams who want provider flexibility or on-premise deployment an alternative to commercial tools—though the telemetry defaults and reported API restrictions warrant scrutiny before enterprise adoption.
Discuss on Hacker News · Source: opencode.ai
Fitness App Data Revealed French Aircraft Carrier's Real-Time Location
Le Monde journalists pinpointed the exact real-time location of France's only aircraft carrier by checking a naval officer's public Strava profile. The sailor's morning run—seven kilometers around the deck—revealed the Charles de Gaulle was approximately 100 kilometers off Turkey's coast on March 13. This echoes a 2018 incident when US soldiers inadvertently exposed secret base locations through the same fitness app. Online discussion noted that while satellites can track carriers anyway, fitness apps provide adversaries free, real-time intelligence without any surveillance infrastructure.
Why it matters: Consumer apps that seem harmless individually can aggregate into serious operational security failures—a lesson that extends beyond militaries to any organization with sensitive locations or movements.
Discuss on Hacker News · Source: lemonde.fr
ArXiv to Split From Cornell After 34 Years, Raising Governance Questions
ArXiv, the open-access preprint server that hosts over 2 million research papers in physics, mathematics, computer science, and AI, has announced it will separate from Cornell University to become an independent entity. The repository was founded in 1991 at Los Alamos National Laboratory and has operated under Cornell's umbrella since 2001. Community reaction is divided: some see independence as a positive step for a critical research institution, while others worry about potential risks, including the possibility of eventual for-profit conversion. Some commenters speculate academic publishers could be involved, though no evidence supports this.
Why it matters: ArXiv is where AI researchers publish breakthrough papers before peer review—its governance structure affects how quickly the field shares knowledge and whether access remains free.
Discuss on Hacker News · Source: science.org
What's Innovative
Clever new use cases for AI
Developers Build Bluesky Client in 1957-Era Programming Language
A group of developers built a terminal-based Bluesky client written entirely in Fortran—a language dating to 1957 that's still widely used in scientific computing but rarely seen in modern web applications. The project connects to Bluesky's AT Protocol, letting users browse and post from the command line. Community reaction on Hacker News ranged from amused appreciation ('the world is a better place for this app') to genuine curiosity about Fortran's capabilities for modern network programming.
Why it matters: This is a hobbyist curiosity, not a workflow tool—but it demonstrates that Bluesky's open protocol is attracting experimental development across an unusually wide range of programming communities.
Discuss on Hacker News · Source: github.com
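Whatever the implementation language, an AT Protocol client ultimately comes down to a handful of authenticated XRPC calls. As a rough illustration (in Python rather than Fortran), publishing a post means building a request body for the `com.atproto.repo.createRecord` endpoint; the endpoint and field names below come from Bluesky's public lexicon, while the handle and text are placeholders:

```python
import json
from datetime import datetime, timezone

def build_post_request(repo: str, text: str) -> dict:
    """Build the JSON body for com.atproto.repo.createRecord,
    the XRPC call an AT Protocol client uses to publish a post."""
    return {
        "repo": repo,  # the posting account (handle or DID)
        "collection": "app.bsky.feed.post",
        "record": {
            "$type": "app.bsky.feed.post",
            "text": text,
            "createdAt": datetime.now(timezone.utc).isoformat(),
        },
    }

body = build_post_request("alice.bsky.social", "Hello from the terminal")
print(json.dumps(body, indent=2))
```

In a real client this body would be POSTed to the user's PDS with an access token from `com.atproto.server.createSession`; the sketch stops at payload construction.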
What's in Academe
New papers on AI and its effects from researchers
Training Method Targets AI's Blind Spot: The Hundreds of Underserved Languages
Researchers introduced Variable Entropy Policy Optimization (VEPO), a training technique designed to improve AI performance on low-resource languages—the hundreds of languages with limited digital text for training. The method uses reinforcement learning to address how models break words into pieces (often inefficiently for non-English languages) and how they balance between literal and natural-sounding translations. The team reports improvements across 90 translation directions in standard benchmarks, though the paper doesn't provide specific numbers in its abstract.
Why it matters: This is academic research at an early stage, but it signals continued progress on AI's uneven language coverage—relevant for global organizations working across linguistic markets.
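The abstract doesn't spell out VEPO's objective, but entropy-regularized policy optimization in general maximizes expected reward plus an entropy bonus; the "variable entropy" in the name suggests the bonus weight is not fixed. A generic form, for illustration only (the paper's actual formulation may differ):

\[
J(\theta) \;=\; \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\big[ R(x, y) \big] \;+\; \lambda(x)\, \mathcal{H}\big(\pi_\theta(\cdot \mid x)\big)
\]

Here \(R\) would score translation quality and \(\lambda(x)\) would vary the entropy pressure, trading off literal versus natural-sounding output per input.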
Security Challenge Tests Whether "Synthetic" AI Data Actually Protects Privacy
A security research challenge at the SaTML 2025 conference tested whether AI-generated synthetic data—the kind companies create to share datasets without exposing real customer information—can actually keep that information private. Researchers developed new attack methods to determine whether specific individuals' data was used to train diffusion models that generate fake tabular data. The challenge focused on both single-table and multi-relational database scenarios, common in enterprise settings.
Why it matters: Companies increasingly use synthetic data to sidestep privacy regulations while preserving data utility; this research tests whether that approach actually holds up under adversarial conditions.
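The attacks in such challenges vary, but a classic baseline for synthetic tabular data is the distance-to-closest-record test: if a candidate's nearest synthetic neighbor is suspiciously close, the generator may have memorized that record. A minimal sketch of the idea, with made-up data standing in for a real dataset:

```python
import numpy as np

def membership_scores(candidates: np.ndarray, synthetic: np.ndarray) -> np.ndarray:
    """Distance-to-closest-record attack: score each candidate row by
    (negated) distance to its nearest synthetic row. Higher score means
    more likely the record was in the generator's training set."""
    # Pairwise Euclidean distances, shape (n_candidates, n_synthetic)
    diffs = candidates[:, None, :] - synthetic[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return -dists.min(axis=1)

rng = np.random.default_rng(0)
members = rng.normal(0.0, 1.0, (50, 4))        # records used for training
non_members = rng.normal(0.0, 1.0, (50, 4))    # held-out records
# A leaky generator: synthetic rows are training rows plus small noise
synthetic = members + rng.normal(0.0, 0.05, members.shape)

scores_in = membership_scores(members, synthetic)
scores_out = membership_scores(non_members, synthetic)
print(scores_in.mean() > scores_out.mean())  # True: members score higher
```

A well-protected generator should make the two score distributions indistinguishable; a measurable gap is exactly the kind of leakage the SaTML challenge probes for.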
AI Agents Underperform Humans on Data Science Tasks Unless Paired With People
A benchmark called AgentDS tested AI agents on real-world data science tasks across six industries, including healthcare, manufacturing, and retail banking. The results are humbling: AI-only systems performed at or below the median of the 80 human participants across 29 teams. The strongest results came from human-AI collaboration, not autonomous AI agents working alone. The benchmark includes 17 challenges designed to require domain-specific reasoning, exposing a gap between general AI capabilities and specialized business contexts.
Why it matters: For organizations expecting AI to handle complex, industry-specific analysis autonomously, this suggests the near-term value is in augmenting skilled workers rather than replacing them.
Framework Proposes Guardrails for AI Agents Running Business Processes
Researchers have published a manifesto paper laying out theoretical foundations for "Agentic Business Process Management"—a framework for governing AI agents that execute business processes autonomously. The paper proposes four core capabilities: constrained autonomy (agents operate within defined boundaries), explainability, conversational interaction, and self-modification. This is conceptual groundwork, not a working system—no benchmarks or implementations are included. The framework attempts to bridge traditional process management with the reality that organizations are increasingly deploying AI agents that make decisions independently.
Why it matters: As companies move from chatbots to autonomous AI agents handling real workflows, the question of how to govern them—ensuring they stay in bounds, explain themselves, and remain auditable—becomes urgent; this paper signals that BPM researchers are now taking that problem seriously.
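The paper stops at principles, but "constrained autonomy" has an obvious minimal form: a hard boundary check between an agent's proposed action and its execution, with every proposal logged for later audit. A hypothetical sketch (the action names and policy structure are illustrative, not from the paper):

```python
class PolicyViolation(Exception):
    """Raised when an agent proposes an action outside its boundary."""

class ConstrainedAgent:
    """Wrap agent actions so each one is logged and checked against an
    explicit allowlist before it runs: constrained autonomy plus a
    minimal audit trail."""

    def __init__(self, allowed_actions, handlers):
        self.allowed = set(allowed_actions)
        self.handlers = handlers   # action name -> callable
        self.audit_log = []        # every proposal, allowed or not

    def execute(self, action, **kwargs):
        self.audit_log.append((action, kwargs))   # log before deciding
        if action not in self.allowed:
            raise PolicyViolation(f"action {action!r} outside boundary")
        return self.handlers[action](**kwargs)

agent = ConstrainedAgent(
    allowed_actions={"approve_refund"},
    handlers={
        "approve_refund": lambda amount: f"refunded {amount}",
        "delete_customer": lambda cid: "deleted",   # defined but not allowed
    },
)
print(agent.execute("approve_refund", amount=20))
# agent.execute("delete_customer", cid=7) would raise PolicyViolation
```

The explainability and self-modification capabilities the paper proposes are much harder than this; the point of the sketch is only that the boundary check and the audit trail are cheap to enforce mechanically.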
Dataset Could Help Hospitals Identify Patients Needing Simpler Explanations
Researchers have released HEALIX, the first publicly available dataset for detecting patient health literacy levels from clinical notes. The dataset contains 589 annotated clinical notes labeled as low, normal, or high health literacy, drawn from real medical records. The team tested several open-source language models on the task using various prompting strategies, though specific accuracy numbers weren't disclosed. The goal: enable healthcare systems to automatically flag when patients may need simplified explanations or additional support.
Why it matters: Health systems exploring AI for patient communication could use this kind of detection to tailor discharge instructions, medication guidance, and follow-up materials—potentially reducing readmission rates tied to patient confusion.
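Since the paper evaluates open-source models via prompting, the task reduces to a classification prompt over a clinical note. A hypothetical template (the wording is illustrative; the paper's actual prompts aren't given in the abstract, only the three labels):

```python
LABELS = ("low", "normal", "high")

def literacy_prompt(note: str) -> str:
    """Build a zero-shot prompt asking a language model to classify
    a patient's health literacy from a clinical note."""
    return (
        "Read the clinical note below. Based on how the patient is "
        "described (understanding of diagnoses, medication adherence, "
        "questions asked), classify the patient's health literacy as "
        f"exactly one of: {', '.join(LABELS)}.\n\n"
        f"Clinical note:\n{note}\n\n"
        "Answer with a single label."
    )

print(literacy_prompt("Patient reports confusion about insulin dosing schedule."))
```

In practice the model's reply would be parsed back to one of the three labels and checked against the HEALIX annotations to score a prompting strategy.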
What's On The Pod
Some new podcast episodes
AI in Business — Why Ensemble Architectures Win Against Real-Time Voice Risk - with Mike Pappas of Modulate
The Cognitive Revolution — Zvi's Mic Works! Recursive Self-Improvement, Live Player Analysis, Anthropic vs DoW + More!