Where Is Agentic AI Headed? YC's New Funding List Offers Clues
April 28, 2026
D.A.D. today covers 13 stories from 5 sources. What's New, What's Innovative, What's in the Lab, and What's in Academe, plus What's Happening on Capitol Hill and What's On The Pod.
D.A.D. Joke of the Day: My AI keeps giving me "95% confidence" answers. So does my weatherman. I've started carrying an umbrella to meetings.
What's New
AI developments from the last 24 hours
YC Funding Targets: Company Brains, AI-Centric Service Providers, And New Chips
Where is agentic AI headed? A new list offers some breadcrumbs. Y Combinator, the Silicon Valley accelerator that funds about 600 startups a year, has published its Summer 2026 "Requests for Startups" — the ideas its partners explicitly want founders to tackle. The list is more than a funding signal: YC's RFS tends to lead what gets built, and what gets built tends to shape what companies will be using a few years from now.

The most striking theme is what YC partner Gustaf Alströmer calls "AI-native service companies" — not software tools that help humans do their jobs, but companies that just do the work. The targets named: insurance brokerage, accounting and tax, audit, compliance, healthcare administration. The argument: spending on services is "many times larger" than spending on software, and most of these services are already outsourced, which makes them easier to replace with AI.

Several other items reinforce the trajectory: a "company brain" startup that turns scattered organizational knowledge (emails, Slack, tickets, databases) into a structured skills file agents can execute on; an "AI operating system" that makes a company's whole workflow queryable in real time; and "software for agents" — rebuilding every major category of consumer and enterprise software with machine-readable interfaces (APIs, MCPs, CLIs) on the premise, in YC partner Aaron Epstein's words, that "the next trillion users on the internet won't be people, they'll be AI agents." Round it out with agent-specific inference chips (current GPUs hit only 30-40% utilization on agent workloads) and the observation that Fortune 100 buyers now routinely sign multi-million-dollar deals with two-person AI startups within their first year of existence.

The list also reaches well beyond knowledge work, into precision agriculture (computer vision identifying individual weeds in real time), counter-swarm drone defense, AI-personalized medicine drawing on genome scans and wearables data, and even industrial activity on the moon — a reminder of how broadly YC sees agentic AI's next wave landing, well past the office.
Why it matters: Taken together, the RFS list makes visible a pattern that's hard to see in any single AI news story. Three implications for professionals: (1) White-collar services that are already routinely outsourced — tax, audit, compliance, claims handling, healthcare administration — are squarely in the next AI-disruption window, and the disruptors won't be SaaS vendors but AI-native firms that bill for outcomes, not seats. (2) Your company's domain knowledge — currently scattered across email threads, Slack, support tickets, and senior employees' heads — is the bottleneck for AI automation, and somebody is going to build the platform that consolidates it. (3) The internet itself is being rebuilt for software agents to navigate, not humans. The companies that ship machine-readable interfaces first will catch the wave; those still designing for human button-clicks will pay for the migration later.
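What does a "machine-readable interface" actually look like? Below is a minimal Python sketch, standard library only: a tool manifest an agent can discover, plus an endpoint it can call with structured JSON instead of clicking through a web form. The tool name, parameter schema, and pricing rule are all hypothetical, invented for illustration; real agent-facing interfaces (MCP servers, function-calling APIs) layer on authentication, versioning, and richer schemas.

```python
# A hypothetical sketch of "software for agents": one capability exposed
# as a discoverable, machine-readable tool. All names are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tool manifest: a name, a description, and a typed parameter schema
# that an agent can parse without any human-oriented UI.
TOOL_MANIFEST = {
    "name": "quote_insurance_policy",
    "description": "Return a premium quote for a small-business policy.",
    "parameters": {
        "type": "object",
        "properties": {
            "industry": {"type": "string"},
            "annual_revenue_usd": {"type": "number"},
        },
        "required": ["industry", "annual_revenue_usd"],
    },
}

class AgentFacingAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Agents discover capabilities by fetching the manifest.
        if self.path == "/tools":
            body = json.dumps([TOOL_MANIFEST]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        # Agents invoke the tool with structured JSON, not button clicks.
        if self.path == "/tools/quote_insurance_policy":
            length = int(self.headers.get("Content-Length", 0))
            args = json.loads(self.rfile.read(length))
            premium = 0.002 * args["annual_revenue_usd"]  # toy pricing rule
            body = json.dumps({"annual_premium_usd": round(premium, 2)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Try: curl localhost:8080/tools
    HTTPServer(("localhost", 8080), AgentFacingAPI).serve_forever()
```

The design point is the manifest: an agent that can fetch and parse it needs no screenshots, no DOM scraping, and no human in the loop to use the service.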
DeepMind Partners With South Korea to Build AI Research Campus
Google DeepMind announced a partnership with South Korea's Ministry of Science and ICT, including a new AI Campus in Seoul for joint research with Korean academia. The collaboration will focus on applying DeepMind's scientific AI models—including AlphaFold, already used by 85,000 Korean researchers—to life sciences, climate, and energy research. Google cited Korea as having the fastest-growing AI adoption rate among the world's top 30 economies. The deal also includes talent development initiatives, building on the 50,000 AI Essentials scholarships Google has already provided in the region.
Why it matters: This signals DeepMind's push to embed its research tools in national science infrastructure, potentially creating long-term dependencies while positioning Google as a partner to governments on AI-driven scientific discovery.
Microsoft-OpenAI Exclusivity Ends, Clearing Path for $50B Amazon Cloud Deal
Microsoft and OpenAI have reportedly restructured their partnership, ending the revenue-sharing arrangement and removing the exclusivity provisions that bound the two companies together. The original deal gave Microsoft exclusive cloud rights to OpenAI's models in exchange for billions in infrastructure investment. The change has an immediate consequence: it clears the way for a $50 billion cloud deal Amazon and OpenAI signed in March, which made Amazon Web Services the exclusive third-party cloud provider for "Frontier," OpenAI's new enterprise platform for building and running AI agents. Microsoft had reportedly been weighing legal action over that arrangement, arguing it violated its own exclusivity with OpenAI. The restructuring announced this week resolves that dispute and leaves OpenAI free to deepen its AWS relationship. Community reaction has been cautiously positive — some note this removes a competitive handicap versus Anthropic, which has long been able to partner with any cloud provider.
Why it matters: Enterprises may soon be able to access OpenAI models through providers other than Azure — expanding procurement options and potentially creating pricing competition among cloud vendors. The bigger picture: the cloud-lab alliances that defined the past three years (OpenAI-Microsoft, Anthropic-Google-Amazon) are becoming less exclusive, with implications for how AI capabilities get distributed and priced.
Discuss on Hacker News · Source: bloomberg.com
ChatGPT Clears Federal Security Hurdle, Opening Door to Government Contracts
OpenAI has received FedRAMP Moderate authorization for ChatGPT Enterprise and its API, clearing a key hurdle for U.S. federal agencies to adopt its AI services. FedRAMP (Federal Risk and Authorization Management Program) is the government's security certification standard for cloud services—Moderate authorization covers systems handling sensitive but unclassified data, which includes most federal workloads. This puts OpenAI in position to compete for government contracts alongside Microsoft's Azure OpenAI Service, which already holds FedRAMP authorization.
Why it matters: Federal procurement is a massive market, and this certification signals OpenAI is serious about competing for government business—potentially reshaping how agencies deploy AI tools.
GitHub Copilot Switches to Pay-Per-Use Credits in 2026
GitHub will shift all Copilot plans to usage-based billing on June 1, 2026, replacing its current premium request model with 'GitHub AI Credits' tied to token consumption. Monthly subscription prices stay the same—$10 for Pro, $39 for Pro+, $19/user for Business—but now represent credit allotments rather than flat access. Basic code completions remain unlimited; credits apply to chat, agentic coding, and multi-model features. GitHub says the old model became unsustainable as AI coding assistants handle longer, more complex tasks. A billing preview tool arrives in May to help teams estimate costs.
Why it matters: Teams relying heavily on Copilot's advanced features—especially agentic coding sessions that chain multiple AI calls—should audit their usage before the switch; light users may see no change, but power users could face overages.
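For teams trying to get ahead of the June switch, the underlying arithmetic is simple enough to sketch now, before GitHub's billing preview tool arrives in May. The per-token rate below is a made-up placeholder (GitHub has not published token-to-credit pricing in this announcement); the point is the order-of-magnitude gap between light chat use and agentic sessions:

```python
# Back-of-envelope credit estimator for token-based billing.
# HYPOTHETICAL_CREDITS_PER_1K_TOKENS is invented for illustration;
# it is not GitHub's actual rate.
HYPOTHETICAL_CREDITS_PER_1K_TOKENS = 0.05

def estimate_monthly_credits(sessions_per_day: int, tokens_per_session: int,
                             workdays: int = 21) -> float:
    """Rough monthly credit consumption for one developer."""
    monthly_tokens = sessions_per_day * tokens_per_session * workdays
    return monthly_tokens / 1000 * HYPOTHETICAL_CREDITS_PER_1K_TOKENS

# A single chat turn might use a few thousand tokens; an agentic coding
# session that chains many model calls can use tens of thousands.
print(estimate_monthly_credits(4, 2_000))   # light chat use: ~8 credits
print(estimate_monthly_credits(4, 60_000))  # agentic sessions: ~252 credits
```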
Discuss on Hacker News · Source: github.blog
Staring at Walls May Beat Apps for Restoring Focus, Blogger Claims
A blog post advocates for a focus technique: staring at a wall for 5-10 minutes using peripheral vision and 'mind blanking' to combat information overload and restore concentration. The author cites a 2012 study claiming people receive 34 GB of information daily, extrapolating to 87 GB today. Evidence is purely anecdotal. Community reaction has been skeptical—commenters note this sounds like 'reinvented mindfulness' and criticized the cited research as measuring media consumption, not actual cognitive processing.
Why it matters: This isn't AI news—it's a personal productivity tip with no rigorous evidence, though it highlights ongoing interest in low-tech solutions to digital overwhelm.
Discuss on Hacker News · Source: alexselimov.com
What's Innovative
Clever new use cases for AI
Why a Chatbot Frozen in 1930 Might Actually Be Useful
Researchers have built a 13-billion-parameter language model trained exclusively on text published before 1931 — essentially an AI that "knows" nothing about the modern world. No WWII, no UN, no civil rights movement, no internet, no smartphones. The team is running a live stream where Claude Sonnet prompts the vintage model to explore its knowledge boundaries. Early findings: the model shows measurably higher surprise at events from the 1950s-60s (as expected), and while it can handle simple coding tasks given examples, it dramatically underperforms models trained on modern web data. Performance improves slowly with scale.
Why it matters: This is more than an idle curiosity. A model with a hard pre-1930 cutoff is a useful tool for several real problems: novelists, screenwriters, and historians can use it to check whether period dialogue or reasoning is anachronistic; AI-safety researchers gain a controlled environment for studying how models hallucinate or extrapolate beyond their training data — central to AI reliability questions; and the project itself is a vivid demonstration to non-technical audiences of how much "what AI knows" depends on what it was trained on. The deeper research question: how would a model reason about contemporary moral or political problems without exposure to post-WWII frameworks like the Universal Declaration of Human Rights, post-Holocaust ethics, or post-colonial discourse? The answer would say something about how much of contemporary AI's worldview is a contingent product of recent training data versus timeless reasoning.
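"Surprise" here almost certainly means surprisal: the negative log-likelihood a model assigns to text, where higher values mean the text is less predictable given the training data. A minimal sketch of such a probe, using GPT-2 from Hugging Face purely as a stand-in (the project's actual model and evaluation pipeline may differ):

```python
# Sketch: measure mean per-token surprisal (negative log-likelihood)
# under a causal language model. GPT-2 is a stand-in here, not the
# pre-1931 model described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_surprisal(text: str) -> float:
    """Average negative log-likelihood per token, in nats."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns mean cross-entropy
        # over next-token predictions, i.e., mean surprisal.
        loss = model(input_ids=ids, labels=ids).loss
    return loss.item()

# A pre-1931 model should score the second sentence as far more
# surprising than the first.
print(mean_surprisal("The telegraph carried news across the Atlantic."))
print(mean_surprisal("The astronauts landed on the moon in 1969."))
```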
Discuss on Hacker News · Source: talkie-lm.com
What's in the Lab
New announcements from major AI labs
Google and Kaggle Offer Free Five-Day Course on Building AI Agents
Google and Kaggle are offering a free five-day online course on building AI agents, running June 15-19, 2026. The course covers building agents with natural language as the primary programming interface—an approach practitioners call 'vibe coding'—and includes hands-on projects taking participants from fundamentals to production-ready systems. Google says its previous intensive course reached over 1.5 million learners. Registration is open now.
Why it matters: Free structured training from a major AI lab offers a low-risk way for teams to upskill on agent-based workflows, which are becoming central to how enterprises deploy AI tools.
What's in Academe
New papers on AI and its effects from researchers
Psychologists Create Validated Scales for Measuring Human-AI Teamwork
Researchers have created two validated psychological scales for measuring how well humans and AI systems work together. The Perceived Cooperativity Scale and Teaming Perception Scale—tested across 409 participants in scenarios ranging from card games to LLM interactions—can reliably distinguish between high-quality and low-quality AI cooperation partners. The scales draw on joint activity theory and evolutionary cooperation theory to assess subjective teamwork quality.
Why it matters: As AI tools become embedded in team workflows, organizations will need standardized ways to evaluate whether these tools actually improve collaboration—these scales offer a research-backed starting point for that assessment.
LLMs Could Help Self-Driving Cars Interpret Traffic Laws Automatically
A new research paper proposes using LLMs to automatically extract legal requirements for autonomous vehicles from traffic laws—a task traditionally requiring painstaking manual work by legal and engineering teams. The approach grounds the AI's reasoning in a structured taxonomy of driving scenarios rather than letting it interpret laws freely. Tested on Chinese traffic regulations across nearly 6,000 driving scenarios, the method improved accuracy in matching laws to situations by 29% and boosted correct identification of mandatory and prohibited behaviors by 37-38%.
Why it matters: If the approach generalizes to other jurisdictions, it could accelerate how quickly AV companies adapt their systems to different regulatory environments—a significant bottleneck as autonomous vehicles expand globally.
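The paper's core move, forcing the model to classify against a fixed taxonomy instead of interpreting statutes freely, is easy to sketch. Everything below (the taxonomy entries, the prompt wording, and the call_llm placeholder) is hypothetical and far simpler than the paper's actual pipeline:

```python
# Sketch: taxonomy-grounded extraction of legal requirements.
# The taxonomy, prompt, and call_llm() are illustrative placeholders.
import json

SCENARIO_TAXONOMY = [
    "unsignalized_intersection",
    "pedestrian_crossing",
    "highway_merge",
    "school_zone",
]

PROMPT_TEMPLATE = """You are extracting legal requirements for autonomous vehicles.
Classify the clause below against this fixed scenario taxonomy: {taxonomy}.
Respond with JSON only, in this shape:
{{"scenario": "<one taxonomy entry>", "modality": "mandatory" or "prohibited", "behavior": "<short description>"}}

Clause: {clause}
"""

def extract_requirement(clause: str, call_llm) -> dict:
    """call_llm is any function that sends a prompt and returns model text."""
    prompt = PROMPT_TEMPLATE.format(
        taxonomy=", ".join(SCENARIO_TAXONOMY), clause=clause
    )
    return json.loads(call_llm(prompt))

# For a clause like "Vehicles must stop when a pedestrian is in a marked
# crosswalk," the expected structured output would be:
# {"scenario": "pedestrian_crossing", "modality": "mandatory",
#  "behavior": "stop for pedestrians in marked crosswalks"}
```

Constraining the output space this way is what makes the results auditable: every extracted requirement maps to a known scenario that a test suite can cover.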
VR Games May Pose Unique Safety Risks for Children, Researchers Warn
Researchers examined how extended reality (XR) game design—think VR headsets and immersive environments—may create unique safety risks for children that existing ethical frameworks don't adequately address. The study analyzed player forums, developer discussions, and interviews with young players to identify harmful design patterns specific to immersive gaming. The researchers argue that current safety approaches, built for traditional screens, fail to account for how XR's physical immersion and real-time social dynamics create new vulnerabilities. They're calling for child-centered design standards tailored to these environments.
Why it matters: As Meta, Apple, and others push VR/AR headsets into homes, this signals growing scrutiny of whether platform safety policies designed for smartphones and PCs are adequate for immersive tech increasingly used by kids.
Your Personality Shapes How You Respond to AI Recommendations
A user study (N=100) found that multi-agent AI systems—where multiple LLM components collaborate rather than a single model responding—produced movie recommendations users perceived as more novel and diverse. But the research surfaced a more interesting finding: personality traits significantly shaped how people responded. Conscientious users rated diverse recommendations more favorably; extraverts perceived less diversity from the same outputs. Users skeptical of AI explored recommendations less broadly, while those with prior AI experience engaged more.
Why it matters: As AI recommendations spread across enterprise tools—from content curation to product suggestions—this suggests personalization may need to account for user psychology, not just preferences.
AI Bias Compounds at Identity Intersections, 5,300-Incident Study Finds
A large-scale analysis of 5,300 reports from the AI Incident Database found that AI harms compound at identity intersections—with harm rates up to three times higher for groups like adolescent girls, lower-class people of color, and upper-class political elites. The study challenges frameworks that assess bias along single dimensions (race OR gender). Surprisingly, age and political identity appeared in documented harms at rates comparable to race and gender, suggesting current fairness audits may have blind spots. Researchers used an LLM-based rubric to classify 1,513 harmed subjects with 98% accuracy.
Why it matters: For organizations conducting AI risk assessments or bias audits, this suggests single-axis testing (checking for racial bias, then gender bias separately) may miss the highest-harm cases—a methodological gap with compliance and liability implications.
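A toy example shows why single-axis audits miss the compounding. The numbers below are invented purely for illustration (they are not the study's data): each single-axis harm rate looks only moderately elevated, while the intersection is far worse than either axis suggests.

```python
# Toy data, invented for illustration: each record is
# (gender, age_group, harmed?).
records = (
    [("female", "adult", False)] * 45 + [("female", "adult", True)] * 5
    + [("male", "adult", False)] * 45 + [("male", "adult", True)] * 5
    + [("male", "adolescent", False)] * 45 + [("male", "adolescent", True)] * 5
    + [("female", "adolescent", False)] * 20 + [("female", "adolescent", True)] * 30
)

def harm_rate(rows):
    """Fraction of records flagged as harmed."""
    return sum(harmed for _, _, harmed in rows) / len(rows)

# Single-axis audits: each axis alone looks moderately elevated.
female = [r for r in records if r[0] == "female"]
adolescent = [r for r in records if r[1] == "adolescent"]
print(f"overall:           {harm_rate(records):.0%}")     # ~22%
print(f"female:            {harm_rate(female):.0%}")      # 35%
print(f"adolescent:        {harm_rate(adolescent):.0%}")  # 35%

# Intersectional audit: the compounding only shows up here.
both = [r for r in records if r[0] == "female" and r[1] == "adolescent"]
print(f"adolescent female: {harm_rate(both):.0%}")        # 60%, ~3x overall
```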
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Thursday, April 30 — Senate Judiciary business meeting includes consideration of S.3062, which would require AI chatbots to implement age verification measures and make certain disclosures. Senate Judiciary, 216 Hart Senate Office Building.
What's On The Pod
Some new podcast episodes
AI in Business — How Digital Transformation Shortens the Path to Clinical Trials - with Dr. Gopalendu Pal of Target
How I AI — From a $6.90 newsletter to $3M API: How a non-coder built Memelord | Jason Levin
The Cognitive Revolution — AI in the AM: 99% off search, GPT-5.5 is "clean", model welfare analysis, & efficient analog compute