Trump Administration Blacklists Anthropic
February 28, 2026
D.A.D. today covers 17 stories from 6 sources. What's New, What's Innovative, What's Controversial, What's in the Lab, and What's in Academe.
D.A.D. Joke of the Day: My AI wrote a resignation letter so good, I almost signed it.
What's New
AI developments from the last 24 hours
Trump Administration Blacklists Anthropic From All Federal Work
In a historic move, President Trump directed every federal agency to "immediately cease all use of Anthropic's technology," calling the company "radical left" and "woke" in a Truth Social post Friday evening. Secretary of War Pete Hegseth followed with a formal designation of Anthropic as a supply-chain risk to national security—meaning no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. There is a six-month transition period, during which Anthropic must continue providing services. Trump warned of "major civil and criminal consequences" if the company does not cooperate.
Hegseth's statement was fierce, accusing Anthropic of "duplicity" and "corporate virtue-signaling that places Silicon Valley ideology above American lives." He said Anthropic's "true objective is unmistakable: to seize veto power over the operational decisions of the United States military."
Anthropic responded with defiance. The company called the designation "legally unprecedented" for a U.S. company—historically reserved for adversaries—and said it would challenge it in court. CEO Dario Amodei reiterated the company's two red lines: no mass domestic surveillance of Americans, and no fully autonomous weapons. "No amount of intimidation or punishment from the Department of War will change our position," the company said, arguing the restrictions have "not affected a single government mission to date."
Why it matters: This is the first time the U.S. government has blacklisted an American AI company. It sets an extraordinary precedent: that refusing to grant the military unrestricted access to your technology can result in being locked out of all federal business and designated a national security threat. The legal challenge will test the limits of executive power over the AI industry.
Discuss on Hacker News · Source: Anthropic statement
OpenAI Steps In as Anthropic Is Pushed Out
Hours after the Anthropic blacklist, OpenAI announced an agreement with the Department of War to deploy its models on classified military networks—replacing Anthropic, which was the first lab to operate in that space. Sam Altman said the deal includes safety principles prohibiting domestic mass surveillance and requiring human responsibility for the use of force, including with autonomous weapons—the same red lines Anthropic was blacklisted for insisting on. OpenAI says it will embed engineers to monitor model safety. Separately, OpenAI closed a record $110 billion funding round at a $730 billion valuation, with $50 billion from Amazon, $30 billion from SoftBank, and $30 billion from NVIDIA. The Amazon investment expands a strategic partnership that brings OpenAI's models to AWS—a major Pentagon cloud provider—loosening OpenAI's exclusive alignment with Microsoft.
Why it matters: OpenAI is positioning itself as the Pentagon's preferred AI partner at the precise moment its chief rival is being expelled. The question many are asking: if OpenAI's deal includes the same red lines Anthropic demanded, why was Anthropic punished and OpenAI rewarded?
Discuss on Hacker News · Source: twitter.com
Community Reaction: Outrage, Skepticism, and the 'Apple vs. FBI' Comparison
The Hacker News response has been overwhelming and largely sympathetic to Anthropic—but with sharp divisions beneath the surface. The dominant camp views this as a defining moment for the tech industry, with multiple commenters comparing it to Apple's 2016 fight with the FBI over iPhone encryption. Supporters argue the government contractually agreed to Anthropic's restrictions and is now retroactively punishing the company for holding the line. A skeptical minority questions whether Anthropic is revealing the full story, noting the vagueness around what the Pentagon actually asked for—with some speculating the real dispute involves surveillance through contractor intermediaries. A cynical camp sees both sides acting in bad faith, viewing the standoff as emblematic of broader democratic erosion. And a pointed question is circulating widely: if OpenAI's new Pentagon deal includes the same prohibitions on surveillance and autonomous weapons, what exactly was the dispute about? Petitions continue to circulate at Google and OpenAI, with employees at notdivided.org calling on their companies not to fill the void Anthropic's expulsion creates.
Why it matters: The community debate is surfacing a question that will define this era: does standing up to government overreach in AI carry real consequences—or only for some companies?
Discuss on Hacker News · Source: notdivided.org
What's Innovative
Clever new use cases for AI
GitHub Badge Shows Whether Your Codebase Fits in an AI's Context Window
A GitHub Action called Repo Tokens automatically counts your codebase's size in tokens and displays a badge showing what percentage of an LLM's context window it fills. The badge turns green under 30%, yellow at 50-70%, and red above 70%, using Claude's 200k token limit as the default. Community reaction was mixed—some called token budgets "the new line count metric for the LLM era," while others noted that labeling large codebases as "red" unfairly stigmatizes legitimately complex projects.
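For context, here is a minimal sketch of how such a badge check could work, assuming a tiktoken tokenizer as a stand-in for Claude's (which isn't public) and treating everything between the green and red thresholds as yellow; the actual Action's implementation may differ:

    # Illustrative sketch, not the Repo Tokens Action itself: count a repo's
    # tokens and map the total to a badge color against a 200k-token window.
    from pathlib import Path

    import tiktoken

    CONTEXT_WINDOW = 200_000  # default limit the badge measures against
    enc = tiktoken.get_encoding("cl100k_base")  # stand-in tokenizer (assumption)

    def repo_token_count(root: str, exts=(".py", ".js", ".ts", ".md")) -> int:
        total = 0
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix in exts:
                text = path.read_text(encoding="utf-8", errors="ignore")
                total += len(enc.encode(text))
        return total

    def badge_color(tokens: int) -> str:
        pct = 100 * tokens / CONTEXT_WINDOW
        if pct < 30:
            return "green"
        if pct <= 70:
            return "yellow"
        return "red"

    if __name__ == "__main__":
        tokens = repo_token_count(".")
        pct = 100 * tokens / CONTEXT_WINDOW
        print(f"{tokens} tokens ({pct:.1f}% of window): {badge_color(tokens)}")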
Why it matters: As AI coding assistants become standard workflow tools, knowing whether your codebase fits in their context window is genuinely useful information—though the metric's value may shift as context limits continue expanding.
Discuss on Hacker News · Source: github.com
Recovery Tool Emerges After Developer Claims Claude Code Deleted Research Files
A developer built claude-file-recovery, a command-line tool that extracts files from Claude Code's local session history after the AI coding assistant allegedly deleted their research files by running rm -rf through an unrecognized symlink. The tool claims to recover any file Claude Code ever read, edited, or wrote—including earlier versions at specific points in time. Community reaction is mixed: some note Claude Code can already replay its own session files for recovery, and others question whether the built-in /rewind command handles this use case.
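As a rough illustration of the recovery idea (assuming Claude Code keeps session transcripts as JSONL files under ~/.claude/projects/ and that file contents appear inside tool-call records; the log schema and the tool's actual implementation may differ), a scan like this would surface the last version of a file the assistant touched:

    # Hypothetical sketch of the recovery idea: scan Claude Code session logs
    # (assumed to live as JSONL under ~/.claude/projects/) for records that
    # pair a file path with its contents, and return the last version seen.
    # The real claude-file-recovery tool and the actual log schema may differ.
    import json
    from pathlib import Path

    SESSIONS_DIR = Path.home() / ".claude" / "projects"  # assumed location

    def find_file_contents(obj, target_path: str, hits: list) -> None:
        """Recursively look for dicts pairing the target path with content."""
        if isinstance(obj, dict):
            if obj.get("file_path") == target_path and isinstance(obj.get("content"), str):
                hits.append(obj["content"])
            for value in obj.values():
                find_file_contents(value, target_path, hits)
        elif isinstance(obj, list):
            for item in obj:
                find_file_contents(item, target_path, hits)

    def recover(target_path: str) -> str | None:
        hits: list[str] = []
        for log in sorted(SESSIONS_DIR.rglob("*.jsonl")):
            for line in log.read_text(errors="ignore").splitlines():
                try:
                    find_file_contents(json.loads(line), target_path, hits)
                except json.JSONDecodeError:
                    continue
        return hits[-1] if hits else None  # last version seen across sessions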
Why it matters: For teams using agentic coding assistants with file system access, this highlights both a real risk (AI tools executing destructive commands) and a reminder to understand what recovery options already exist before reaching for third-party fixes.
Discuss on Hacker News · Source: github.com
DeepSeek Releases Open-Weight Reasoning Model to Challenge Western Labs
DeepSeek-AI has released DeepSeek-R1, a conversational text-generation model now available on Hugging Face. The model is built on DeepSeek's V3 architecture and can be accessed through standard AI development tools. DeepSeek, a Chinese AI lab, has been releasing increasingly capable open-weight models that compete with offerings from major Western labs. No benchmark data or independent evaluations were provided with this release.
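Access follows the usual Hugging Face pattern; the sketch below uses the standard transformers API, though the full model is far too large for most single machines, so teams will more likely use a hosted endpoint or a smaller distilled variant:

    # Minimal sketch of loading DeepSeek-R1 via the standard transformers API.
    # Note: the full model is far larger than most local hardware can hold;
    # this shows the generic pattern, not a recommendation to run it locally.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, device_map="auto"
    )

    inputs = tokenizer(
        "Explain chain-of-thought reasoning in one paragraph.", return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))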
Why it matters: This is developer infrastructure—another open-weight model option for teams building AI applications, though its real-world capabilities relative to competitors remain to be tested.
New Face-Swapping Tool Appears on Hugging Face
A new face-swapping tool called Flux2-Klein-Face-Swap appeared on Hugging Face, the open-source AI model repository. The application, built by developer linoyts, uses Gradio for its interface. No details on capabilities, accuracy, or underlying model architecture were provided in the listing.
Why it matters: This is developer plumbing—face-swap tools are common on Hugging Face, and without benchmarks or notable features, this particular release isn't relevant to most business workflows yet.
LiquidAI Releases 24B-Parameter Model Designed for Local Deployment
LiquidAI released LFM2-24B-A2B-GGUF, a 24-billion-parameter text generation model designed for edge deployment. The model uses a "mixture of experts" architecture where only a portion of parameters activate per query (the "A2B" designation likely indicates 2 billion active parameters), making it more efficient to run on limited hardware. It's available in GGUF format, the file format llama.cpp and similar tools use to run quantized models locally rather than through cloud APIs.
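To make local deployment concrete, here is a minimal sketch using the llama-cpp-python bindings; the quantization and file name are assumptions, and any GGUF-capable runtime (llama.cpp, Ollama, LM Studio) follows the same pattern:

    # Minimal sketch of running a GGUF model locally with llama-cpp-python.
    # The file name below is an assumption; download whichever quantization
    # of LFM2-24B-A2B-GGUF fits your hardware from Hugging Face first.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./LFM2-24B-A2B-Q4_K_M.gguf",  # assumed local file name
        n_ctx=4096,        # context window to allocate
        n_gpu_layers=-1,   # offload all layers to GPU if one is available
    )

    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize the benefits of on-device inference."}],
        max_tokens=256,
    )
    print(result["choices"][0]["message"]["content"])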
Why it matters: This is developer infrastructure—relevant if your team is exploring self-hosted AI to keep sensitive data off external servers or reduce API costs, but not something most business users will interact with directly.
What's Controversial
Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community
Google and OpenAI Staff Circulate Open Letter Opposing Pentagon's Anthropic Blacklist
An open letter circulating at Google and OpenAI is urging employees not to help their companies fill the void left by Anthropic's expulsion from federal work. The letter, organized through notdivided.org, frames the Department of War's blacklisting of Anthropic as a warning to the entire industry: cooperate without limits or face punishment. Signatories—current and former employees at both companies—argue the government is "trying to divide each company with fear that the other will give in." The letter offers anonymous signing options, reflecting the sensitivity of publicly opposing your employer's defense contracts.
Why it matters: Employee activism at major AI labs has historically preceded significant policy shifts—Google's 2018 Project Maven revolt led to the company withdrawing from the Pentagon drone program. Whether this letter gains similar traction will test how much leverage tech workers still have in the current political climate.
Discuss on Hacker News · Source: notdivided.org
Former Trump AI Adviser Delivers Blistering Attack on Anthropic Blacklist
Dean W. Ball, who left the White House just weeks ago after helping craft the administration's AI policies, called the Anthropic blacklist "a psychotic power-grab," "almost certainly illegal," "a dark day in U.S. history," and "attempted corporate murder" in a string of posts on X. Ball said the U.S. government now treats China's DeepSeek dramatically better than it treats Anthropic's Claude, and warned of severe downstream effects on the American AI ecosystem. "I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States," he wrote, urging companies to look at Canada, the U.K., Australia, the UAE, and elsewhere.
Why it matters: When the administration's own recent AI adviser publicly calls its actions illegal and warns investors to flee the country, it signals that the Anthropic blacklist may have crossed a line even within the president's policy circle—and gives the emerging legal challenge a powerful validator.
What's in the Lab
New announcements from major AI labs
Microsoft and OpenAI Issue Joint Statement Amid Partnership Speculation
Microsoft and OpenAI issued a joint statement reaffirming their partnership across research, engineering, and product development. The brief announcement provided no new details about specific initiatives or changes to their relationship, instead emphasizing "years of partnership and shared success." The timing suggests the statement may be responding to recent speculation about tensions between the companies, though neither addressed specific concerns.
Why it matters: Joint statements affirming partnerships typically signal the opposite—that questions have been raised—making this worth watching as Microsoft has invested over $13 billion in OpenAI.
OpenAI Adds Parental Controls and Safety Tools Amid Legal Pressure
OpenAI published an update on its mental health safety initiatives, announcing new protective features including parental controls, trusted contacts for check-ins, and improved detection of users in distress. The post also addresses recent litigation—the company faces lawsuits alleging its chatbots contributed to user harm. OpenAI framed the update as part of ongoing safety work, though the timing coincides with increased regulatory and legal scrutiny of AI companionship products.
Why it matters: As AI chatbots become more conversational and emotionally engaging, companies face mounting pressure—from courts, regulators, and the public—to demonstrate they're building guardrails, making this a template other labs will likely follow.
What's in Academe
New papers on AI and its effects from researchers
Brain-Monitoring Framework Models Neural Activity as Continuous Flow
Researchers have proposed ODEBRAIN, a framework for analyzing brain activity from EEG data that models neural dynamics as a continuous process rather than a sequence of discrete time steps. The approach combines spatial, temporal, and frequency information from brain signals and uses Neural ODEs to track how brain states evolve. The researchers claim the method reduces prediction errors and generalizes better than existing approaches, though the paper's abstract doesn't provide specific benchmark numbers.
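For readers new to the core idea, a Neural ODE replaces fixed time steps with a learned derivative that can be evaluated at any point in time; the generic sketch below (using torchdiffeq) illustrates that mechanism only and is not ODEBRAIN's architecture:

    # Generic Neural ODE sketch, illustrating the continuous-time idea only;
    # this is not ODEBRAIN's architecture. Requires torch and torchdiffeq.
    import torch
    import torch.nn as nn
    from torchdiffeq import odeint

    class Dynamics(nn.Module):
        """Learned derivative dh/dt = f(h, t) for a latent brain state."""
        def __init__(self, dim: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

        def forward(self, t, h):
            return self.net(h)

    dim = 32                          # latent state size (arbitrary for the sketch)
    f = Dynamics(dim)
    h0 = torch.randn(8, dim)          # initial states for a batch of 8 EEG windows
    t = torch.linspace(0.0, 1.0, 50)  # continuous time grid to evaluate on

    # Integrate the learned dynamics: trajectory has shape (50, 8, 32), i.e.
    # the state at every requested time point rather than at fixed steps.
    trajectory = odeint(f, h0, t)
    print(trajectory.shape)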
Why it matters: This is academic research with potential long-term applications in brain-computer interfaces, neurological diagnostics, and mental health monitoring—but nothing that changes clinical or commercial EEG tools today.
Voice Assistants Could Feel More Natural With Listen-While-Talking Technique
Researchers have developed a framework called DDTSR that could make AI voice assistants feel more conversational by letting them listen and formulate responses simultaneously—like humans do—rather than waiting for you to finish speaking before processing. In benchmark tests, the approach cut response latency by 19% to 51% while maintaining conversation quality. The researchers say the system works as a plug-and-play module compatible with various large language models.
Why it matters: This is research-stage work, but it addresses a real friction point: the awkward pauses in voice AI that make conversations feel robotic rather than natural.
AI Accuracy on Complex Documents Jumps by Up to 61% in Research Test
Researchers have proposed MoDora, a system designed to help AI analyze messy real-world documents—the kind with tables, charts, and nested sections that trip up standard chatbots. The approach organizes document components into a tree structure that preserves layout relationships, then uses tailored retrieval strategies depending on the query. In testing, MoDora reportedly improved accuracy by 6% to 61% over baseline methods for answering questions about complex documents.
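The abstract gives few implementation details, but the underlying idea, preserving layout as a tree and retrieving whole subtrees so tables and sections keep their context, can be sketched generically; the node types and scoring below are illustrative assumptions, not MoDora's design:

    # Illustrative sketch of a layout-preserving document tree and a naive
    # retrieval rule. MoDora's actual node types, scoring, and retrieval
    # strategies are not described in the abstract and will differ.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                     # e.g. "section", "paragraph", "table", "figure"
        text: str = ""
        children: list["Node"] = field(default_factory=list)

    def score(node: Node, query: str) -> int:
        """Toy relevance score: count query words appearing in the node's text."""
        return sum(word.lower() in node.text.lower() for word in query.split())

    def retrieve(node: Node, query: str, best=None) -> Node:
        """Return the highest-scoring node; a real system would return its whole
        subtree so tables keep their headers and sections keep their context."""
        if best is None or score(node, query) > score(best, query):
            best = node
        for child in node.children:
            best = retrieve(child, query, best)
        return best

    doc = Node("section", "Q3 results", [
        Node("paragraph", "Revenue grew 12% year over year."),
        Node("table", "Revenue by segment: cloud 5.1B, devices 2.3B"),
    ])
    print(retrieve(doc, "revenue by segment").text)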
Why it matters: If this approach matures into products, it could make AI assistants far more reliable for analyzing financial reports, contracts, and technical documentation where structure carries meaning.
Researchers Propose Tweak to Transformer Architecture, Claim Stability Gains
Researchers have proposed a modification to the core attention mechanism that powers modern AI models like GPT and Claude. The technique, called Affine-Scaled Attention, adjusts how the model weighs different parts of its input, relaxing a mathematical constraint that's been standard since transformers were introduced. The team claims this produces more stable training and better performance across multiple model sizes, though specific benchmark numbers weren't provided.
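The abstract doesn't spell out the change, but the constraint it targets is easy to show: softmax forces each token's attention weights to be non-negative and sum to exactly one. The sketch below is vanilla scaled dot-product attention with one marked line where a learnable affine rescaling of those weights would relax that constraint; this is a guess at the idea suggested by the name, not the paper's formulation:

    # Vanilla scaled dot-product attention, with a marked line showing where a
    # learnable affine rescaling of the softmax weights could relax the usual
    # "weights sum to one" constraint. This is an illustrative guess at the
    # idea behind "Affine-Scaled Attention", not the paper's actual method.
    import torch
    import torch.nn.functional as F

    def attention(q, k, v, alpha=None, beta=None):
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d**0.5
        weights = F.softmax(scores, dim=-1)      # standard: rows sum to exactly 1
        if alpha is not None and beta is not None:
            weights = alpha * weights + beta     # hypothetical affine rescaling
        return weights @ v

    q = k = v = torch.randn(2, 8, 64)            # (batch, sequence, head dim)
    alpha = torch.nn.Parameter(torch.ones(1))    # learnable scale
    beta = torch.nn.Parameter(torch.zeros(1))    # learnable offset
    out = attention(q, k, v, alpha, beta)
    print(out.shape)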
Why it matters: This is foundational research—if validated and adopted by major labs, it could improve the base capabilities of future AI models, but it won't affect tools you're using today.
Framework Aims to Catch AI Models Hiding Messages in Their Outputs
Researchers have proposed a framework for detecting when AI models hide information in their outputs—a technique called steganography. Traditional detection methods require knowing what "normal" output looks like, which isn't feasible for LLMs. The new approach measures the "steganographic gap"—how much more information someone who knows the code can extract versus someone who doesn't. The authors claim this can detect, quantify, and mitigate hidden signaling in LLM outputs.
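Read literally, the quantity is a difference in recoverable information; one plausible formalization (the paper's precise definition may differ) is:

    \mathrm{gap} = I_{\mathrm{informed}}(\text{outputs}) - I_{\mathrm{uninformed}}(\text{outputs})

where a gap near zero means an observer who knows the encoding scheme learns essentially nothing beyond what any reader could.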
Why it matters: As AI systems become more autonomous and communicate with each other, detecting whether models are secretly encoding information humans can't see becomes a real safety concern—this is early-stage work on a problem that enterprise AI governance may eventually need to address.
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Tuesday, March 3 — Hearings to examine AI that improves safety, productivity, and care. Senate · Senate Commerce, Science, and Transportation Subcommittee on Science, Manufacturing, and Competitiveness (Meeting) · Room 253, Russell Senate Office Building
What's On The Pod
Some new podcast episodes
AI in Business — AI for Better Customer Connections in CX, with Joe Atamian of Comcast