March 1, 2026

D.A.D. today covers 13 stories from 5 sources: What's New, What's Innovative, What's in Academe, What's Happening on Capitol Hill, and What's On The Pod.

D.A.D. Joke of the Day: My AI passed the bar exam, medical boards, and CPA test. I asked it to summarize a one-page email and it said "I'll do my best."

What's New

AI developments from the last 24 hours

Claude Surges to No. 1 on Apple App Store Amid Pentagon Fight

Claude surged to the No. 1 spot on the Apple App Store, riding a wave of user support after the Trump administration blacklisted Anthropic from all federal work. Responding to the demand, Anthropic released copy-paste instructions to help people migrate their user preferences from ChatGPT. OpenAI's account deletion page, which notes a 30-day window before permanent deletion, also circulated on Hacker News; the window effectively allows users to return with a fresh account if they change their minds.

Why it matters: Consumer backlash is turning the Pentagon's punishment of Anthropic into a marketing event—the company is gaining users precisely because it was blacklisted for refusing to drop its safety red lines.


OpenAI Publishes Pentagon Contract In Effort To Quell Furor

OpenAI published its agreement with the Department of War, which permits military use of its AI systems for "all lawful purposes" under existing surveillance and weapons laws. The contract references compliance with the Fourth Amendment, FISA, and DoD directives on autonomous weapons, and OpenAI says it could terminate the agreement if the government violates terms.

The community reaction was largely critical: many argued the contract contained ample wiggle room for the Pentagon to cross OpenAI's and Anthropic's stated red lines. The contract only prohibits surveillance "to the extent that that surveillance is already prohibited by law"—meaning the DoD could legally procure bulk citizen data from private companies and apply OpenAI's tools to it at scale, something Anthropic explicitly refused. One poster on Hacker News said: "This seems like the weasel language Dario was talking about." The phrase allowing use for "all lawful purposes, consistent with applicable law, operational requirements, and well-established safety protocols" also drew fire—particularly "operational requirements," which critics said could provide broad discretionary authority.

Why it matters: OpenAI published the contract to show its guardrails are real, but the reaction suggests it may have achieved the opposite—drawing attention to how much the protections depend on existing law and Pentagon self-restraint rather than explicit prohibitions.


Former Tesla AI Chief Demonstrates GPT Architecture in 200 Lines of Code

Andrej Karpathy, former Tesla AI director and OpenAI researcher, released "microgpt"—a 200-line Python file that implements a complete GPT system from scratch with no external dependencies. The educational project includes a tokenizer, neural network architecture, optimizer, and training loop. Trained on 32,000 names, it successfully generates plausible new ones. Karpathy's point: everything beyond these 200 lines in production AI systems is optimization for speed and scale, not fundamental capability.
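For readers curious what "tokenizer, model, training loop, sampler" looks like at its absolute smallest, here is a toy sketch of the same name-generation pipeline. This is not Karpathy's microgpt: it swaps the transformer and optimizer for a crude bigram count table, but the data flow (names in, plausible new names out) is the same idea in miniature.

```python
import random

# Toy name generator: tokenize -> model -> sample.
# NOT microgpt; a bigram count table stands in for the neural network.

names = ["emma", "olivia", "ava", "isabella", "sophia", "mia", "amelia"]

# "Tokenizer": map each character to an integer id; "." marks start/end.
chars = sorted(set("".join(names)) | {"."})
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

# "Model": bigram transition counts with +1 smoothing.
counts = [[1] * len(chars) for _ in range(len(chars))]
for name in names:
    tokens = [stoi[c] for c in "." + name + "."]
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1

def sample(rng):
    """Walk the bigram table from the start token until "." recurs."""
    out, ix = [], stoi["."]
    while True:
        ix = rng.choices(range(len(chars)), weights=counts[ix])[0]
        if ix == stoi["."]:
            return "".join(out)
        out.append(itos[ix])

rng = random.Random(42)
print([sample(rng) for _ in range(5)])
```

Everything a production LLM adds on top of this shape — attention, gradients, GPUs — is, in Karpathy's framing, optimization rather than a different kind of machine.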

Why it matters: This is developer education, not a product—but it's a striking illustration of how conceptually simple the core LLM architecture actually is, which may inform how executives think about the technology's commoditization.


What's Innovative

Clever new use cases for AI

Perplexity Releases Its First Text Embedding Model

Perplexity AI released pplx-embed-v1-0.6b, a 600-million-parameter text embedding model now available on Hugging Face. The model, built on Qwen3 architecture, is designed for feature extraction—converting text into numerical representations that enable semantic search, document clustering, and retrieval systems. This is Perplexity's first public embedding model, signaling the search-focused company is building out its own AI infrastructure rather than relying solely on third-party models.
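The downstream use of any embedding model is the same: compare vectors. A minimal sketch of semantic search with cosine similarity, using toy stand-in vectors — in practice each vector would come from the model itself via a feature-extraction pipeline:

```python
import numpy as np

# Semantic search sketch. The 3-dimensional vectors below are toy
# stand-ins; a real embedding model produces vectors with hundreds
# of dimensions, but the ranking logic is identical.

docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.8, 0.2]),
    "return an item": np.array([0.8, 0.2, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: angle between vectors, ignoring magnitude."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, corpus):
    """Rank documents by similarity to the query embedding."""
    return sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]), reverse=True)

# Toy embedding for a query like "how do I get my money back".
query = np.array([0.85, 0.15, 0.05])
print(search(query, docs))  # refund-related docs rank above shipping
```

Because the comparison is just linear algebra, the model only runs once per document at index time; queries are cheap.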

Why it matters: This is developer infrastructure, not an end-user product—but it suggests Perplexity is positioning to compete more directly with enterprise search and retrieval vendors, not just as a consumer search engine.


Research Checkpoint for Video Reasoning Appears on Hugging Face

A new model called VBVR-Wan2.2 appeared on Hugging Face, built on the Wan2.2 image-to-video architecture. The model appears focused on video reasoning or generation, though no documentation, benchmarks, or usage examples were provided at publication. This is developer-level infrastructure—a research checkpoint rather than a usable product.

Why it matters: Unless you're building custom video AI pipelines, this isn't actionable yet; it signals continued activity in open-source video generation but offers nothing ready for business use.


Alibaba Releases Vision-Language Model Optimized for Standard Hardware

Qwen released Qwen3.5-27B-FP8, a 27-billion-parameter multimodal model that processes both images and text in conversation. The model uses FP8 format—a compression technique that reduces memory requirements while preserving most capability—making it more practical to run on standard hardware. Available now through Hugging Face's transformers library, it's aimed at developers building AI applications that need to understand visual content alongside text.
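The memory arithmetic behind 8-bit formats is easy to see directly. The sketch below simulates 8-bit storage with simple linear (int8-style) quantization — real FP8 uses a floating-point bit layout, so this is an illustration of the size-versus-error trade, not the actual format:

```python
import numpy as np

# Simulate storing 32-bit weights in 8 bits: 1 byte per weight vs 4.

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=100_000).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # map range onto signed 8-bit
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(f"fp32 size:  {weights.nbytes / 1e3:.0f} KB")    # 400 KB
print(f"8-bit size: {quantized.nbytes / 1e3:.0f} KB")  # 100 KB
print(f"max abs error: {np.abs(weights - restored).max():.6f}")
```

At 27 billion parameters, that same 4:1 ratio against 32-bit storage is the difference between roughly 108 GB and 27 GB of weights — the gap between "needs a server rack" and "fits on a workstation GPU."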

Why it matters: This is developer infrastructure—another capable open-weights multimodal model joining the options for teams building custom AI applications, though it won't change how most professionals use existing tools today.


Fal Releases Lightweight Model for Multi-Angle Image Editing

Fal released a new image-editing model called Qwen-Image-Edit-2511-Multiple-Angles-LoRA, built on Alibaba's Qwen architecture. The model is a LoRA—a lightweight fine-tune that modifies an existing model for specific tasks—designed for editing images across multiple angles. No performance benchmarks or demonstrations were provided with the release.
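The LoRA mechanism itself is small enough to sketch. Instead of fine-tuning a full weight matrix W, a LoRA trains a low-rank update B @ A and adds it in; the shapes and alpha/r scaling below follow the common convention, though where exactly the Fal model applies its updates is not documented in the release:

```python
import numpy as np

# Core LoRA idea: a rank-r update to a frozen d x d weight matrix.

d, r, alpha = 512, 8, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, init to zero
                                     # so training starts from the base model

W_eff = W + (alpha / r) * (B @ A)    # effective weight after merging

# The LoRA trains 2*d*r values instead of d*d:
print(d * d, 2 * d * r)  # 262144 vs 8192
```

That roughly 30x reduction in trainable parameters is why LoRAs can be shared as small files and stacked on top of a base model the user already has.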

Why it matters: This is developer infrastructure—a specialized tool for those building image-editing applications, not something most professionals would use directly yet.


Explicit Face-Swap Tool Surfaces on Hugging Face, Raising Moderation Questions

A Hugging Face Space labeled "nsfw-face-swap" appeared on the platform, suggesting a face-swapping tool potentially designed for explicit content. The listing provides minimal details—just that it uses Gradio and is US-hosted. No information about actual capabilities, safeguards, or usage is available. This is one of many such tools that surface on open AI platforms, raising ongoing questions about content moderation and deepfake misuse.

Why it matters: The existence of explicitly-named NSFW deepfake tools on major AI platforms highlights the persistent tension between open-source accessibility and preventing harm—a policy debate that could affect how all AI tools are regulated and distributed.


What's in Academe

New papers on AI and its effects from researchers

10 Open-Weight AI Models Now Available for Local Deployment

A roundup article compares 10 open-weight large language models released between January and February 2026, surveying the latest architectures available outside the major closed providers. Open-weight models—where the model weights are publicly available for download and local use—give organizations more control over deployment, data privacy, and customization than API-only services. The article appears to be a landscape overview rather than original research, with no benchmark data or specific performance claims provided in the source material.

Why it matters: For teams evaluating self-hosted AI options, landscape comparisons help track which open models are worth testing—though this particular roundup lacks the performance data needed to make procurement decisions.


Breakthrough Method Reconstructs 3D Scenes From 1,000 Images in Under a Minute

Researchers have developed VGG-T³, a 3D reconstruction model that scales linearly with input images rather than quadratically—a computational breakthrough for processing large image sets. The method reportedly reconstructs a scene from 1,000 images in 54 seconds, an 11.6x speedup over standard approaches, while maintaining competitive accuracy. The technique distills visual data into compact neural network representations during processing, avoiding the memory bottlenecks that have limited existing systems.
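The linear-versus-quadratic distinction is the whole story here, and a few lines of arithmetic show why. Classic reconstruction pipelines compare image pairs (quadratic in the number of images), while a linear method touches each image a bounded number of times — toy operation counts only; the 11.6x figure comes from the paper itself:

```python
# Pairwise matching grows with n*(n-1)/2; a linear pass grows with n.

for n in (100, 1_000, 10_000):
    pairwise = n * (n - 1) // 2   # every image compared with every other
    linear = n                    # one bounded-cost pass per image
    print(f"{n:>6} images: {pairwise:>12,} pairwise ops vs {linear:>6,} linear ops")
```

At 100 images the gap is tolerable; at 10,000 it is the difference between an overnight job and a coffee break, which is why large photo collections have been the bottleneck.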

Why it matters: This is research infrastructure—relevant if your organization works with 3D scanning, architecture visualization, or spatial computing, where processing large photo collections has been a significant time and cost constraint.


Training Method Targets AI's Overconfident Wrong Answers

Researchers have identified a flaw in how AI models learn from feedback: when models give confident but wrong answers, standard training methods don't penalize those errors strongly enough. The fix, called Asymmetric Confidence-aware Error Penalty (ACE), hits overconfident mistakes harder while preserving the model's willingness to explore different solutions. Testing across three model families on math reasoning benchmarks showed consistent improvements.
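The shape of such a penalty can be sketched in a few lines. This is an illustration in the spirit of ACE, not the paper's actual loss function: standard cross-entropy charges a wrong answer by how little probability went to the correct one, and the asymmetric variant adds a term that grows with the confidence placed on the wrong answer:

```python
import math

def standard_loss(p_correct):
    """Plain cross-entropy: only cares about the correct answer's probability."""
    return -math.log(p_correct)

def ace_style_loss(p_correct, p_wrong, beta=2.0):
    """Illustrative asymmetric penalty (NOT the paper's exact formula):
    the extra term explodes as confidence in the wrong answer nears 1."""
    return -math.log(p_correct) + beta * (-math.log(1.0 - p_wrong))

# A timid mistake vs a confident one, same credit for the right answer:
print(ace_style_loss(p_correct=0.1, p_wrong=0.2))  # mild extra penalty
print(ace_style_loss(p_correct=0.1, p_wrong=0.9))  # much larger penalty
```

The asymmetry is the point: hesitant errors are cheap to make and easy to explore past, while confidently wrong answers get pushed down hard.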

Why it matters: Better error correction during training could mean AI assistants that are less prone to confident-sounding hallucinations—a persistent problem in enterprise deployments where users may not catch plausible-sounding errors.


Medical Scan AI Claims Better Results With Less Training Data

Researchers developed MedCLIPSeg, a framework that adapts OpenAI's CLIP model for medical image segmentation—the task of identifying and outlining specific structures in medical scans. The approach claims to outperform prior methods across 16 datasets covering five imaging types (X-rays, CT, MRI, etc.) and six organs. Key advantages: it requires less training data than conventional methods and generates uncertainty maps showing where the AI is less confident in its predictions—potentially useful for flagging cases that need human review.
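One common way a segmentation model exposes per-pixel uncertainty is the entropy of its per-pixel class probabilities: high entropy means the model is torn between classes, marking a candidate region for human review. The sketch below illustrates that general mechanism only — MedCLIPSeg's actual uncertainty estimation may differ:

```python
import numpy as np

# Per-pixel uncertainty from softmax entropy on a toy 4x4 "scan".

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))            # 4x4 pixels, 3 tissue classes
probs = np.exp(logits)
probs /= probs.sum(axis=-1, keepdims=True)     # per-pixel softmax

# Entropy ranges from 0 (certain) to log(3) (uniform over 3 classes).
entropy = -(probs * np.log(probs)).sum(axis=-1)
flagged = entropy > 0.9 * np.log(3)            # near-uniform = flag for review

print(f"{flagged.sum()} of {flagged.size} pixels flagged for review")
```

An uncertainty map like this is what turns a black-box segmentation into something a radiologist can triage.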

Why it matters: For healthcare organizations exploring AI-assisted diagnostics, data efficiency and built-in uncertainty quantification could reduce both implementation costs and liability concerns around automated image analysis.


First Open Benchmark for General-Purpose AI Agents Released

Researchers have released what they call the first Open General Agent Leaderboard, benchmarking five AI agent implementations across six different environments. The accompanying paper proposes evaluation standards for "general agents"—AI systems designed to handle diverse tasks without environment-specific tuning. The team claims these generalist agents can match the performance of specialized agents built for particular domains. The evaluation framework is being made public.

Why it matters: As enterprises explore AI agents for automation, standardized evaluation methods could help buyers compare products—though this early-stage research needs industry adoption before it becomes a practical purchasing tool.


What's Happening on Capitol Hill

Upcoming AI-related committee hearings

Tuesday, March 03: Hearings to examine AI that improves safety, productivity, and care. Senate Commerce, Science, and Transportation Subcommittee on Science, Manufacturing, and Competitiveness (Meeting), Room 253, Russell Senate Office Building


What's On The Pod

Some new podcast episodes

AI in Business: AI for Better Customer Connections in CX, with Joe Atamian of Comcast