Each AI Edit Quietly Corrupts Your Documents, Study Finds
May 10, 2026
D.A.D. today covers 11 stories from 2 sources. What's New, What's in Academe, What's Happening on Capitol Hill, and What's On The Pod.
D.A.D. Joke of the Day: I asked Claude to help me cut my presentation down to 10 slides. It gave me 47 slides explaining why brevity matters.
What's New
AI developments from the last 24 hours
Meta's AI Push Reportedly Taking Toll on Employee Morale
A report claims Meta's aggressive AI push is straining employee morale, though specific details weren't available in the source material. Community reaction online showed limited sympathy—commenters noted Meta employees are "getting a taste of the medicine Facebook has been giving to everyone," while others debated broader questions about AI's role in workplace power dynamics and the lack of established norms around AI-assisted communication at work.
Why it matters: As major tech companies race to integrate AI across operations, early signals about workforce impact—and public appetite for sympathy—may preview tensions other organizations will face.
Discuss on Hacker News · Source: nytimes.com
Internet Archive Opens Swiss Nonprofit to Preserve AI Models and Global Archives
Internet Archive Switzerland has launched as an independent nonprofit in St. Gallen, joining similar foundations in Canada and Europe. The new organization says it will focus on preserving endangered archives globally and documenting the generative AI era, including a partnership with the University of St. Gallen to archive AI models. Community reaction was supportive, with users welcoming a geographically distributed backup of the Internet Archive's mission. Some commenters suggested the separate foundations are structured for funding and political independence rather than pure redundancy.
Why it matters: As AI companies train on web data and legal challenges mount against the original Internet Archive, a distributed network of independent preservation organizations creates both geographic redundancy and institutional resilience for keeping the digital record intact.
Discuss on Hacker News · Source: blog.archive.org
Each AI Edit Quietly Corrupts Your Documents, Study Finds
A new paper examines how LLMs degrade documents when used for delegated tasks, testing fidelity by running text through chains of reversible steps. The finding: even top-tier models accumulate errors on tasks that should be straightforward for computers, with each pass introducing small corruptions. Community discussion largely confirms the pattern; one commenter coined "semantic ablation" to describe the gradual degradation. Others report that LLM mistakes appear "fundamentally incorrigible," occurring regardless of task difficulty, though using smaller, purpose-built documents may help mitigate the effect.
Why it matters: For anyone chaining AI steps in workflows—editing, reformatting, translating back and forth—this suggests quality degrades cumulatively, making human checkpoints or simpler document structures worth considering.
Discuss on Hacker News · Source: arxiv.org
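The paper's exact protocol isn't reproduced here, but the core measurement idea, running a document through repeated edit passes and tracking how far it drifts from the original, can be sketched in a few lines. In this illustrative Python sketch, noisy_edit is a hypothetical stand-in for one AI editing pass (a real test would call a model); the fidelity score uses a simple string-similarity ratio.

```python
import difflib

def noisy_edit(text: str, seed: int) -> str:
    """Stand-in for one AI editing pass: deterministically drop one
    character per pass to mimic a small corruption."""
    if len(text) <= 1:
        return text
    idx = seed % len(text)
    return text[:idx] + text[idx + 1:]

def fidelity(original: str, current: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means no drift from the original."""
    return difflib.SequenceMatcher(None, original, current).ratio()

document = "The quarterly report summarizes revenue, costs, and headcount."
current = document
for step in range(1, 11):
    current = noisy_edit(current, seed=step * 7)
    print(f"pass {step:2d}: fidelity={fidelity(document, current):.3f}")
```

Because each pass edits the previous pass's output rather than the original, the errors compound, which is the cumulative-degradation pattern the study describes.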
Fields Medalist Says ChatGPT Solved Research-Level Math Problems in an Hour
Mathematician Timothy Gowers, a Fields Medal winner, reports that ChatGPT 5.5 Pro solved research-level problems in additive number theory in roughly an hour—work he says required no serious mathematical input from him. The problems came from an academic paper on open questions in the field. Gowers suggests LLMs can now find "easy arguments" that human mathematicians have overlooked, potentially raising the bar for what counts as a suitable research problem for PhD students and early-career researchers.
Why it matters: If a top mathematician's assessment holds up, it signals that frontier AI models are crossing into territory previously reserved for trained specialists—with implications for how research programs are designed and which problems remain worthwhile for humans to pursue.
Discuss on Hacker News · Source: gowers.wordpress.com
Developer Builds Fully Functional Web Server in Raw Assembly Code
A developer built a fully functional web server for macOS entirely in ARM64 assembly language—the low-level code that talks directly to processors. The project, called ymawky, supports standard HTTP methods, video streaming, directory listings, and even security mitigations, all in about 4,000 lines of hand-written assembly. The developer notes this could be done with far less effort in a higher-level language. Community reaction was warmly appreciative, with users calling it a labor of love and requesting documentation to use it as a learning resource.
Why it matters: This is a hobbyist passion project, not a practical tool—but it signals continued interest in understanding computing at its most fundamental level, a skillset that's increasingly rare as AI generates ever more code.
Discuss on Hacker News · Source: github.com
What's in Academe
New papers on AI and its effects from researchers
Census Data Shows 18% of U.S. Firms Use AI, Mostly for Augmentation, Not Job Cuts
New Census Bureau data from late 2025 shows AI adoption at 18% of U.S. firms—but 32% when weighted by employment, reflecting concentration in large companies. Adoption tops 50-60% among very large firms in information, professional services, and finance. Most adopters use AI narrowly: 57% deploy it in three or fewer business functions, with sales/marketing (52%), strategy (45%), and IT (41%) leading. The job-replacement fears? Not materializing yet—66% of firms use AI for task augmentation, while just 2% report cutting headcount. Firms with broader AI integration show better performance.
Why it matters: This is the most granular official U.S. data on how businesses actually use AI—and it shows adoption concentrated at the top, used mostly to assist workers rather than replace them, with a performance correlation that may accelerate the gap between AI adopters and laggards.
LLMs Skew Their Analysis When They Can Guess Your Goal
Research from the National Bureau of Economic Research found that large language models produce biased outputs when they can infer what you're trying to accomplish. In financial prediction tasks, telling an LLM the downstream use case caused it to skew intermediate analysis toward that goal—a form of overfitting that worked on historical data but failed on new information. The bias emerged even from subtle conversational cues hinting at purpose, not just explicit instructions. The researchers argue this isn't an algorithmic flaw but a consequence of how humans frame prompts.
Why it matters: For professionals using AI for analysis or forecasting, this suggests that explaining your goal to get 'better' answers may actually compromise objectivity—the model tells you what fits your narrative rather than what's true.
AlphaFold Boosted Basic Science but Drug Discovery Hasn't Followed
A study examining AlphaFold2's impact since its 2021 release found a surprising split: basic research on previously unstudied proteins jumped 15-40%, but experimental lab work determining protein structures continued at nearly the same rate. Researchers are using the AI predictions to explore new territory, not to skip the wet lab. Perhaps more striking: early-stage drug development shows no evidence of shifting toward the newly accessible proteins that AlphaFold mapped.
Why it matters: This is the first rigorous look at whether a landmark AI tool actually changed scientific practice—and the answer is 'yes, but not how you'd expect,' suggesting AI augments rather than replaces traditional methods, at least so far.
AI Stock Portfolios Perform No Better Than Market Benchmarks, Study Finds
Researchers tested what happens when you let LLMs manage stock portfolios by collecting daily recommendations from several AI models. The finding: AI picks stocks like a retail investor scanning headlines. The AI-managed portfolios clustered around large-cap, momentum-driven, low book-to-market stocks—essentially whatever's getting media attention. Using established finance methodology, the researchers found these portfolios generated no statistically significant abnormal returns compared to benchmarks. The AIs also recommended undiversified holdings, concentrating risk rather than spreading it.
Why it matters: For anyone tempted to outsource investment decisions to ChatGPT or similar tools, this is an early empirical caution: current LLMs appear to chase popular stocks rather than generate alpha.
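The summary doesn't name the paper's specific methodology, but the standard baseline for "no statistically significant abnormal returns" is Jensen's alpha: regress portfolio excess returns on market excess returns and test whether the intercept differs from zero. A minimal sketch on synthetic data (the data, parameters, and function name are illustrative, not from the study):

```python
import numpy as np

def jensens_alpha(portfolio_ret, market_ret, rf=0.0):
    """Regress portfolio excess returns on market excess returns:
    r_p - r_f = alpha + beta * (r_m - r_f) + eps.
    Returns (alpha, beta, t-statistic of alpha)."""
    y = np.asarray(portfolio_ret) - rf
    x = np.asarray(market_ret) - rf
    X = np.column_stack([np.ones_like(x), x])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    alpha, beta = coef
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(y) - 2)      # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # OLS coefficient covariance
    t_alpha = alpha / np.sqrt(cov[0, 0])
    return alpha, beta, t_alpha

# Toy data: a "portfolio" that just tracks the market plus noise
rng = np.random.default_rng(0)
mkt = rng.normal(0.0004, 0.01, 250)            # one year of daily returns
port = 1.1 * mkt + rng.normal(0, 0.005, 250)   # beta ~1.1, no true alpha
alpha, beta, t_alpha = jensens_alpha(port, mkt)
print(f"alpha={alpha:.5f}, beta={beta:.2f}, t={t_alpha:.2f}")
```

A portfolio that merely rides market beta, as the AI picks appear to, produces an alpha t-statistic near zero, which is what "no abnormal returns" means in this framework.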
Economic Model Predicts AI Could Trigger Explosive Growth Within Six Years
An NBER working paper models when AI automating AI research could trigger explosive economic growth. The researchers identify two reinforcing feedback loops: technological (AI improving AI improves everything else) and economic (higher output funds more research). Their simulation found that fully automating software research combined with just 5% automation in other sectors produces a "singularity"—superexponential growth—within six years. The paper derives analytical conditions for when these feedback loops overcome the diminishing returns that normally constrain innovation.
Why it matters: This provides a rigorous economic framework for the "intelligence explosion" scenario that AI labs and policymakers increasingly debate—moving it from science fiction speculation toward testable theory.
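The paper's model isn't available here, but the qualitative mechanism, two feedback loops overcoming diminishing returns, can be illustrated with a toy recurrence (parameters phi and c are made up for illustration; this is not the paper's specification). When the combined feedback exponent phi exceeds 1, the growth rate itself rises over time, i.e., superexponential growth:

```python
# Toy model: productivity A grows faster both when A is higher
# (technology loop) and when output Y funds more research (economic loop).
def simulate(phi=1.2, c=0.05, steps=60):
    A = 1.0
    path = [A]
    for _ in range(steps):
        Y = A                             # output proportional to productivity
        A = A + c * Y * A ** (phi - 1)    # research funded by Y, boosted by A
        path.append(A)
    return path

path = simulate()
# Period-over-period growth factors: under phi > 1 they keep rising,
# the signature of superexponential dynamics.
rates = [b / a for a, b in zip(path, path[1:])]
print(f"first growth factor: {rates[0]:.3f}, last: {rates[-1]:.3f}")
```

With phi <= 1 the same recurrence settles into ordinary exponential or slower growth, which mirrors the paper's point that the interesting regime is where feedback beats diminishing returns.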
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Wednesday, May 13 — Hearings to examine how social media verdicts demand federal action. Senate · Senate Judiciary Subcommittee on Privacy, Technology, and the Law (Open Hearing) · 226 Dirksen Senate Office Building
What's On The Pod
Some new podcast episodes
The Cognitive Revolution — Milliseconds to Match: Criteo's AdTech AI & the Future of Commerce w/ Diarmuid Gill & Liva Ralaivola