GPT-5.4 Predicts Your Preferences Better Than Other Humans Do, Study Finds
May 11, 2026
D.A.D. today covers 10 stories from 4 sources. What's New, What's Innovative, What's in the Lab, What's in Academe, What's Happening on Capitol Hill, and What's On The Pod.
D.A.D. Joke of the Day: My AI wrote a perfect cover letter for a job I didn't apply for. Now I'm not sure which one of us is looking for an exit strategy.
What's New
AI developments from the last 24 hours
The Case for On-Device AI: Privacy and Reliability vs. Raw Capability
A developer argues that software makers should default to on-device AI rather than cloud APIs from OpenAI or Anthropic, citing privacy, reliability, and reduced complexity. The piece demonstrates the approach using Apple's local model APIs for article summarization in an iOS app, showing roughly 10 lines of Swift code to implement the feature. The argument centers on avoiding network dependencies, vendor lock-in, and data retention concerns—though it doesn't address the significant capability gap between on-device models and frontier cloud services.
Why it matters: This reflects a growing tension in AI product design: cloud models offer superior capabilities, but local processing offers privacy and reliability guarantees that matter for certain use cases—a tradeoff more teams will need to navigate as Apple, Google, and others expand on-device AI options.
Discuss on Hacker News · Source: unix.foo
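The article's own example is roughly 10 lines of Swift against Apple's local model APIs; as a language-neutral illustration of the same "local first, cloud only as fallback" design it advocates, here is a minimal Python sketch. The `Summarizer` wrapper and both backend names are hypothetical stand-ins, not any vendor's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Summarizer:
    """Wraps a summarization backend; `run` returns None when the backend fails."""
    name: str
    run: Callable[[str], Optional[str]]

def summarize(article: str, local: Summarizer, cloud: Summarizer) -> str:
    """Prefer the on-device model; fall back to a cloud API only on failure.

    Trying the local path first means the article text never leaves the
    device in the common case -- the privacy and reliability argument the
    piece is making -- at the cost of the capability gap it acknowledges.
    """
    result = local.run(article)
    if result is not None:
        return result
    result = cloud.run(article)
    if result is None:
        raise RuntimeError("both summarization backends failed")
    return result
```

With stub backends, `summarize("...", local, cloud)` returns the local result whenever the local model responds, and only touches the network when it doesn't.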
Maryland Ratepayers Challenge $2B Grid Upgrade That Benefits Out-of-State Data Centers
Maryland's consumer advocate has filed a federal complaint challenging a $2 billion charge to state ratepayers for regional grid upgrades, arguing the costs primarily benefit data centers in Virginia, Ohio, and other states. The charge is part of a $22 billion infrastructure plan by PJM, the grid operator serving 13 states and 65 million people. If approved, Maryland residential customers would pay an estimated $345 extra over 10 years, with industrial users facing roughly $15,000 each. The complaint invokes the 'ratepayer protection pledge'—the principle that tech companies, not consumers, should fund grid expansions driven by AI demand. Meanwhile, 69 jurisdictions across the country have passed moratoriums on data center projects.
Why it matters: This is an early test case for who pays for AI's enormous energy appetite—a fight that will play out repeatedly as data center demand strains regional grids.
Discuss on Hacker News · Source: tomshardware.com
What's Innovative
Clever new use cases for AI
Best Local AI Model for M4 MacBooks Depends on Memory Trade-Offs
A developer testing local AI models on an M4 MacBook Pro with 24GB memory found Qwen 3.5-9B to be the sweet spot for usable performance. Running a compressed 4-bit version through LM Studio, it achieved roughly 40 tokens per second with a 128K context window—fast enough for practical work while leaving headroom for other apps. Larger models like GPT-OSS 20B and Devstral Small 24B proved unusable on this hardware. Google's Gemma 4B ran smoothly but failed at tool use, a dealbreaker for coding workflows.
Why it matters: For professionals curious about running AI locally—whether for privacy, offline access, or avoiding subscription costs—this provides a practical benchmark: a $1,600 laptop can now run capable models at usable speeds, though model choice matters enormously.
Discuss on Hacker News · Source: jola.dev
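LM Studio exposes an OpenAI-compatible server on localhost, so a throughput check like the one in the article can be scripted. A minimal sketch, assuming the default local endpoint and a hypothetical model identifier (`qwen3.5-9b-4bit`); the helper and function names are illustrative, not the author's benchmark code.

```python
import json
import time
import urllib.request

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput as reported in the benchmark: generated tokens / wall time."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

def benchmark_local_model(prompt: str,
                          url: str = "http://localhost:1234/v1/chat/completions",
                          model: str = "qwen3.5-9b-4bit") -> float:
    """Time one completion against LM Studio's OpenAI-compatible local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    elapsed = time.perf_counter() - start
    # OpenAI-style responses report generated-token counts in the usage field
    return tokens_per_second(data["usage"]["completion_tokens"], elapsed)
```

At the article's ~40 tokens per second, a 200-token answer takes about 5 seconds of wall time.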
What's in the Lab
New announcements from major AI labs
OpenAI Recruits Student Clubs for Campus Partnership Program
OpenAI launched a Campus Network initiative, inviting student clubs at universities worldwide to apply for partnerships. The program promises hands-on learning opportunities, workshops, research support, and early access to OpenAI tools and programs. The company says it aims to create 'AI-native campuses' through a global network of student leaders. No details yet on selection criteria, timeline, or what 'early access' specifically includes.
Why it matters: This is talent pipeline strategy—OpenAI is cultivating relationships with future AI practitioners and potential hires while building brand loyalty on campuses, a playbook tech giants have used for decades.
Enterprise AI Success Depends on Trust and Governance, Not Speed, Executives Say
A new executive guide compiles interviews with leaders at Philips, BBVA, Mirakl, Scout24, JetBrains, and Scania on scaling AI across large organizations. The through-line: successful enterprise AI adoption depends less on rapid technology rollout than on building trust, governance structures, clear ownership, and protecting human judgment in workflows. The guide promises detailed case studies and metrics in a downloadable version, though the published summary offers frameworks rather than hard numbers.
Why it matters: For executives navigating AI adoption, this is peer perspective from companies that have moved past pilots—useful as a sanity check on whether your organization is prioritizing the right problems.
What's in Academe
New papers on AI and its effects from researchers
GPT-5.4 Predicts Your Preferences Better Than Other Humans Do, Study Finds
A new working paper from economists at Harvard Business School, Dartmouth, and the University of Bonn finds that GPT-5.4 predicts individual human preferences better than other humans do. Amitabh Chandra and Joshua Schwartzstein (Harvard Business School), Erzo F.P. Luttmer (Dartmouth), Tomáš Jagelka (University of Bonn), and Omar Abdel Haq presented both the model and a representative sample of Americans with pairs of hypothetical life stories—varying income, longevity, and working conditions—and asked which life they would prefer. The surprising result: a person's choice aligned more closely with the LLM's prediction than with another human's choice on the same comparison. The researchers suggest LLMs could complement traditional surveys and experiments for studying what people actually value.
Why it matters: If validated, this could reshape how companies and policymakers research consumer preferences, product appeal, and life satisfaction—potentially faster and cheaper than conventional methods.
AI Agents Now Build Research Datasets That Once Required Human Assistants
Sebastian Galiani (University of Maryland), Ramiro H. Gálvez (Universidad Torcuato Di Tella), Santiago Afonso, and Raul A. Sosa have developed a method called Deep Research on a Loop (DRIL) that uses AI agents to build research datasets from public sources—work traditionally done by research assistants manually combing through government documents and reports. Testing it on a tax policy database for eight Latin American countries, the system produced 129 sources and 136 evidence records, covering most qualitative fields completely, at a cost the researchers say is comparable to a few hours of RA work using a standard LLM subscription.
Why it matters: If the approach scales, economics research teams could dramatically accelerate data collection—the tedious groundwork that often bottlenecks empirical studies—though documented gaps in quantitative estimates suggest human oversight remains essential.
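The shape of a loop like DRIL can be sketched in a few lines. This is an illustrative Python skeleton, not the authors' implementation: `research_step` stands in for one agent run over public documents, and the record fields, retry policy, and citation filter are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """One structured row extracted from a public document."""
    country: str
    field_name: str
    value: str
    source_url: str

def build_dataset(queries, research_step, max_rounds=3):
    """Loop a deep-research step over queries, accumulating cited records.

    `research_step(query)` returns extracted EvidenceRecord objects (a real
    run would invoke an LLM agent here). Records without a source citation
    are discarded, and fields that come back empty are re-queried in later
    rounds -- the "loop" that gives the method its name.
    """
    records, sources = [], set()
    pending = list(queries)
    for _ in range(max_rounds):
        if not pending:
            break
        next_round = []
        for query in pending:
            found = research_step(query)
            if not found:
                next_round.append(query)  # retry unanswered fields next round
                continue
            for rec in found:
                if rec.source_url:        # keep only records with a citation
                    records.append(rec)
                    sources.add(rec.source_url)
        pending = next_round
    return records, sources
```

The returned `sources` set maps onto the paper's source count (129) and `records` onto its evidence records (136); gaps that survive all rounds are exactly where the authors say human oversight is still needed.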
AI Hiring Tools That Screen Out Non-Degree Holders May Violate Civil Rights Law
Peter Q. Blair (Harvard Graduate School of Education) and co-author Rui Guo argue that AI hiring tools which automatically screen out candidates without bachelor's degrees may violate civil rights law. When these algorithmic credential screens produce disparate impact on protected groups, the authors contend, employers face existing legal obligations to prove the requirements are job-related—or adopt skills-based hiring as a less discriminatory alternative. The framework draws on the landmark Griggs v. Duke Power precedent that employers can't use neutral-seeming requirements that disproportionately exclude minorities without business necessity.
Why it matters: As companies increasingly automate resume screening, this legal theory could expose a compliance gap—degree requirements baked into AI tools might carry the same liability as the discriminatory practices courts struck down decades ago.
Data Centers Boost Local Jobs and Tax Revenue but Raise Electricity Prices
New research from Fernando E. Alvarez (University of Chicago), David Argente and Diana Van Patten (both Yale), and Joyce Chow quantifies the local footprint of data centers across U.S. counties. The study finds they boost employment, business formation, house prices, and tax revenue—but also raise local electricity prices. Using facility-level data and an econometric approach designed to isolate causal effects, the team found positive impacts on construction and data-processing jobs, local establishments, and adjusted gross income.
Why it matters: As AI drives explosive data center growth, this research gives executives and policymakers the first rigorous look at the local tradeoffs—useful context for site selection debates and community relations.
Researchers Propose Standard Framework for Comparing AI Tools in Business Settings
Researchers have proposed a standardized methodology for evaluating AI systems in real-world business contexts, arguing that current benchmarks often lack the transparency needed for meaningful comparisons. Working with financial services experts, they developed a structured process that transforms broad use cases—such as fraud detection or credit memo generation—into 107 specific test scenarios. The approach uses a three-stage pipeline combining AI-generated expansions with human review, plus a formal worksheet template for documenting evaluation criteria.
Why it matters: As enterprises evaluate competing AI tools for specific workflows, standardized evaluation methods could help procurement teams make more defensible vendor comparisons—moving beyond marketing claims to structured, repeatable testing.
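The fan-out from a broad use case to concrete worksheet rows is the mechanical core of such a framework. A minimal Python sketch under stated assumptions: the scenario fields and the `expand_use_case` helper are hypothetical illustrations of the idea, and in the researchers' pipeline an LLM proposes the expansions before humans review them.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TestScenario:
    """One worksheet row: what to test and how to judge the result."""
    use_case: str
    variant: str
    input_condition: str
    success_criterion: str

def expand_use_case(use_case, variants, conditions, criterion):
    """Fan a broad use case out into concrete, documented test scenarios.

    Crossing variants with input conditions makes the coverage explicit and
    repeatable -- the transparency the proposal argues benchmarks lack.
    """
    return [TestScenario(use_case, v, c, criterion)
            for v, c in product(variants, conditions)]
```

For example, expanding "fraud detection" across two transaction variants and three input conditions yields six distinct, auditable scenarios, each carrying its own success criterion.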
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Wednesday, May 13 — Hearings to examine how social media verdicts demand federal action. Senate Judiciary Subcommittee on Privacy, Technology, and the Law (Open Hearing) · Room 226, Dirksen Senate Office Building
What's On The Pod
Some new podcast episodes
The Cognitive Revolution — Milliseconds to Match: Criteo's AdTech AI & the Future of Commerce w/ Diarmuid Gill & Liva Ralaivola