Forcing Students To Use Typewriters To Avoid AI? A College Prof's Experiment
April 19, 2026
D.A.D. today covers 11 stories from 2 sources. What's New, What's Innovative, What's Controversial, What's in the Lab, and What's in Academe.
D.A.D. Joke of the Day: My AI keeps suggesting I "circle back" and "synergize" in emails. I didn't ask for a writing assistant, I asked for an exorcist.
What's New
AI developments from the last 24 hours
Software Company Cuts Cloud Costs 84% by Switching to Dedicated Servers
A Turkish software company cut its monthly cloud costs from $1,432 to $233 by moving from DigitalOcean managed infrastructure to Hetzner dedicated servers. The migration covered 248 GB of MySQL data across 30 databases, 34 websites, GitLab, and live mobile app traffic with zero downtime. The tradeoff: they gained more powerful hardware (48-core AMD EPYC, 256 GB DDR5 RAM) but now manage their own servers rather than using a managed cloud platform.
Why it matters: For teams questioning whether managed cloud premiums are worth it, this case study shows the math can favor dedicated servers dramatically—if you have the ops expertise to handle the migration and ongoing maintenance yourself.
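The headline figure is easy to sanity-check from the numbers in the article; a quick back-of-the-envelope in Python (the monthly figures are from the source, the annualization is ours):

```python
# Reported monthly costs before and after the migration
old_monthly = 1432  # USD/month on DigitalOcean managed infrastructure
new_monthly = 233   # USD/month on Hetzner dedicated servers

savings_pct = (1 - new_monthly / old_monthly) * 100
annual_savings = (old_monthly - new_monthly) * 12

print(f"Monthly cost cut: {savings_pct:.1f}%")  # ~83.7%, i.e. the reported 84%
print(f"Annual savings: ${annual_savings:,}")   # $14,388/year
```

That annual figure is the number to weigh against the cost of the ops time needed to run your own hardware.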
Discuss on Hacker News · Source: isayeter.com
AI Design Tools Split Between Two Competing Philosophies
An analysis of Claude's new design tool argues that AI design tooling is splitting into two philosophies. Figma Make treats the design file as the source of truth; Claude Design treats code as canonical. The author contends that Figma's proprietary file format excluded it from LLM training data, potentially leaving it with what they call a 'pre-agentic system' as AI coding tools mature. This is opinion and analysis—no benchmarks or performance comparisons were provided.
Why it matters: If the thesis holds, teams choosing between AI design tools are also choosing which artifact—design file or code—becomes their system of record.
Discuss on Hacker News · Source: samhenri.gold
Cold War Bombers Used Analog Star Trackers Before GPS Existed
A technical deep-dive explores the Astro Compass navigation system used in B-52 bombers starting in the early 1960s—before GPS or practical digital computers existed. The system used an electromechanical "Angle Computer" that physically modeled the celestial sphere through analog mechanisms to perform trigonometric calculations. It could track three stars simultaneously through a 4-inch glass dome, using photomultiplier tubes for star detection and gyroscope-stabilized platforms, achieving heading accuracy to a tenth of a degree. The system required 19 separate components including ten amplifier/computer units.
Why it matters: This is historical engineering, not AI news—but it's a fascinating look at the analog computing systems that solved navigation problems before digital technology caught up, a reminder that 'compute' once meant physical gears and servos modeling mathematical relationships.
Discuss on Hacker News · Source: righto.com
What's Innovative
Clever new use cases for AI
Markdown Extension Adds Dashboards and Charts, Enters Crowded Field
A developer project called MDV extends standard Markdown with features for building documents, dashboards, and slides—adding YAML configuration blocks, chart/data visualization syntax, styled containers, and auto-generated tables of contents. Community reaction on Hacker News was skeptical: commenters noted that Emacs Org-Mode and pandoc already offer similar capabilities, and a developer from Evidence.dev pointed out they use Stripe's Markdoc for comparable purposes.
Why it matters: This is developer tooling in an already-crowded space—worth watching if your team builds internal dashboards or documentation, but established alternatives like pandoc and Markdoc may be safer bets.
Discuss on Hacker News · Source: github.com
Cornell Instructor Requires Typewriters to Block AI-Assisted Work
A German language instructor at Cornell has required students to complete assignments on manual typewriters since spring 2023, eliminating access to AI tools, online translators, and spellcheckers. The analog approach forces students to think through word choices without digital assistance and removes the temptation to paste work into ChatGPT. Students reportedly found themselves talking more with classmates and experiencing fewer screen-based distractions.
Why it matters: It's a small-scale experiment, but it signals how educators are exploring radical low-tech solutions rather than AI detection tools—a reminder that some workplaces may eventually adopt similar friction-based approaches to ensure human-generated work.
Discuss on Hacker News · Source: sentinelcolorado.com
What's in Academe
New papers on AI and its effects from researchers
Reinforcement Learning Method Could Simplify How AI Models Are Fine-Tuned
Researchers introduced Value Gradient Flow (VGF), a reinforcement learning approach that eliminates the need for separate policy models when training AI systems. Instead of training a dedicated model to decide what actions to take, VGF uses mathematical optimization techniques from optimal transport theory to directly map from baseline behavior to optimal actions. The team claims state-of-the-art results on standard offline RL benchmarks and language model training tasks, though the paper doesn't provide specific performance numbers.
Why it matters: This is foundational ML research—if the claims hold up, it could eventually simplify how companies fine-tune language models to follow instructions or align with preferences, but it's far from production use today.
Why Detailed Prompts Sometimes Fail for Unusual 3D AI Requests
Researchers identified a significant failure mode in text-to-3D AI models: when asked to generate unusual or out-of-distribution shapes, these systems often become insensitive to prompt changes—a problem the paper calls 'latent sink traps.' The finding suggests that typing more detailed prompts may not help when you're asking for something the model hasn't seen before. The researchers propose a workaround that separates the model's geometric capabilities from its text understanding, potentially enabling better manipulation of unconventional 3D shapes.
Why it matters: For teams using AI to generate 3D assets for product design, gaming, or architecture, this explains why prompt tweaking sometimes fails—and signals that better methods for creating novel shapes may be coming.
Transformer Redesign Trains AI Models Nearly Twice as Fast
Researchers propose a structural design called Three-Phase Transformer (3PT) that reorganizes how information flows through AI language models. The approach partitions the model's internal data channels into rotating phases—somewhat like three-phase electrical current—with specialized operations for each. In tests at 123 million parameters, 3PT achieved 7.2% better perplexity (a measure of prediction accuracy) while adding just 1,536 parameters, and reached target performance nearly twice as fast during training.
Why it matters: This is foundational AI research, not a product—but efficiency gains at training time could eventually mean cheaper, faster model development if the approach scales to larger systems.
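For readers unfamiliar with the metric cited above: perplexity is the exponential of the average per-token cross-entropy, so lower means the model is less "surprised" by held-out text. A minimal illustration of the standard formula (this is the general definition, not code from the paper):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood of the target tokens)."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Probabilities a model assigned to each correct next token:
probs = [0.25, 0.5, 0.125, 0.25]
print(perplexity(probs))  # 4.0 -- the geometric mean of 1/p per token
```

A 7.2% perplexity improvement at equal parameter count, if it holds at scale, translates directly into better prediction accuracy per training dollar.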
Framework Tackles AI's 'Catastrophic Forgetting' Problem
Researchers have proposed MMOT, a framework for 'online incremental learning'—AI systems that continuously learn from new data without forgetting what they already know. The approach uses optimal transport theory to help models adapt to shifting data patterns while avoiding catastrophic forgetting, where new training erases previous knowledge. The paper claims superior performance on benchmarks but doesn't publish specific numbers or comparisons in its abstract.
Why it matters: This is academic research addressing a real enterprise pain point—AI systems that can update continuously without expensive full retraining—but without published benchmarks, it's too early to assess practical impact.
Simple Training Trick Helps AI Models Actually Look at Images
Researchers have developed V-GIFT, a training technique that makes multimodal AI models better at actually looking at images rather than guessing answers from text patterns. The approach adds simple visual tasks—like identifying rotated images or matching colors—to training data, forcing models to use visual evidence. Adding just 3-10% of these tasks to training consistently improved performance on vision-focused benchmarks across multiple models, without requiring new annotations, architecture changes, or extra training steps.
Why it matters: This addresses a known weakness in AI image analysis—models often 'cheat' by relying on language shortcuts rather than genuinely processing visual information, which matters for any business application where accurate image understanding is critical.
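The general recipe described above—mixing a small fraction of self-supervised visual tasks into the training set—can be sketched roughly. The function name, data layout, and rotation task below are illustrative assumptions, not V-GIFT's actual pipeline:

```python
import random

def mix_in_visual_tasks(train_samples, image_pool, mix_ratio=0.05):
    """Append rotation-identification samples at ~3-10% of dataset size.

    Each auxiliary sample asks the model to name the rotation applied to
    an image -- answerable only from pixels, never from text priors.
    (Illustrative sketch; names and structure are assumptions.)
    """
    n_aux = int(len(train_samples) * mix_ratio)
    aux = []
    for _ in range(n_aux):
        image = random.choice(image_pool)
        angle = random.choice([0, 90, 180, 270])
        aux.append({
            "image": (image, angle),  # stand-in for an actually rotated image
            "question": "By how many degrees is this image rotated?",
            "answer": f"{angle} degrees",
        })
    return train_samples + aux

mixed = mix_in_visual_tasks([{"text": f"sample {i}"} for i in range(100)],
                            ["photo_a", "photo_b"])
print(len(mixed))  # 105: 100 originals + 5 auxiliary rotation tasks
```

The appeal of the approach is that these auxiliary labels come for free from the transformation itself, which is why no new annotations are needed.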