Sean Goedecke just dropped what might be the most contrarian take I've seen this week on Hacker News, and honestly? It has merit. In a lengthy essay titled "The Left-Wing Case for AI," he argues that anti-AI sentiment among progressives is partly a cultural reaction to two unrelated events: the 2022 crypto mania and the pro-Donald-Trump push from big tech CEOs in 2024. If the timing had been different, Goedecke contends, we could have had a legitimate pro-AI faction on the left. He's not trying to refute every anti-AI argument—he's doing something more interesting: outlining explicitly left-wing pro-AI arguments that don't require you to abandon your progressive principles.
Disability Rights Are Pro-AI
The strongest argument in Goedecke's piece centers on disability. The left has correctly taken a broad view of acceptable disability aids, often pausing to acknowledge that services like DoorDash, despite their exploitative labor practices, have meaningfully improved life for disabled and chronically ill people who have few alternatives. LLMs fit squarely into this framework. Almost every video online now auto-generates captions. People with brain fog or chronic pain use AI to interact with computers more easily. Neurodivergent folks use ChatGPT to "code-switch" their emails into neurotypical-friendly language. Those with mobility or vision impairments rely heavily on LLM-powered voice controls.

This creates a telling tension in left-wing anti-AI spaces, according to Goedecke: every so often someone asks whether LLMs might help disabled people, and the comments devolve into a dogpile of (often non-disabled) people slamming AI while a handful of disabled users try to explain their actual experiences. One reader named Matt wrote to Goedecke with a poignant observation: "If similar reasoning had been applied to outright reject computers as fascist and unethical in the 80s and onward, my own life would have been quite different, and arguably worse." Matt has just enough usable vision to handwrite, uncomfortably, with his head close to the page, and he credits computers with saving him from doing even more of that.
Medical Advocacy for the Chronically Ill
Goedecke flips another common anti-AI argument on its head. The claim goes: people will take dangerous medical advice from AI instead of trusting their doctor. But "just trust your doctor" is itself somewhat right-wing-coded, and the left has historically been sympathetic to patients who can't or won't rely entirely on the medical establishment. Many doctors are simply not good at handling unusual cases, and if you have one, you often have to advocate fiercely for yourself through extensive research. LLMs are particularly useful here for three reasons: the complex medical questions involved are usually well documented in the literature (good LLM fodder); the patient is motivated enough to verify sources themselves; and the need to convince a doctor to prescribe any treatment acts as a guardrail against going off the deep end. Chronic illness communities have been waging a long guerrilla war against a medical orthodoxy that ignores or dismisses them. Endometriosis is the classic example: a war eventually won after decades of the condition being written off as psychological. LLMs let these patients make cogent arguments and write petitions in the establishment's own language.
Class, Code-Switching, and Professional Power
Goedecke draws on Patrick McKenzie's concept of "dangerous professional" mode—the particular style of communication that signals to bureaucracies you're someone to take seriously. This includes an unemotional register, correct grammar, and explicit references to regulatory or legal options. Unless you've gone through the right educational pipeline, hitting this register is tricky. Go too far and you read as a crank; not enough and you're dismissed. LLMs provide what Goedecke calls a "dangerous professional translation service." You no longer need to match the style yourself—you just need to know it exists, and the AI handles the rest. It can tell you which regulators to contact and what to say. In other words, AI has democratized escalation pathways that were originally designed for narrow professional classes. This is a genuinely redistributive technology in terms of institutional power.
Education as the Great Equalizer
Another correct left-wing position is that education is gatekept by class and status: everyone has roughly equal potential, but some people get far more educational opportunity, which explains uneven downstream outcomes. LLMs now make private tutoring available to every student who wants it. The common rebuttal, that LLMs hallucinate, is weaker than it seems once you compare the alternatives. Goedecke points out that teachers "hallucinate" all the time, citing a 2016 study in which approximately 42% of observed lessons contained mathematical content errors. "I bet that's a higher rate than we'd see from GPT-5.5-Thinking on middle-school mathematics," he writes, though he acknowledges not wanting to draw too many conclusions from one study. The education argument also overlaps with disability: students with ADHD or other conditions are often badly underserved by schools, and LLMs can transform content into whatever format helps a particular student learn best, whether written, audio, quiz, or dialogue.
Key Takeaways
- LLMs are powerful disability aids for neurodivergent people and those with motor or vision impairments
- AI enables patients suffering medical discrimination to research their own conditions and advocate effectively
- Language models remove the communication advantage of wealthy "professional class" backgrounds
- AI gives everyone access to tutoring at least as good as a median-quality teacher
The Bottom Line
Goedecke's piece is a breath of fresh air in a discourse dominated by techno-utopianism or Luddite backlash. The disability and class arguments are genuinely compelling—if you've ever watched someone with chronic illness navigate an indifferent medical system, or seen how "speaking professional" gates access to justice, you know exactly what he's talking about. This isn't about ignoring AI's real harms; it's about refusing to let crypto bros and MAGA tech CEOs define the conversation.