In late 2021, roughly a year before ChatGPT shipped, a developer writing under the name Roganov published an essay in Russian titled "What we should be afraid of in AI." The English translation hit Hacker News this week with a score too low to trend but a thesis that deserves serious attention: the actual dangers of artificial intelligence have nothing to do with autonomous weapons or consciousness emerging from neural networks. They're about responsibility abdication, cargo-cult adoption, and handing decision-making authority to systems that fundamentally don't understand what they're optimizing for.
The Fundamental Misnomer That Started Everything
Roganov argues the trouble began in 1945, when military engineers building ENIAC decided to call their machines "electronic brains." Here's the problem with that framing: we still don't understand how actual brains work. Not even close. We know neurons form connections, that electrical activity accompanies thought, that memories get stored somewhere in there, but nobody can explain the mechanism. Some people suffer a head injury and lose their identity; others have half their brain surgically removed and function surprisingly well. The author draws a sharp distinction between understanding something the way we understand a Heathkit tube radio, where every component has a known purpose, and understanding the brain, where we're still operating on hunches.
We're Outsourcing Choice Before Thinking
The essay's most prescient section tackles how we've handed artificial intelligence our emotions and our ability to choose. Every scroll through TikTok, every tap of a "like" button on Instagram, every passive consumption of algorithmically selected content represents a transfer of what was once human decision-making to a machine. Roganov describes the contrast perfectly: a human storyteller reads the room, pivots when a joke falls flat, remembers that Linda doesn't laugh at unicorn jokes but loves cats. The Algorithm has no such flexibility; it optimizes for time-on-site, not your actual wellbeing. Three hours later you're knee-deep in roadkill videos and couldn't explain how you got there.
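To make that loop concrete, here's a minimal sketch (mine, not Roganov's; every name in it is hypothetical) of the greedy ranking step at the heart of an engagement-driven feed. Note what the objective function contains, and what it doesn't.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_seconds: float  # output of an engagement model

def next_video(candidates: list[Video]) -> Video:
    # The feed's entire objective: maximize expected time-on-site.
    # There is no term anywhere for "is this good for the viewer?"
    return max(candidates, key=lambda v: v.predicted_watch_seconds)

feed = [
    Video("Cat compilation", 45.0),
    Video("Roadkill dashcam", 170.0),  # grim, but hard to look away from
]
print(next_video(feed).title)  # -> "Roadkill dashcam"
```

Run in a loop, with each choice feeding the engagement model fresh data, that single `max()` is enough to produce the three-hour drift the essay describes.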
The Accountability Shell Game
Here's where the essay gets uncomfortable. Roganov points out that AI lets people dodge responsibility for their actions, and organizations are exploiting this at scale. "We got the market data from this model, but it was inaccurate, so our sales tanked." "We couldn't properly moderate internet posts, and our entire social network filled up with drug propaganda." These aren't technical failures; they're professional failures being laundered through technology. A computer won't answer accusations. You can always claim you need to retrain the model. The punchline: these failures happen only because humans chose to build and deploy those systems in the first place.
Corporate AI Adoption: Top-Down Mandates Meet Bottom-Up Cargo Culting
Roganov adds a 2026 update that cuts deep: corporate AI adoption now runs as "top-down mandate stacked on bottom-up cargo-culting." Juniors with no programming background are handed the keys and told to "automate everything." Codebases drift into a level of complexity no human reads end-to-end. The cleanup consulting market that eventually rescues these disasters will look a lot like the security-breach recovery industry of the 2010s. This isn't speculation; it's already happening in enterprise shops across every sector.
What AI Actually Can't Do
The essay makes a crucial distinction: AI makes decisions faster, not better. Within a defined dataset and strict rules, a well-trained network finds answers more efficiently than a human. Processing arrays with thousands of parameters? That's what neural networks excel at. But creating something new? Originating ideas that didn't exist before? That remains an exclusively human capability, and Roganov is blunt: "An algorithm that performs specific actions to achieve an end result cannot one fine day say to itself: 'Screw all of this! I'm moving out to a little village in the countryside.'" We don't know where creative thought comes from. Neurosurgeons don't know. Psychologists don't know. It just works, and we should leave it at that.
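A toy illustration of that boundary (again my sketch, not the essay's): however well a classifier performs, its answer space is frozen at training time. It can pick among the options it was given faster than any human, but it cannot invent an option that was never on the list.

```python
class NearestLabelClassifier:
    """1-nearest-neighbor on a single feature: fast, rule-bound decisions."""

    def fit(self, xs: list[float], labels: list[str]) -> None:
        self.points = list(zip(xs, labels))

    def predict(self, x: float) -> str:
        # Efficient within its defined dataset and strict rules, but the
        # output can only ever be a label that appeared during fit().
        _, label = min(self.points, key=lambda point: abs(point[0] - x))
        return label

clf = NearestLabelClassifier()
clf.fit([0.1, 0.9], ["cat video", "unicorn joke"])
print(clf.predict(0.3))  # -> "cat video"; "move to the countryside" is unreachable
```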
Key Takeaways
- AI is an imitation of human behavior, not actual intelligence, a critical distinction the industry conveniently blurs
- The real danger isn't sentient machines but humans using AI to avoid accountability
- Corporate adoption driven by mandates + inexperience creates unmaintainable complexity
- We've already outsourced emotional and lifestyle choices to recommendation algorithms
- Understanding what AI can't do is more important than hyping what it can
The Bottom Line
Five years later, Roganov's warnings look less like paranoia and more like a roadmap of failures we're actively choosing to repeat. The tech industry's favorite magic trick—blaming the algorithm when humans made bad decisions—isn't going away. If anything, it's scaling faster than anyone in 2021 imagined possible.