A comprehensive annotated timeline tracing the history of modern AI and deep learning has surfaced on Hacker News, drawing from Cade Metz's book "Genius Makers" to map six decades of neural network research across academia and industry. The resource opens with a framing borrowed from Jürgen Schmidhuber's work: machine learning is fundamentally the science of credit assignment—not just for algorithms, but for ideas. It treats the concept literally, attributing breakthroughs to their rightful predecessors rather than letting popular narratives rewrite history.

The Perceptron Era (1960s–1970s)

Frank Rosenblatt built the Mark I Perceptron at Cornell Aeronautical Laboratory in 1960, the first hardware neural network, before Marvin Minsky and Seymour Papert published their famous critique, "Perceptrons," in 1969. That book proved fundamental limits of single-layer networks, most famously their inability to compute XOR, and contributed to what became known as the First AI Winter. Meanwhile, a young Geoff Hinton began his PhD at the University of Edinburgh in 1971, planting seeds for decades of breakthroughs to come.
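The XOR limitation is easy to see concretely. Here is a toy sketch (mine, not from the timeline): Rosenblatt's perceptron learning rule converges on a linearly separable function like AND, but on XOR it can never reach an error-free epoch, because no single line separates XOR's classes.

```python
def perceptron_train(samples, epochs=100, lr=0.1):
    """Train a single-layer perceptron with Rosenblatt's update rule.
    Returns (weights, bias, converged)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if pred != target:
                # nudge the decision line toward the misclassified point
                w[0] += lr * (target - pred) * x1
                w[1] += lr * (target - pred) * x2
                b += lr * (target - pred)
                errors += 1
        if errors == 0:
            return w, b, True  # a separating line was found
    return w, b, False  # no error-free epoch: not linearly separable

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(perceptron_train(AND)[2])  # True: AND is linearly separable
print(perceptron_train(XOR)[2])  # False: XOR is not
```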

The Backpropagation Revival (1980s–1990s)

The field thawed when David Rumelhart, Geoffrey Hinton, and Ronald Williams published their backpropagation paper in Nature in 1986, showing that networks with hidden layers could learn the internal representations a single-layer perceptron could not. Yann LeCun joined Bell Labs in 1988 and developed LeNet for handwritten digit recognition, one of the first real-world applications of convolutional neural networks. The quiet years that followed saw Yoshua Bengio hired at the University of Montreal (1993) and Hinton founding the Gatsby Computational Neuroscience Unit at UCL (1998), maintaining research momentum while neural networks fell out of fashion.
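The significance of the 1986 result can be illustrated with a toy sketch (my own, not the paper's code): a network with one hidden layer of sigmoid units, trained by backpropagating squared error, reduces its error on XOR, the very function a single perceptron cannot represent. The hidden-layer size, learning rate, and epoch count below are illustrative choices.

```python
import math
import random

random.seed(0)
XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
H = 4  # hidden units (an illustrative choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in XOR)

initial = loss()
lr = 0.5
for _ in range(5000):
    for x, t in XOR:
        h, y = forward(x)
        # output-layer error signal (squared error through the sigmoid)
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            # hidden-layer error signal: output error propagated back through w2
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(initial, loss())  # the error shrinks as training proceeds
```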

The Deep Learning Boom (2010s–2020)

The 2012 ImageNet competition changed everything. AlexNet, built by Alex Krizhevsky, Ilya Sutskever, and Hinton, won with a top-5 error of 15.3 percent against 26.2 percent for the runner-up, signaling that GPU-accelerated deep learning had arrived. Google Brain was founded in 2011; DeepMind (founded by Demis Hassabis, Mustafa Suleyman, and Shane Legg) was acquired by Google in 2014. OpenAI launched in 2015 with backing from Elon Musk and Sam Altman. In 2019, Hinton, LeCun, and Bengio received the Turing Award for their foundational contributions to deep learning.

Key Players Across Major Labs

The timeline documents over fifty researchers across Google, DeepMind, Meta (FAIR), Microsoft, OpenAI, Baidu, Nvidia, and academia—including Ian Goodfellow's invention of GANs in 2014, Fei-Fei Li's creation of ImageNet at Stanford, and Terry Sejnowski's Boltzmann machine work at the Salk Institute. It even tracks corporate movements: Qi Lu leaving Microsoft for Baidu in 2017, or Elon Musk departing OpenAI's board in 2018.

Why This Resource Matters

For anyone building AI systems today, understanding the intellectual lineage matters. The resource serves as a corrective to revisionist histories that credit recent breakthroughs to current tech giants while erasing decades of academic work. It reminds us that deep learning didn't emerge from a vacuum—it was built incrementally by researchers who often struggled against funding droughts and scientific skepticism.

Key Takeaways

  • Six decades of neural network research traced from Rosenblatt's Mark I Perceptron (1960) to AlphaGo-era breakthroughs
  • The 1986 backpropagation paper and 2012 ImageNet competition marked the field's most pivotal inflection points
  • Over fifty researchers across Google, DeepMind, Meta FAIR, OpenAI, and academia documented with proper credit attribution

The Bottom Line

This annotated timeline is essential reading for anyone serious about understanding where AI came from—and what it means for the systems we're building now. Bookmark it.