TexTak
LEARN · 01

The History of AI

From a 1956 summer workshop to systems that pass the bar exam — artificial intelligence has been promised, abandoned, reborn, and revolutionized across eight decades.

This isn't a comprehensive academic history. It's the minimum viable context you need to understand why AI is where it is today, and what patterns are likely to repeat.

Timeline — 1950 to Present

Image: Diagram of the Turing Test (1950, diagram)

The Turing Test

Alan Turing proposes the Imitation Game — can a machine's responses be indistinguishable from a human's? The paper 'Computing Machinery and Intelligence' frames the question that defines the field for the next 75 years.

Defined the question that still drives the field 75 years later.

Image: Wikimedia Commons, Public Domain

Image: ELIZA conversation transcript (1966, screenshot)

ELIZA Speaks

Proved humans project intelligence onto machines — a dynamic that shapes AI adoption today.

Image: Deep Blue computer hardware at the Computer History Museum (1997, photo)

Deep Blue Defeats Kasparov

The first time a machine defeated a reigning world chess champion in a full match.

Image: Early GAN-generated samples from Goodfellow's 2014 paper (2014, AI-generated)

First GAN Images

The architecture that first made realistic AI image generation possible. Nearly every image generator before the diffusion era descends from this paper.

Image: Go board — the game AlphaGo mastered (2016, photo)

AlphaGo's Move 37

The moment AI appeared to show not just computation but insight: a move human experts would almost never have played.

Image: The Transformer architecture diagram (2017, diagram)

Attention Is All You Need

The most consequential paper in modern AI. GPT, BERT, LLaMA, Claude — all Transformers.

Image: GPT-3 playground interface (2020, AI-generated)

GPT-3 Writes Code, Poetry, and Lies

Proved that scale produces capabilities. Also proved that scale produces confident hallucination.

Image: AI-generated image of an astronaut reading a book (2021, AI-generated)

DALL-E: Text to Image

The moment 'AI art' entered public consciousness. Copyright law still hasn't caught up.

Image: Early Stable Diffusion generated images (2022, AI-generated)

Stable Diffusion Goes Open Source

Open source changed the power dynamic. Image generation went from lab curiosity to global tool in weeks.

Image: ChatGPT conversation interface (2022, screenshot)

ChatGPT: 100M Users in 60 Days

The inflection point. AI stopped being a technology story and became a human story.

Image: Multiple frontier AI models competing (2023, diagram)

The Model Race Begins

Competition drives capability. The question shifts from 'can AI do this?' to 'which AI should do this?'

Image: AI agents using tools and reasoning (2024, AI-generated)

Agents and Reasoning

The year AI went from answering questions to taking actions.

Image: AI-generated content and the trust crisis (2025, AI-generated)

AI Everywhere, Trust Nowhere

The year the information environment permanently changed. Seeing is no longer believing.

Image: Forecast-first journalism and calibration (2026, diagram)

The Forecast Era

Where we are now. The question isn't what happened — it's what happens next.

1943–1956

The Genesis

1943

McCulloch & Pitts publish a mathematical model of neural networks

The first formal description of how neurons could compute logic — laying the theoretical foundation nearly 80 years before ChatGPT.
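Their 1943 idea is simple enough to write down directly: a neuron fires when the weighted sum of its binary inputs reaches a threshold, and with the right weights one neuron computes a logic gate. A minimal sketch (function names are ours, not the paper's):

```python
# A McCulloch-Pitts neuron: binary inputs, fixed weights, and a
# threshold. The neuron "fires" (returns 1) when the weighted sum
# of its inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Choosing weights and a threshold turns the neuron into a logic gate.
def logic_and(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def logic_or(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)
```

Wire enough of these together and, in principle, you can compute anything a logic circuit can — that was the paper's point.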

1950

Turing publishes 'Computing Machinery and Intelligence'

The paper that asked 'Can machines think?' and proposed the Imitation Game (later called the Turing Test) as a way to answer it.

1956

The Dartmouth Workshop coins 'Artificial Intelligence'

John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester convene the summer workshop that names the field. Their proposal predicted that 'every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.' They expected it to take one summer.

PATTERN

The founders of AI were brilliant but wildly overconfident about timelines. This pattern repeats in every era.

1957–1974

The Golden Years

1957

Frank Rosenblatt builds the Perceptron

The first hardware neural network. The New York Times reported it would 'be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.' It could classify simple patterns.
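The learning rule itself fits in a dozen lines. A modern paraphrase of the perceptron rule (not Rosenblatt's hardware): whenever the current weights misclassify an example, nudge them toward the correct label. Here it learns the simple OR pattern:

```python
# The perceptron learning rule, paraphrased: whenever the current
# weights misclassify an example, shift them toward the correct label.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The same rule cannot learn XOR — the limitation Minsky and Papert made famous in 1969, which helped push neural networks out of favor.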

1966

ELIZA simulates a therapist

Joseph Weizenbaum's chatbot at MIT uses pattern matching to mimic a Rogerian therapist. Users become emotionally attached to it — the first demonstration that humans anthropomorphize AI.
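The core trick is small: match the input against a ranked list of patterns, reflect pronouns in the captured text, and slot it into a canned response. A toy sketch in the same spirit (these rules are illustrative; Weizenbaum's script was much larger):

```python
import re

# ELIZA-style pattern matching: ranked (pattern, template) pairs,
# with pronoun "reflection" applied to whatever the pattern captures.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),           # fallback rule
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

No understanding anywhere — just string substitution. That users confided in it anyway is the point of the entry above.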

1970

Shakey the Robot navigates a room

The first robot to reason about its own actions. Shakey combined vision, movement, and problem-solving. It moved at about 2 meters per hour.

PATTERN

Early AI was funded by defense money and fueled by hype. When results didn't match promises, the money dried up.

1974–1980

The First AI Winter

1973

The Lighthill Report declares AI a failure

British mathematician James Lighthill's report to the UK Science Research Council concludes that AI had failed to achieve its 'grandiose objectives.' Funding collapses across the UK.

1974

US and UK governments cut AI funding

DARPA slashes AI budgets. UK follows. Research continues in universities but the commercial promise evaporates.

PATTERN

AI winters happen when expectations outpace capabilities. The pattern: promise → fund → underdeliver → defund. It will happen again.

1980–1987

Expert Systems Boom

1980

Expert systems enter industry

Rule-based systems like XCON (used by DEC to configure computers) prove that AI can deliver commercial value in narrow domains.

1986

Backpropagation revives neural networks

Rumelhart, Hinton, and Williams publish the paper that makes training multi-layer networks practical. Variants of the idea had circulated for years, but this paper makes it stick — and the technique becomes the foundation of modern deep learning.
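The mechanism is the chain rule applied layer by layer, from the loss backwards. A one-hidden-unit sketch (our toy example, not the paper's network):

```python
import math

# Backpropagation on the smallest possible "network":
#   y = w2 * sigmoid(w1 * x),   loss = (y - target) ** 2
# The chain rule runs from the loss backwards, one layer at a time,
# reusing the activations computed on the forward pass.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)          # hidden activation
    return h, w2 * h             # (activation, network output)

def loss(x, target, w1, w2):
    _, y = forward(x, w1, w2)
    return (y - target) ** 2

def backprop(x, target, w1, w2):
    h, y = forward(x, w1, w2)
    d_y = 2 * (y - target)        # dL/dy at the output
    d_w2 = d_y * h                # gradient for the output weight
    d_h = d_y * w2                # error flowing back through w2
    d_w1 = d_h * h * (1 - h) * x  # sigmoid'(z) = h * (1 - h)
    return d_w1, d_w2
```

These gradients agree with a finite-difference check (perturb a weight, watch the loss move), and one backward pass yields the gradient for every weight at once — which is exactly why the trick scales to networks with billions of weights.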

PATTERN

Expert systems proved AI could work in production. But they were brittle — every edge case needed a human-written rule.

1987–1993

The Second AI Winter

1987

Expert systems market collapses

Companies that spent millions on AI systems find them expensive to maintain and limited in scope. The specialized hardware market (Lisp machines) dies.

1988

'AI' becomes a dirty word in grant applications

Researchers rebrand their work as 'machine learning,' 'computational intelligence,' or 'knowledge systems' to avoid the stigma.

PATTERN

When AI researchers renamed their field to avoid the label, it was the clearest signal of how badly overpromising had damaged credibility.

1997–2011

The Quiet Revolution

1997

Deep Blue defeats Kasparov at chess

IBM's chess computer wins a six-game match against the world champion. It used brute-force search, not learning — but it proved machines could outperform humans at complex tasks.
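Brute force here means game-tree search: assume each side plays its best move and evaluate every line to a fixed depth. Deep Blue did this with alpha-beta pruning and a handcrafted evaluation function at roughly 200 million positions per second; the bare idea fits in a few lines (toy tree, illustrative numbers):

```python
# Minimax in miniature: recursively score a game tree, assuming the
# maximizing player and the minimizing opponent both play optimally.
# Leaves are numbers (a position's evaluation); interior nodes are
# lists of the moves available from that position.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node                       # leaf: static evaluation
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves; after each, the opponent picks a reply.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, maximizing=True)
```

The first branch is best: the opponent can hold it to 3, but can hold the flashier branches to 2 and 0. No learning is involved — which is why the entry above calls this search, not intelligence.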

2006

Hinton popularizes 'deep learning'

Geoffrey Hinton's group at Toronto shows that deep neural networks can be trained effectively layer by layer. The work reignites neural network research and popularizes the term 'deep learning.'

2011

Watson wins Jeopardy!

IBM's Watson defeats champions Ken Jennings and Brad Rutter. The system combined NLP, knowledge retrieval, and probabilistic reasoning — a preview of what LLMs would do at scale.

PATTERN

The real progress happened when nobody was watching. Statistical methods and compute growth created the conditions for the coming explosion.

2012–2022

The Deep Learning Era

2012

AlexNet wins ImageNet by a landslide

Krizhevsky, Sutskever, and Hinton's deep neural network crushes the competition in image recognition. This is the moment deep learning becomes undeniable.

2014

GANs create synthetic images

Ian Goodfellow invents Generative Adversarial Networks — two neural networks competing against each other to generate increasingly realistic images.

2017

'Attention Is All You Need' introduces the Transformer

Google researchers publish the architecture that powers GPT, BERT, and every modern LLM. The key insight: attention mechanisms can replace recurrence entirely, enabling massive parallelization.
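The key operation is scaled dot-product attention: every query scores every key, the scores become softmax weights, and each output is a weighted sum of the values. Because all positions are compared in one step, the whole sequence can be processed in parallel. A minimal pure-Python sketch (single head, no learned projections):

```python
import math

# Scaled dot-product attention, the Transformer's core operation.
def softmax(xs):
    m = max(xs)                          # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(queries[0])                  # key dimension, for scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]         # how well q matches each key
        weights = softmax(scores)        # scores -> mixing weights
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that strongly matches one key pulls out (almost exclusively) that key's value — the mechanism by which a word attends to the words that matter to it.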

2020

GPT-3 demonstrates emergent capabilities

OpenAI's 175-billion-parameter model shows that scale alone can produce capabilities nobody explicitly programmed — writing code, translating languages, and reasoning about novel problems.

2022

ChatGPT reaches 100 million users in 2 months

The fastest-growing consumer application in history. AI goes from research curiosity to dinner table conversation overnight.

PATTERN

The Transformer paper is the most consequential publication in modern AI. Everything since 2017 — GPT, BERT, LLaMA, Claude — is a descendant of that architecture.

2023–Present

The Frontier Era

2023

GPT-4, Claude, Gemini — multimodal frontier models arrive

Models that see images, write code, reason through problems, and pass professional exams. The capability ceiling keeps rising.

2024

Open-source models close the gap

Meta's LLaMA, Mistral, and others approach frontier performance at a fraction of the cost. The moat question becomes central.

2025

AI agents begin enterprise deployment

Autonomous systems that can browse the web, write and execute code, manage workflows, and operate tools with minimal human oversight.

2026

You are here

Regulatory frameworks are taking shape. AI-generated content exceeds human-created content online. The labor market is shifting. And nobody knows what happens next — which is why forecasting matters.

PATTERN

We're inside the inflection point. The pace of change makes prediction harder and more necessary at the same time. That's why TexTak exists.

Why History Matters for Forecasting

Every AI era follows the same arc: breakthrough → hype → overinvestment → disappointment → quiet progress → next breakthrough. Understanding this cycle doesn't tell you what will happen, but it tells you what to watch for — and what to be skeptical of.

TexTak's forecast model tracks these patterns explicitly. When coverage of a technology follows the same trajectory as past hype cycles, that's a signal. When it diverges, that's a different signal.

NEXT GUIDE
The People Behind AI →