
In Depth

What The F*&K Even Is AI?!

From a thought experiment in 1950 to the thing running your meetings in 2026. Here’s the actual story.

It started as a question

In 1950, a British mathematician named Alan Turing asked a simple question: can a machine think?

He didn’t build one. He just asked the question. And that question — written in a paper called “Computing Machinery and Intelligence” — is where this whole thing starts.

Turing proposed a test. Put a human and a machine in separate rooms. Have a judge ask both of them questions over text. If the judge can’t tell which is which from the answers alone, the machine passes. He called it the Imitation Game. Everyone else eventually called it the Turing Test.

Nobody passed it for decades. But the question didn’t go away.

The first wave — and the first crash (1950s–1970s)

The 1950s and ’60s were electric. Researchers built programs that could play chess, solve algebra problems, and hold basic conversations. Governments poured money in. Headlines declared that thinking machines were ten years away.

They weren’t.

The programs were brittle. They could only do the exact thing they were built to do. Asked anything outside that narrow lane, they fell apart immediately. By the mid-1970s, funding dried up. The hype collapsed. Researchers called it the AI Winter — the first of two.

But a few people kept working.

Expert systems and the second crash (1980s–1990s)

The 1980s brought a different approach: expert systems. Instead of teaching machines to learn, engineers hand-coded human expertise directly into rules. If the patient has these symptoms, consider these diagnoses. If the loan applicant has this profile, approve or reject.

It worked. Businesses spent billions on these systems. For a while, it felt like AI had finally arrived.

Then the maintenance costs hit. Every time the world changed, someone had to update every rule by hand. The systems couldn’t adapt. They couldn’t learn. By the early 1990s, the market for expert systems collapsed. Second winter.

The quiet revolution nobody noticed (1986–2012)

While the expert systems industry was burning down, something else was happening in university basements.

A small group of researchers — Geoffrey Hinton, Yann LeCun, Yoshua Bengio — were working on neural networks. The idea had been around since the 1940s: build software that loosely mimics how neurons connect in a brain. Layer simple pattern-recognizers on top of each other. Show them enough examples. Let them figure it out.

It kept not quite working. The computers weren’t fast enough. The datasets weren’t big enough. Funding was impossible to get. These three researchers became known, with some sarcasm, as the “deep learning mafia” — true believers in an approach most of the field had given up on.

Then in 2012, everything changed.

A neural network called AlexNet entered the annual ImageNet image-recognition competition. It didn’t just win — it destroyed the competition, cutting the error rate nearly in half compared to the next best approach.

The field stopped laughing.

The transformer changes everything (2017)

In 2017, a team at Google published a paper with a deliberately understated title: “Attention Is All You Need.”

It described a new architecture called the Transformer. The core idea: instead of reading text word by word in sequence, the model could look at all the words at once and figure out which ones matter most in relation to each other.
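
If you want to see the idea rather than just read about it, here’s a minimal sketch of that mechanism, scaled dot-product self-attention, in plain Python with NumPy. The function name, the tiny sizes, and the random inputs are ours, for illustration; the real thing uses learned weights at vastly larger scale.

```python
import numpy as np

def self_attention(x, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of word vectors.

    x: array of shape (seq_len, d_model), one vector per word,
    all visible to the model at once.
    """
    q = x @ W_q  # what each word is looking for
    k = x @ W_k  # what each word offers
    v = x @ W_v  # what each word actually carries
    scores = q @ k.T / np.sqrt(k.shape[-1])       # every word scored against every other word
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: relevance weights
    return weights @ v  # each word's output: a relevance-weighted mix of all words

# Toy run: 4 "words", 8-dimensional vectors, random weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, W_q, W_k, W_v).shape)  # (4, 8)
```

The point to notice: every word attends to every other word in one shot. Nothing is read “in order.”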

This sounds technical. The consequence wasn’t.

Transformers scaled. Throw more data at them, they get better. Throw more computing power at them, they get better. There was no obvious ceiling. For the first time, making AI smarter was mostly a question of resources — and resources were suddenly available.
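
That pattern has a name: scaling laws. Empirically, a model’s error tends to fall smoothly, roughly as a power law, as parameters, data, and compute grow. Here’s a toy illustration of what that curve looks like; the constants are invented for the example, not measured values.

```python
# Toy scaling curve: error falls as a power law in model size.
# The constants a and alpha below are made up for illustration.
a, alpha = 10.0, 0.07

for n_params in [1e8, 1e9, 1e10, 1e11, 1e12]:
    error = a * n_params ** -alpha
    print(f"{n_params:.0e} params -> error {error:.2f}")
```

Each 10x in size buys a predictable improvement. Smaller gains each time, but no wall.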

OpenAI, Google, Anthropic, Meta, and others started building models that were orders of magnitude larger than anything that had existed before.

The moment the world noticed (November 2022)

On November 30, 2022, OpenAI released ChatGPT.

It wasn’t the most technically advanced thing they’d built. GPT-4 came later. But ChatGPT was the first time anyone could just open a browser, type something, and have a conversation with one of these models.

One million users signed up in five days. One hundred million in two months. It became the fastest-growing consumer application in history.

Every major technology company immediately went into emergency mode. Google declared a “code red.” Microsoft invested $10 billion into OpenAI and shipped Copilot into everything. Meta open-sourced their models. Anthropic launched Claude. The race was on.

Where we actually are right now (2025–2026)

Here’s what exists today, stated plainly:

Large language models — the technology behind ChatGPT, Claude, Gemini, and the others — are software systems trained on enormous amounts of text. They predict what words should come next given what came before. That’s the core mechanism. The emergent behavior from doing that at scale is what surprises everyone.
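
To make “predict what comes next” concrete, here’s a toy sketch: a bigram model that counts which word follows which in a tiny corpus, then samples from those counts. Real models replace the counting with billions of learned parameters, but the loop has the same shape: context in, probabilities over the next word out, pick one, repeat. The corpus and names here are made up for illustration.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:  # dead end: this word was never seen with a follower
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate: start somewhere, repeatedly predict what comes next.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```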

These models can write, summarize, translate, reason through problems, write and debug code, analyze documents, answer questions, and hold coherent conversations across almost any domain. The best ones are genuinely useful for knowledge work in ways that nothing before them has been.

They also hallucinate — they produce confident-sounding statements that are simply wrong. They can be manipulated. They don’t actually “know” anything in the way a human does. They’re tools, not minds.

The current frontier — agents — takes these models one step further. Instead of answering a question, an agent can take actions: browse the web, write and run code, send emails, interact with software. The question is no longer “can AI tell me how to do this” but “can AI do this.”

The answer, increasingly, is yes.
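
Under the hood, most agents are a loop: the model proposes an action, software executes it, and the result feeds back in as context for the next step. Here’s a minimal sketch of that loop. The call_model function and the toy tools are stand-ins we invented so the example runs on its own; a real agent would call an actual model API and real tools.

```python
def call_model(task, history):
    """Stand-in for a real LLM API call: returns the next action as (tool, argument).

    A real implementation would send the task and history to a model and
    parse its reply. Here one run is hard-coded so the sketch is self-contained.
    """
    script = [
        ("search", "Q3 revenue"),
        ("calculate", "1.07 * 4200"),
        ("finish", "Q3 revenue grew about 7%, to roughly 4494."),
    ]
    return script[len(history)]

TOOLS = {
    "search": lambda q: f"Top result for {q!r}: last quarter's figure was 4200.",
    "calculate": lambda expr: str(eval(expr)),  # toy only; never eval untrusted input
}

def run_agent(task):
    history = []
    while True:
        tool, arg = call_model(task, history)  # the model decides the next action
        if tool == "finish":
            return arg                         # the model says the task is done
        result = TOOLS[tool](arg)              # software actually executes it
        history.append((tool, arg, result))    # the result feeds back as context

print(run_agent("Summarize Q3 revenue growth"))
```

Swap the hard-coded script for a real model call and the toy tools for real ones, and you have the rough skeleton most agents are built on.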

Where it’s credibly going

Predicting AI is a good way to embarrass yourself. The field has humiliated too many confident forecasters.

But some things are clear enough to say out loud.

Models will continue getting more capable. The scaling laws that have held for the last decade haven’t broken yet, and there’s no obvious reason they will in the near term.

The cost of running these models will continue to fall dramatically. Things that required expensive API calls in 2023 will run on a phone in 2026. Things that require a data center today will run on a laptop by 2028.

Agents will become a real part of how organizations operate. Not replacing people, but handling the execution layer of work — the drafting, the research, the routing, the formatting — while humans make the decisions that actually matter.

The teams and individuals who learn to work with these systems now will have a significant advantage over those who wait. Not because AI will take their jobs. Because the people who understand it will do more, faster, with less friction.

That’s the story. From a thought experiment in 1950 to something running your meeting notes in 2026.

Seventy-six years. And it’s still early.

Want more like this?

Get the full AIVIO library

Ongoing briefings, topic packs, and practical assets for operators who want to stay current and move fast.
