What is Artificial Intelligence?
A clear, jargon-free introduction to AI and why it matters right now.
Artificial Intelligence (AI) is software that can perform tasks that normally require human intelligence — things like understanding language, recognizing images, making decisions, and generating new content.
But here's the thing most people get wrong: AI isn't new. It's been around since the 1950s. What is new is how shockingly useful it's become — practically overnight.
Watch: 3Blue1Brown — But what is a Neural Network? (the visual foundation for understanding AI)
AI is not a sentient being. It's a tool — like a calculator for language. It doesn't "think" or "want" anything. It predicts the most likely helpful response based on patterns in data.
A Brief Timeline
- 1950s: Alan Turing asks "Can machines think?" — the Turing Test is born
- 1960s-80s: Expert systems — rules-based AI that could answer narrow questions
- 1990s-2000s: Machine learning takes off — AI learns from data instead of rules
- 2010s: Deep learning revolution — neural networks with many layers
- 2020s: Large Language Models (ChatGPT, Claude, Gemini) — AI that truly converses
Notice the pattern: each era built on the last. We didn't jump from nothing to ChatGPT. It took 70 years of compounding breakthroughs.
The AI You Use Today
At its core, modern AI is a prediction engine. When you type a message to ChatGPT or Claude, the AI is predicting the most helpful next words based on patterns it learned from enormous amounts of text.
Think of it like autocomplete on steroids. Your phone suggests the next word; AI suggests the next paragraph — and it's remarkably good at it.
You type: "Write me a professional email declining a meeting"
The AI doesn't understand what a meeting is or why you're declining. It has seen millions of polite decline emails in its training data, so it predicts what a helpful decline email looks like — word by word, sentence by sentence.
The result? A perfectly professional email in 3 seconds that would have taken you 10 minutes to write.
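To make "autocomplete on steroids" concrete, here is a deliberately tiny sketch: a word-pair (bigram) counter that predicts the next word from a made-up three-sentence corpus. Real language models use neural networks trained on billions of documents, not simple counts, and the corpus and function names here are invented for illustration — but the core move is the same: predict the most likely continuation.

```python
# Toy illustration of next-word prediction (NOT how real LLMs work
# internally -- they use neural networks, not raw counts).
from collections import Counter, defaultdict

# A made-up mini "training corpus" of polite email openers.
corpus = (
    "thank you for the invitation . "
    "thank you for thinking of me . "
    "thank you for your email ."
)

# Count which word follows each word.
following = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("thank"))  # -> "you"
print(predict_next("you"))    # -> "for"
```

Notice that the model never "understands" gratitude; it has simply seen "you" follow "thank" three times. Scale that idea up enormously and you get the word-by-word prediction described above.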
Why Now?
Three things converged, creating a perfect storm:
1. Data — The internet created trillions of pages of text to learn from
2. Compute — GPUs became powerful enough to train massive models
3. Algorithms — The Transformer architecture (2017) was a breakthrough
Without all three, none of this works. Remove the data, and the AI has nothing to learn from. Remove the compute, and training takes centuries. Remove the algorithm, and the AI can't make sense of what it reads.
You don't need to understand how AI works internally to use it well. But knowing it's a prediction engine helps you write better prompts. If you give it vague input, it predicts vague output. Give it specific input, and it predicts specific, useful output. You'll master this in Wave 2.
Key Takeaway
AI is a tool, not magic. It has incredible strengths (speed, consistency, breadth of knowledge) and real limitations (it can be wrong, it can't truly "understand," it reflects biases in training data). Throughout this course, you'll learn to leverage the strengths and guard against the weaknesses.
AI can sound extremely confident while being completely wrong. This is called hallucination — and it's the single most important limitation to understand. Never blindly trust AI output for facts, citations, or numbers. Always verify. We'll cover this in depth in Lesson 6.
Exercises
1. What is the best way to think about modern AI like ChatGPT or Claude?
2. Which breakthrough in 2017 enabled the current generation of AI?
3. In your own words, explain to a friend what AI is and isn't. Keep it to 2-3 sentences.
Hint: Focus on the "prediction engine" concept and mention at least one limitation.