Backpropagation Step-by-Step

How Neural Networks Learn From Their Mistakes

Difficulty
Intermediate
Duration
18-22 minutes
Prerequisites
Neural networks, Loss functions

What You'll Discover

Learn how neural networks propagate errors backward to improve their predictions

Chain Rule Intuition

How gradients compose through layers so the network knows which weights to adjust.

Backward Pass

Watch error signals flow from output back to input, layer by layer.

Weight Updates

See how each weight gets adjusted based on its contribution to the error.

The Full Picture

How forward + backward passes form the complete training loop.

Key Concepts

Chain Rule

Multiply derivatives to trace blame through layers

Gradient Flow

Error signals propagate backward through the network

Error Signals

Each layer receives feedback about its contribution

Weight Updates

Adjust weights proportional to their error contribution

Layer-by-Layer

Process goes backward: output → hidden → input

Learning Loop

Forward pass → loss → backward pass → update → repeat
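The chain-rule and gradient-flow ideas above can be made concrete with a tiny numeric sketch (a hypothetical one-input, one-hidden-unit, one-output network with made-up weights, not part of the lesson). Multiplying the local derivatives, output back to input, gives each weight its share of the blame:

```python
# Chain rule on a tiny network: h = w1 * x, y = w2 * h (no activations,
# so every local derivative is just a weight or an input).
x, target = 2.0, 1.0
w1, w2 = 0.5, -0.3           # hypothetical weights

h = w1 * x                   # forward: hidden = 1.0
y = w2 * h                   # forward: prediction = -0.3
loss = (y - target) ** 2     # squared error = 1.69

# Backward pass: multiply local derivatives, output -> hidden -> input.
dloss_dy = 2 * (y - target)  # how wrong the output was
dy_dw2 = h                   # local derivative at the output weight
dy_dh = w2                   # how the error signal flows back to the hidden layer
dh_dw1 = x                   # local derivative at the input weight

grad_w2 = dloss_dy * dy_dw2           # chain of 2 factors
grad_w1 = dloss_dy * dy_dh * dh_dw1   # chain of 3 factors

print(round(grad_w1, 4), round(grad_w2, 4))
```

Note that `grad_w1` reuses `dloss_dy * dy_dh` — the error signal arriving at the hidden layer. Deeper networks just keep extending this product, one local derivative per layer.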

Step 1 / 8

What is Backpropagation?

Imagine you're a teacher grading a group project. The final answer is wrong, and you need to figure out who made the mistake and how much each person contributed to the error.

Backpropagation works exactly the same way. When a neural network makes a wrong prediction, it traces the error backward through the network to figure out which weights are to blame and how to fix them.

Here's the training cycle that repeats over and over:

  1. Forward Pass — Feed data through the network, get a prediction
  2. Compute the Error — How wrong was the prediction?
  3. Backward Pass — Trace the error back to assign blame to each weight
  4. Update Weights — Adjust weights to reduce the error

This "learn from your mistakes" cycle is how every neural network trains — from simple classifiers to ChatGPT.
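The four-step cycle above can be sketched in a few lines of Python. This is a minimal illustration with a made-up one-weight "network" (y = w · x) learning the target function y = 3x, not the lesson's own code:

```python
# The training cycle on the simplest possible "network": y = w * x.
# The target is y = 3x, so w should learn to approach 3.0.
x, target = 2.0, 6.0
w = 0.0                            # start with a bad weight
lr = 0.1                           # learning rate

for step in range(50):
    y = w * x                      # 1. forward pass: make a prediction
    loss = (y - target) ** 2       # 2. compute the error
    grad = 2 * (y - target) * x    # 3. backward pass: blame assigned to w
    w -= lr * grad                 # 4. update: nudge w against the gradient

print(round(w, 3))  # close to 3.0
```

Real networks have millions of weights instead of one, but every training step follows this same forward → error → backward → update loop.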

A Simple Neural Network

[Diagram: a small network with Input, Hidden 1, and Output layers; connection weights of +0.30, +0.50, -0.20, +0.40, +0.60, and -0.30 shown on the edges. Legend: positive weights are excitatory, negative weights are inhibitory, and line thickness indicates connection strength.]

The Backpropagation Cycle

| Step | Direction | What Happens |
| --- | --- | --- |
| 1. Forward Pass | → Left to Right | Data flows through; make a prediction |
| 2. Compute Error | ⊗ At Output | Compare prediction vs. target |
| 3. Backward Pass | ← Right to Left | Trace blame for the error |
| 4. Update Weights | ↻ Everywhere | Adjust weights to reduce error |