Loss Functions & Metrics

How Neural Networks Measure "Wrongness"

Difficulty: Beginner
Duration: 10-12 minutes
Prerequisites: Basic neural networks

What You'll Discover

Learn how neural networks know when they're wrong

MSE for Numbers

How to measure prediction errors when predicting continuous values like prices or temperatures.

Cross-Entropy for Categories

The go-to loss for classification - why it punishes confident wrong answers harshly.

Interactive Comparison

See side-by-side how different loss functions respond to the same errors.

When to Use What

A simple decision guide for picking the right loss function for your task.

Key Concepts

MSE (Mean Squared Error)

Squares the errors, so big mistakes are punished far more than small ones

MAE (Mean Absolute Error)

Linear penalty - every unit of error counts equally

Binary Cross-Entropy

For yes/no classification with probabilities

Categorical Cross-Entropy

For multi-class: pick one from many options

Loss vs Metrics

Training minimizes the loss; evaluation reports human-readable metrics like accuracy

Matching Activations

Sigmoid→BCE, Softmax→CCE, Linear→MSE
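These pairings can be sketched in a few lines of NumPy. This is a minimal illustration, not a particular library's API: the function names (`sigmoid`, `softmax`, `bce`, `cce`, `mse`) and the example numbers are my own.

```python
import numpy as np

def sigmoid(z):
    # Squashes a score into a probability between 0 and 1
    return 1 / (1 + np.exp(-z))

def softmax(z):
    # Turns a vector of scores into probabilities that sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def bce(y_true, p):
    # Binary cross-entropy: pairs with a sigmoid output
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def cce(y_true, p):
    # Categorical cross-entropy: pairs with a softmax output (one-hot target)
    return -np.sum(y_true * np.log(p))

def mse(y_true, y_pred):
    # Mean squared error: pairs with a plain linear output
    return np.mean((y_true - y_pred) ** 2)

# Sigmoid → BCE: confident and correct → small loss
p = sigmoid(2.0)                      # ≈ 0.88 probability of "yes"
print(bce(1.0, p))

# Softmax → CCE: three classes, true class is the first one
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(cce(np.array([1, 0, 0]), probs))

# Linear → MSE: continuous target
print(mse(np.array([5.0]), np.array([3.0])))  # (5-3)^2 = 4.0
```

Notice that each loss assumes its matching activation: BCE and CCE take probabilities (so they would misbehave on raw scores), while MSE works directly on unbounded numbers.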

Step 1 / 7

What is a Loss Function?

A loss function is like a scorecard that tells your neural network how wrong it is.

Think of it like playing darts:

  • Hit the bullseye → Loss = 0 (perfect!)
  • Miss by a little → Small loss
  • Miss by a lot → Big loss

The network's job is to make the loss as small as possible.

Try it! Drag the slider to see how loss changes.

Example: target = 5, prediction = 3.0 → MSE = 4.0, MAE = 2.0

📊 Notice the difference!
MSE (4.0) is 2× larger than MAE (2.0). MSE punishes big errors more!
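A quick check of those numbers in Python (the target and prediction values come from the example above):

```python
# Target = 5, prediction = 3 → error of 2
target, prediction = 5.0, 3.0
error = target - prediction

mse = error ** 2   # squared penalty: 2^2 = 4.0
mae = abs(error)   # linear penalty:  |2| = 2.0
print(mse, mae)    # → 4.0 2.0
```

Try an error of 4 instead: MAE doubles to 4.0, but MSE quadruples to 16.0. That growing gap is exactly why MSE "punishes big errors more."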

How Training Works

🔄 The Training Loop
1
Model makes a prediction
2
Loss function calculates the error
3
Model adjusts to reduce error
Repeat until loss is tiny!
💡 Why Loss Matters
Without a loss function, the model has no feedback.
It's like playing darts blindfolded — you need someone to tell you how far off you are!
🎯 The Goal
Minimize the loss! Lower loss = better predictions = smarter model.
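The three-step loop above can be sketched with one weight and one training example. This is a hypothetical toy (the weight `w`, the data point, and the learning rate are all invented for illustration), using MSE and plain gradient descent:

```python
# One weight, one example (x = 2.0, target = 5.0), MSE loss.
w = 0.0
x, target = 2.0, 5.0
lr = 0.1  # learning rate: how big each adjustment is

for step in range(50):
    prediction = w * x                    # 1. model makes a prediction
    loss = (prediction - target) ** 2     # 2. loss function calculates the error
    grad = 2 * (prediction - target) * x  # derivative of the loss w.r.t. w
    w -= lr * grad                        # 3. model adjusts to reduce the error

print(round(w, 3), round(loss, 6))
```

After a few dozen repeats, `w` settles near 2.5 (so the prediction `w * x` hits the target of 5) and the loss shrinks toward zero: lower loss, better predictions.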