The core training algorithm for neural networks, which propagates the prediction error backward through the layers to work out how much each weight contributed to the mistake, so the weights can be adjusted to shrink it.
Friendly Description: Backpropagation is how a neural network learns from its mistakes. Picture a student taking a practice test: when they get an answer wrong, they trace back through their work to figure out where things went sideways, and they make a small mental note to do better next time. Backpropagation does the same for AI, billions of tiny notes at a time, so the model gradually gets smarter.
Example: Imagine a network is learning to predict house prices. It guesses $400,000 for a house that actually sold for $500,000. Backpropagation takes that $100,000 mistake and walks backward through every internal calculation that led to the guess, working out how much each weight contributed to the error and nudging it slightly so the next guess lands a little closer to the truth.
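Code Sketch: To make that walk-backward concrete, here is a minimal from-scratch sketch in Python/NumPy of the house-price example. Everything in it is made up for illustration: the three input features, the network size, and the learning rate are hypothetical, and prices are scaled to units of $100,000 so the weight updates stay numerically stable.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([2.0, 3.0, 1.5])   # made-up features (e.g. sqft/1000, bedrooms, lot size)
y_true = 5.0                    # actual sale price: $500,000 (in units of $100k)

W1 = rng.normal(0.0, 0.5, size=(4, 3))  # hidden-layer weights (4 units)
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, size=4)       # output weights
b2 = 4.0                                # initial guess starts near $400,000

lr = 0.02                               # learning rate: the size of each "nudge"

for step in range(10):
    # Forward pass: walk the inputs forward to a predicted price.
    z1 = W1 @ x + b1            # hidden pre-activations
    h = np.maximum(z1, 0.0)     # ReLU activation
    y_pred = W2 @ h + b2        # predicted price

    error = y_pred - y_true     # roughly -1.0 (i.e. -$100,000) on the first guess

    # Backward pass: walk the error back through every calculation,
    # computing how much each weight contributed to the mistake.
    dL_dy = error               # gradient of the loss 0.5*error^2 w.r.t. the prediction
    dL_dW2 = dL_dy * h
    dL_db2 = dL_dy
    dL_dh = dL_dy * W2
    dL_dz1 = dL_dh * (z1 > 0)   # ReLU only passes gradient where it was active
    dL_dW1 = np.outer(dL_dz1, x)
    dL_db1 = dL_dz1

    # Nudge every weight slightly against its gradient.
    W1 -= lr * dL_dW1
    b1 -= lr * dL_db1
    W2 -= lr * dL_dW2
    b2 -= lr * dL_db2

    print(f"step {step}: guessed ${y_pred * 100_000:,.0f}")
```

Run it and the guesses climb from around $400,000 toward $500,000 over the ten steps. In practice nobody writes these gradient formulas by hand: frameworks such as PyTorch and TensorFlow apply the chain rule automatically, but the loop above is essentially the walk-backward the example describes.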