An optimization algorithm that trains neural networks by iteratively adjusting their weights in the direction that reduces a loss function.
Friendly Description: Gradient descent is how an AI gradually finds the best version of itself. Picture standing on a foggy mountainside, trying to reach the lowest point in the valley. You can't see the bottom, but you can feel which direction slopes downhill, so you take a small step that way and repeat. After many steps you end up at a low point (not always the absolute lowest, but usually low enough). Training an AI works the same way: small adjustments, over and over, until the model's mistakes are as low as possible.
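For readers who want to see the loop itself, here is a minimal sketch in plain Python (no ML library assumed). It minimizes a made-up one-variable loss, L(w) = (w - 3)^2, whose gradient is 2(w - 3), using the core update rule w ← w - learning_rate × gradient(w). The starting point, learning rate, and step count are arbitrary illustrative values, not recommendations.

```python
# A minimal sketch of gradient descent, not any particular library's API.
# The loss L(w) = (w - 3)**2 has its minimum at w = 3.

def loss(w):
    return (w - 3) ** 2

def gradient(w):
    return 2 * (w - 3)  # derivative of the loss: the "slope" underfoot

w = 0.0              # start somewhere on the foggy mountainside
learning_rate = 0.1  # how big each downhill step is

for step in range(100):
    w -= learning_rate * gradient(w)  # step opposite the slope

print(w, loss(w))  # w approaches 3.0 and the loss approaches 0
```

In a real neural network, w is millions of weights instead of one number and the gradient is computed by backpropagation, but the update rule is the same.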
Example: When training a model to predict customer churn, gradient descent quietly tweaks millions of internal numbers, each adjustment nudging the model's predictions a tiny bit closer to reality. After enough iterations, the model settles on values that bring its overall error about as low as it can go.