Systematic errors or unfair outcomes in model behavior, often stemming from training data or design choices.
Friendly Description: Bias in AI happens when a model treats some people, ideas, or situations unfairly, usually because the examples it learned from were unbalanced or incomplete. It's a lot like a person who only ever met one type of dog and now thinks all dogs look alike. The AI isn't being mean on purpose; it just hasn't seen enough of the bigger picture. Spotting and fixing bias is an important part of building AI that works well for everyone.
Example: If a job-screening AI was trained mostly on resumes from one group of people, it might unfairly rank similar resumes from other groups lower, not because those candidates are worse, but because the AI never learned to evaluate their resumes fairly. Researchers address this by adding more diverse training examples and carefully checking the model's results across groups before it's used.
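One simple way to "check the model's results" is to compare selection rates between groups. The sketch below uses the four-fifths rule, a common heuristic where a ratio below 0.8 between the lowest and highest group selection rate is flagged for review. The data and group names here are hypothetical, purely for illustration.

```python
# Minimal sketch of a group-fairness check on screening outcomes.
# Hypothetical data; real audits would use actual model decisions.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 outcomes
    (1 = advanced to interview, 0 = rejected)."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's selection rate to the highest.
    Under the four-fifths heuristic, values below 0.8 warrant review."""
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 selected = 0.375
}

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'group_a': 0.75, 'group_b': 0.375}
print(round(ratio, 2))  # 0.5 -> below 0.8, so this model needs a closer look
```

A check like this only surfaces a symptom; fixing the underlying bias still requires rebalancing the training data or adjusting the model itself.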