A chunk of text (a word, subword, or character) that a language model processes as a single unit.
Friendly Description: A token is a small chunk of text that a language model treats as a single piece. Sometimes a token is a whole word, sometimes a piece of a word, sometimes just a punctuation mark. Models read and produce text token by token. Pricing and context limits are usually counted in tokens, which is why people sometimes say things like "that response used about 800 tokens."
Example: The sentence "AI is fun!" might break down into four tokens: "AI", " is", " fun", and "!". A 1,000-word essay typically lands somewhere around 1,300 to 1,500 tokens. Knowing roughly how many tokens you're using helps you stay within a model's context limit and manage cost.
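The rule of thumb above can be sketched in code. This is only a back-of-the-envelope heuristic, assuming roughly 1.3 tokens per English word; exact counts depend on the specific tokenizer a model uses, and real tokenizers split text very differently (by subwords, not words).

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 1.3 tokens per English word.

    A heuristic for budgeting only; a model's actual tokenizer
    (e.g., a BPE tokenizer) gives the exact count.
    """
    return round(len(text.split()) * 1.3)

print(estimate_tokens("AI is fun!"))    # short phrase from the example
print(estimate_tokens("word " * 1000))  # a 1,000-word essay
```

For the short phrase this lands on 4, matching the example's token count, and the 1,000-word essay comes out near 1,300 — the low end of the typical range, since punctuation and uncommon words push real counts higher.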