Module 5
Prediction
Every time an AI generates text, it's making a prediction: what is the most likely next word? Let's look inside that decision.
How does AI choose the next word?
After converting your text to tokens (M1) and understanding their meaning (M2–M4), the AI reaches its most critical step: prediction.
For every possible next token, the model calculates a probability: how likely is this word to come next, given everything that came before? The model doesn't just pick the most likely word every time; it samples from the distribution, which is why AI writing feels varied and natural rather than mechanical.
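The score-then-sample loop described above can be sketched in a few lines of Python. The token names and scores here are made up for illustration; they are not output from any real model.

```python
import math
import random

# Toy scores ("logits") a model might assign to candidate next tokens
# after the prompt "The capital of France is". Illustrative numbers only.
logits = {"Paris": 9.0, "the": 4.0, "located": 3.5, "a": 3.0}

# Softmax converts raw scores into probabilities that sum to 1.
m = max(logits.values())
exps = {tok: math.exp(s - m) for tok, s in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# Sampling: instead of always taking the top token, draw one at random,
# weighted by probability. This is where the natural variation comes from.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

Run it twice and you may get different continuations, even though "Paris" wins almost every time: that is sampling, not a bug.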
Explore predictions
Select a scenario to see what the AI thinks comes next, and why.
Prompt
The capital of France is
This scenario demonstrates a high-confidence prediction: world knowledge pulls the model strongly toward a single correct answer.
Top 8 predicted tokens:
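Since the interactive token list doesn't render on this page, here is a sketch of what a high-confidence distribution looks like: one token takes almost all of the probability mass. The probabilities are made up for illustration; real values vary by model.

```python
# Hypothetical probabilities for "The capital of France is".
probs = {
    "Paris": 0.92, "the": 0.020, "located": 0.015, "a": 0.012,
    "known": 0.010, "also": 0.009, "not": 0.008, "in": 0.006,
}

# Top 8 tokens, highest probability first.
top8 = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
for token, p in top8:
    print(f"{token:>8}  {p:.1%}")
```

Compare this to an ambiguous prompt like "My favorite food is", where the mass would be spread thinly across dozens of plausible tokens.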
The temperature dial
AI models have a "temperature" setting that controls how adventurous their predictions are.
Prompt
"The weather today is"
Drag the temperature dial and see how the AI's choice changes:
Temperature 0.5 (Low)
"warm"
As temperature slightly increases, the model considers a wider range of high-probability words, offering some variation.
🧊 Low temperature (0.1)
Almost always picks the most probable token. Consistent but repetitive. Good for facts.
🔥 High temperature (1.5)
Samples from lower-probability tokens too. Surprising and creative, but can go off-track.
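Mechanically, temperature is simple: the model's raw scores are divided by the temperature before being turned into probabilities. A minimal sketch, using toy numbers rather than real model output:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then apply softmax.
    T < 1 sharpens the distribution; T > 1 flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for continuations of "The weather today is",
# e.g. "warm", "sunny", "apocalyptic".
logits = [5.0, 4.0, 2.0]

cold = softmax_with_temperature(logits, 0.1)  # nearly all mass on "warm"
hot = softmax_with_temperature(logits, 1.5)   # mass spread across options
```

With these numbers, at temperature 0.1 the top token gets over 99% of the probability, while at 1.5 the runner-ups become real contenders, which is exactly the consistent-versus-creative trade-off described above.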
What you've learned
- AI generates text by predicting one token at a time, over and over
- Every possible next token gets a probability score, not just the obvious ones
- High-confidence predictions (facts, idioms) look very different from ambiguous ones (opinions)
- Temperature controls creativity: low = predictable, high = surprising
- AI doesn't "know" answers; it predicts what text is most likely to follow