
LLM Word of the Week: Training


Training is the flashcard-like process by which LLMs learn from repeated feedback, the step that turns random guesses into useful predictions.

[Illustration: an AI model learning through feedback loops]


We hear it all the time — “training a model” — but what does that actually mean in the world of large language models (LLMs)?

Let’s break it down.


Think of it like teaching with flashcards

Imagine you’re teaching a kid new vocabulary. You show a flashcard that says “apple,” and the kid says “banana.” You correct them: “No, that’s an apple.” Next time, they’re a little closer.

That’s training — showing the model an example, checking its guess, and adjusting until it starts getting it right.

Each flashcard is like a data point. Each correction nudges the model toward patterns that improve its next prediction.


Under the hood

Technically, training is the process of adjusting model parameters (weights) so that predictions become more accurate over time.

The steps look something like this:

  1. The model makes a prediction based on input text.
  2. The prediction is compared to the correct answer (the “label”).
  3. The error (difference) is calculated.
  4. The model updates its internal parameters slightly to reduce that error.
  5. Repeat. Billions of times.

Over time, those tiny updates accumulate — and the model starts to understand language patterns, context, and meaning.


Visualizing the process

If you could peek inside, it might look like a giant feedback loop:

Data → Model → Prediction → Error → Adjustment → Repeat

The more data and the more diverse the examples, the more refined the model becomes.


Beyond just memorization

Training doesn’t make the model memorize answers — it helps it learn relationships. That’s why a well-trained model can handle brand new prompts it’s never seen before.

It’s pattern recognition, not rote memorization.


Why it matters

Training is what gives models their intelligence. Fine-tuning, reinforcement learning, and all the buzzwords build on top of this foundation. Without training, an LLM is just a pile of math waiting for meaning.


Next week’s word will dive deeper into another building block of LLMs. 🚀