How LLMs Understand Meaning
How words become numbers that capture meaning
When you learn a language, your brain builds invisible connections — "happy" links to "glad," "cat" links to "dog." An LLM does the same thing during training: it reads billions of sentences and learns which words appear in similar contexts. Words that show up in similar situations get **similar numbers** — that's how the model "knows" that "king" and "queen" are related.

Each word becomes a point in space, like a city on a map. But instead of 2 coordinates, this model uses **384 dimensions**. The tool below crunches those down to 3D so you can spin it around and explore. Every dot is a real word, positioned by a trained AI model. Words with similar meanings **clump together** — zoom in and you'll see clusters of colors, animals, emotions, and more.

> **Try it:** Search for **king** and look at the neighbors. You'll see **queen**, **prince**, **monarch** nearby. The lines show how close they are in meaning — a small angle means similar, a wide angle means different. The **similarity score** (0 to 1) is cosine similarity, the same measure LLMs use internally to compare words. Now try **dog** — completely different cluster, same math.
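The neighbor lookup described above can be sketched in a few lines of plain Python. This is a toy illustration, not the tool's actual code: the 4-dimensional vectors below are made up for readability (a real model like the one behind the tool learns 384-dimensional ones), but the cosine-similarity math is the same.

```python
from math import sqrt

# Toy 4-dimensional "embeddings" -- invented values for illustration only.
# A trained model learns these numbers from billions of sentences.
embeddings = {
    "king":    [0.90, 0.80, 0.10, 0.00],
    "queen":   [0.80, 0.90, 0.10, 0.10],
    "monarch": [0.85, 0.85, 0.15, 0.05],
    "dog":     [0.10, 0.00, 0.90, 0.80],
    "cat":     [0.00, 0.10, 0.80, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1 = similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def neighbors(word):
    """Rank every other word by similarity to `word` -- a neighbor list."""
    scores = {w: cosine_similarity(embeddings[word], v)
              for w, v in embeddings.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(neighbors("king"))  # queen and monarch rank far above dog and cat
```

With these toy vectors, "queen" and "monarch" score around 0.99 against "king", while "dog" scores around 0.12 — the small-angle vs. wide-angle distinction the visualization draws as lines.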