LLMs generate text by predicting patterns over tokens, but those tokens have no built-in connection to the things they refer to in the world; this is essentially what Harnad (1990) called the symbol grounding problem. It is one reason models sometimes hallucinate. When you supply outside information, such as documents or websites, the model's answers can be anchored to real, verifiable data instead of learned patterns alone. That's grounding.
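As a rough illustration of what "supplying outside information" looks like in practice, here is a minimal Python sketch of grounding via retrieval-augmented prompting. The function names (`retrieve`, `build_grounded_prompt`) and the word-overlap retriever are assumptions made up for this example, a stand-in for a real retriever and a real model call:

```python
def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question.

    A real system would use a proper search index or embedding similarity;
    this overlap score just keeps the sketch self-contained.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the given context."""
    context = "\n\n".join(retrieve(question, documents))
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    docs = [
        "The Amundsen-Scott South Pole Station was established in 1956.",
        "Emperor penguins breed during the Antarctic winter.",
    ]
    # The resulting prompt would be sent to whatever LLM you are using;
    # the point is that the answer is anchored to the retrieved passages.
    print(build_grounded_prompt("When was the South Pole station established?", docs))
```

The key design choice is the instruction to answer only from the supplied context: it is what ties the model's output to the provided data rather than to whatever patterns it absorbed during training.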

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3), 335-346.