Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that's just a means to an end (the end being the next token).
Also, the LLM is only predicting the next token; it isn't selecting it, since the selection happens in the inference engine. Nor is it limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
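The predicting-vs-selecting split can be sketched in a few lines. This is a toy illustration with made-up logits, not a real model: the "model" only emits a probability distribution over candidate next tokens, and a separate decoding step (greedy or sampling) is what actually picks one.

```python
import math
import random

# Made-up logits for illustration; a real model would produce these.
logits = {"cat": 2.0, "dog": 1.5, "idea": 0.3, "<tool_call>": -1.0}

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

probs = softmax(logits)

# Selection happens outside the model: greedy decoding takes the argmax,
# while sampling draws a token at random, weighted by the distribution.
greedy = max(probs, key=probs.get)
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy)    # highest-probability token
print(sampled)   # varies from run to run
```

Note that nothing here privileges "assistant" tokens: whatever the distribution assigns mass to (including tool-call tokens) can come out of the sampler.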
A phrase that throws more heat than light.
What they are predicting is not the next word; they are predicting the next idea.
Technically, in terms of how it functionally works, it's the next word / token / chunk far more than it's an "idea". The latter is hard even to quantify.
Take it as you like, but the other relatively accurate analogy is a probabilistic database.
Neither analogy works if you've fallen into anthropomorphising, but both are relatively accurate to the architecture and to how these systems are tested.
Thanks, I haven’t heard this phrase before, but it feels quite descriptive :)