Except it’s not their reflection, it’s a string of phrases presented to you based partly on how often similar phrases appear next to one another in the training data, and partly on mysterious black box modifications! Fun!
I like to describe it as a “force multiplier” along the lines of a powered suit.
You are putting in small inputs, and they echo out into a vast, vast virtual space, where they get compared and connected with countless billions of possible associations. What you get back is a kind of amplification of what you put in. If you make even remotely leading suggestions in your question or prompt, that tiny suggestion is also going to get massively boosted in the background; this is part of why some LLMs can go off the rails with some users. If you don’t take care with what exactly you’re putting in, you will get wildly unexpected results.
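A toy sketch of the amplification idea, not how any particular model actually works: language models pick the next token from a softmax over scores, and because softmax exponentiates, even a small bias toward one continuation (here a hypothetical +2 nudge) becomes a large probability gap, and that gap compounds token after token.

```python
import math

def softmax(logits):
    # Exponentiation turns small score gaps into large probability gaps.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for two candidate continuations of a prompt.
neutral = softmax([0.0, 0.0])  # no leading hint: an even 50/50 split
nudged = softmax([2.0, 0.0])   # a small "leading suggestion" bias of +2

print(neutral)  # [0.5, 0.5]
print(nudged)   # roughly [0.88, 0.12]
```

And since each generated token feeds back in as context for the next one, that ~88/12 tilt keeps reinforcing itself over a long response, which is the "force multiplier" effect in miniature.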
Except it’s not my reflection, it’s a reflection of millions if not billions of humans.