I remember when compression was popularized, like mp3 and jpg, people would run experiments where they would convert lossy to lossy to lossy to lossy over and over and then share the final image, which was this overcooked nightmare
I wonder if a similar dynamic applies to the scenario presented in the comic with AI summarization and expansion of topics. Start with a few bullet points, have it expand that to a paragraph or so, have it summarize it back down to bullet points, repeat 4-5 times, then see how far off you get from the original point.
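A toy version of that loop already shows the mechanism. The `expand` and `summarize` functions below are invented stand-ins for a model, not real LLM calls: expansion pads each bullet with filler, and summarization naively chops the text back into equal-sized chunks.

```python
# Stand-ins for an LLM (invented for illustration): "expand" pads each bullet
# with filler, "summarize" naively chops the text into equal-sized bullets.
def expand(bullets):
    return " ".join(f"It is worth noting that {b}." for b in bullets)

def summarize(text, n_bullets=3):
    words = text.replace(".", "").split()
    size = max(1, len(words) // n_bullets)
    return [" ".join(words[i * size:(i + 1) * size]) for i in range(n_bullets)]

original = ["cats are mammals", "the sky is blue", "water boils at 100C"]
bullets = original
for round_num in range(5):
    bullets = summarize(expand(bullets))
    overlap = set(" ".join(bullets).split()) & set(" ".join(original).split())
    print(round_num, len(overlap), bullets)
# Each round, filler crowds in and the tail of the real content falls off.
```

Even with fully deterministic steps, the filler introduced by expansion displaces the original content a little more each cycle, which is the same compounding-loss shape as the JPEG experiments.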
People do that with google translate as well
Are humans doing this as well and if they don’t, why not?
Humans do this, yes: https://en.m.wikipedia.org/wiki/Telephone_game
A couple decades ago, novelty and souvenir shops would sell stuffed parrots which would electronically record a brief clip of what they heard and then repeat it back to you.
If you said “Hello” to a parrot and then set it down next to another one, it took only a couple of iterations between the parrots to turn it into high pitched squealing.
Reminds me of this classic video https://www.youtube.com/watch?v=t-7mQhSZRgM
i was curious so i tried it with chatgpt. here are the chat links:
overall it didn’t seem too bad. it sort of started focusing on the ecological and astrobiological side of the same topic but didn’t completely drift. to be honest, i think it would have done a lot worse if i made the prompt less specific. if it was just “summarize this text” and “expand on these points” i think chatgpt would get very distracted
Interesting. I also wonder how it would fare across different models (eg user a uses chatgpt, user b uses gemini, user c uses deepseek, etc) as that may mimic real world use (such as what’s depicted in the comic) more closely
Doesn’t chatgpt remember the context of the previous question and text?
Maybe using different accounts and LLMs would make a bigger difference.
that’s why i ran every request in a different chat session
In my experience, LLMs aren’t really that good at summarizing
It’s more like they can “rewrite more concisely” which is a bit different
you mean hallucinate
I used to play this game with Google translate when it was newish
There is, or maybe was, a YouTube channel that would run well known song lyrics through various layers of translation, then attempt to sing the result to the tune of the original.
🎵Once you know which one, you are acidic, to win!🎵
Gradually watermelon… I like shapes.
Twisted translations
Sounds about right to me.
translation party!
Throw Japanese into English into Japanese into English ad nauseam, until an ‘equilibrium’ statement is reached.
… Which was quite often nowhere near the original statement, in either language… but at least the translation algorithm agreed with itself.
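That equilibrium behavior is just fixed-point iteration on a lossy map. Here is a toy model with a made-up round-trip word table standing in for the translator (not a real translation API): iterate until the sentence stops changing.

```python
# Toy model of iterated round-trip translation. The word table is invented
# for illustration; a real en->ja->en round trip is the lossy map it mimics.
ROUND_TRIP = {
    "party": "festival",
    "festival": "celebration",
    "awesome": "great",
    "great": "wonderful",
}

def round_trip(sentence: str) -> str:
    # Words missing from the table survive the round trip unchanged.
    return " ".join(ROUND_TRIP.get(w, w) for w in sentence.split())

def equilibrium(sentence: str, max_iters: int = 20):
    """Iterate the lossy map until the output no longer changes."""
    for i in range(max_iters):
        nxt = round_trip(sentence)
        if nxt == sentence:
            return sentence, i
        sentence = nxt
    return sentence, max_iters

# "translation party" -> "translation festival" -> "translation celebration" (stable)
print(equilibrium("translation party"))
```

The fixed point is stable, but as the thread notes, nothing forces it to be anywhere near the original meaning; it only has to be a sentence the map sends to itself.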
If it isn’t accurate to the source material, it isn’t concise.
LLMs are good at reducing word count.
In case you haven’t seen it, Tom7 created a delightful exploration of using an LLM to manipulate word counts.
Summarizing requires understanding what’s important, and LLMs don’t “understand” anything.
They can reduce word counts, and they have some statistical models that can tell them which words are fillers. But, the hilarious state of Apple Intelligence shows how frequently that breaks.
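A sketch of the distinction, with an invented filler list: stripping statistically cheap words reduces the word count, but nothing in the procedure models which ideas matter.

```python
# A toy "word-count reducer": drops common filler words. The filler list is
# invented for illustration; the point is that nothing here models which
# *ideas* are important, only which words are cheap to delete.
FILLERS = {"basically", "really", "very", "just", "actually", "quite", "rather"}

def reduce_word_count(text: str) -> str:
    kept = [w for w in text.split() if w.lower().strip(",.") not in FILLERS]
    return " ".join(kept)

before = ("The launch basically failed because the very important valve "
          "was just never actually tested.")
after = reduce_word_count(before)
print(after)
# Fewer words, but the reducer had no idea the valve was the important part.
```

That gap between "shorter" and "the important part" is exactly where summarization-by-word-count falls over.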