They don’t come up with any statements; they generate data by extrapolating from other data.
So just like human brains?
I like this argument.
Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.
So cows and pigs salary when?
When they grow god damn thumbs.
So, you’re prejudiced against the handicapped. Wow.
(I kid, I kid.)
Now that’s just not fair. I don’t think any of us have a problem with handicapped cows getting the special help they need, be it a wheelchair or a prosthetic arm.
Even animals are protected against human cruelty by law.
You’re moving the goalposts. You were talking about salary first, then moved to “human cruelty.”
Are you suggesting slavery isn’t a form of cruelty or are you just being obtuse?
lol. we’re talking about AI hallucinations and you’re trying to drive the topic elsewhere. Nice red-herring attempt.
I don’t think that slaughterhouses are illegal.
Well, yes, but actually, no
Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.
What percentage of them, do you think, are critical of their input?
Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.
Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off, queen…
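For what it’s worth, the keyboard prediction being compared here really is roughly this simple. A minimal sketch of a Markov-chain next-word predictor (the corpus, function names, and bigram-only scope are illustrative assumptions, not any real keyboard’s implementation):

```python
# Toy Markov-chain next-word prediction, as contrasted with an LLM above:
# count which word follows which, then suggest the most frequent follower.
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Count bigram frequencies: how often each word follows each other word."""
    words = corpus.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers


def suggest(followers: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]


model = train_bigrams("the cat sat on the mat and the cat slept")
print(suggest(model, "the"))  # "cat" (follows "the" twice; "mat" only once)
```

The gap the comment gestures at is real: this model has a handful of counts and no context beyond one previous word, while a state-of-the-art LLM conditions on thousands of tokens with billions of learned parameters.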
Thanks for the unprompted mansplanation bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”.
That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you imply my comment is irrelevant when it’s a direct response to your initial irrelevant comment?
Nice hijacking of the term mansplaining, btw. Super cool of you.
Oh my god, we’ve got a sealion here.
Fine, I’ll play along and chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different from an LLM:
My comment was responding to the anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like human brain”; I said yes, but also just like a keyboard, alluding to the false equivalence.
Clearer?
I’m not sure if you know what sealioning is…