Obviously suicide because it happened at his house/apartment. Because who else would suicide himself in his apartment, right? I wouldn’t go trying to figure out how it happened. Like with fingerprinting surfaces or sniffing dogs or checking cameras. Why would sniffing a dog even help?
We’re truly in a dystopian future when big tech nerds are doing mafia hits. Reminds me of that guy in Better Call Saul that hired Mike as a bodyguard.
RIP. Hope that his whistleblowing doesn’t end up falling on deaf ears
There are 11 others involved in the case.
When the working class kills a CEO, the FBI posts a reward and the suspect is found within a week. When a company does it, the world is silent.
Whistleblower deaths should trigger an investigation of all of a company’s directors by default. It may be that 99% are innocent, but just one or two who see their massively valuable stock and options in danger might be driven to such actions on their own.
The police are too busy shooting people’s pets and being scared of acorns to investigate something like this.
Oh and shooting people for avoiding the $3.50 New York subway fare…
What whistle did he blow?
He had hard proof that ChatGPT was trained on copyrighted work, opening them up to lawsuits from said copyright holders and basically collapsing the whole company.
I didn’t even read the article. I just barely skimmed it and guess what I found within 2 seconds.
“Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.”
You don’t even need “hard” proof. The mere fact that ChatGPT “knows” about certain things indicates that it ingested certain copyrighted works. There are countless examples. Can it quote a book you like? Does it know the plot details? There is no other way for it to get certain information about such things.
The issue is proving that it ingested the original copyrighted work, and not some hypothetical public copyleft essay.
Facts aren’t protected by copyright. Regurgitating facts about a thing is in no way illegal, even if done by AI and derived from ingested copyrighted material. I can legally make a website dedicated to stating only facts about Disney products (all other things being equal) when prompted by questions from my users.
I think you’re missing the point. We are talking about whether it is fair use under the law for an AI model to even ingest copyrighted works and for those works to be used as a basis to generate the model’s output without the permission of the copyright holder of those works. This is an unsettled legal question that is being litigated right now.
Also, in some cases, the models do produce verbatim quotes of original works. So, it’s not even like we’re just arguing about whether the AI model stated some “facts.” We are also saying, hey can an AI model verbatim reproduce an actual copyrighted work? It’s settled law that humans cannot do that except in limited circumstances.
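Just to make concrete what a “verbatim reproduction” check would even look like, here’s a rough sketch in Python. Everything in it is hypothetical (the passage, the model output, and the 8-word threshold are made up for illustration, and this isn’t how any of the actual lawsuits test it); it just flags long word-for-word runs shared between a model’s answer and a reference text.

```python
import string

def words(text: str) -> list[str]:
    """Lowercase and strip punctuation so minor formatting doesn't hide a match."""
    return [w.strip(string.punctuation).lower() for w in text.split()]

def longest_shared_run(reference: str, candidate: str) -> int:
    """Length (in words) of the longest word sequence appearing in both texts.
    Classic longest-common-substring DP, done over words instead of characters."""
    ref, cand = words(reference), words(candidate)
    best = 0
    prev = [0] * (len(cand) + 1)
    for i in range(1, len(ref) + 1):
        curr = [0] * (len(cand) + 1)
        for j in range(1, len(cand) + 1):
            if ref[i - 1] == cand[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

# Hypothetical example: a passage you hold the rights to, and the text a
# model returned when asked about it.
passage = "It was the best of times, it was the worst of times"
model_output = "Sure, the opening line is: it was the best of times, it was the worst of times."

run = longest_shared_run(passage, model_output)
print(f"Longest shared run: {run} words")
if run >= 8:  # arbitrary cutoff for "suspiciously verbatim"
    print("The output contains a long verbatim span of the reference text.")
```

Obviously a long shared run from a public-domain line proves nothing on its own; the legal question in this thread is whether training on the work at all is fair use, and no script answers that.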
The mere fact that ChatGPT “knows” about certain things indicates that it ingested certain copyrighted works.
This is the bit I’m responding to. This “mere fact” that you propose is not copyright infringement, per the facts I’ve stated. I’m not making claims about any of your other original statements.
Verbatim reproduction may be copyright infringement, but that wasn’t your original claim that I quoted and am responding to (I didn’t make that clear earlier, that’s on me).
“Apologies” for my autistic way of communicating (I’m autistic)
I think you’re using the word fact in two senses here.
I am making the argument that ChatGPT and other AI models were created using copyrighted works, and my “proof” is the “fact” that they can reproduce those works verbatim, or state facts about them that could be derived from nowhere else but the original copyrighted work (or a derivative work that used the original under fair use).
Now, the question is: is it fair use under copyright law for AI models to be built with copyrighted materials?
If it is considered fair use, I’m guessing it would have a chilling effect on human creativity, given that no creator can guarantee themselves a living if their style of work can be reproduced so cheaply without them once an AI has been trained on their works as inputs. So it would then become necessary to revisit copyright law and redefine fair use such that we don’t discourage creators. AI can only really “remix” what it has seen before. If nothing new is being created because AI has killed the incentive to make new things, it will stagnate and degrade.
It was more of an opinion piece. They were already being sued and he didn’t bring any new info forward from what I understand.
His AI GF must’ve convinced him to shoot himself in the back of the head with a shotgun twice.
Twice is better. Reliable.
If Zombieland taught us anything, it’s the double-tap.
Police say it appears to be a suicide. Probably true, honestly, but that doesn’t mean he wasn’t driven to it.
Police say it appears to be a suicide.
Let me guess: it was fewer than 30 stab wounds that they found in his back?
I wouldn’t believe the cops without some evidence either way.
That’s even worse!
“He blew the whistle on a multibillion dollar company - obviously he knew they’d kill him! Suicide.”