After Anthropic refused flat out to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.
Thanks for doing this - it isn’t a proper leftist get-together without some assclown imposing impossible purity tests.
you are no true Irishman!
I have an IRA account, does that make me kinda Irish?
Does it have $3 in it, and is there a Guinness in yer hand? Then welcome to the Irish, me boy
Flogging Molly has been fun to be reminded to listen to. Brought me back to Streetlight Manifesto that i forgot was great, also do you mind if i borrow one of those dollar bills while i go to take a leak real quick?
Oh man, I’ve seen FM 3 times live, honestly the best concerts I’ve ever been to - something about their music just brings out the best energy in everyone
Well no Irishman is a Scotsman so that tracks
Impossible purity test? That’s utter bull crap. There have been many warnings about the negative uses of AI for years now, for example: https://aiforgood.itu.int/event/addressing-the-dark-sides-of-ai/
Expecting people to understand that these uses could be expanded to committing state-sponsored atrocities is not a stretch.
And this is why anybody who made a mistake should be shunned forever, unless they invent a time machine to go back and undo their past misdeeds. They may as well just jump off a bridge and save us the trouble of setting up a firing squad.
Making a mistake is one thing. Ignoring the BIG FLASHING WARNING SIGNS is another. There have been massive warning signs around AI for several years. If you looked at the warning signs and proceeded anyway, you deserve what you get.
They never said they should be shunned; they didn't even list a social consequence. The fact remains that if you used OpenAI in the past, you already contributed.
OK, and the point of the thread was that it’s still a good thing if they quit it now. No one can undo past mistakes but you can decide not to keep making them.
You can do a good thing now and have done a shit thing in the past. I don't understand why that's so hard to grasp. We aren't absolved of shitty behavior because we didn't know better. A person who murders someone before they knew it was wrong will still have murdered someone. You can't absolve your own past by future actions; just try not to repeat it.
This person uses the internet, which for *years* has had TONS of negative uses.
How do you think Epstein emailed his buddies? The internet.
You can’t trust people that use evil technologies like user Unattributed. Thanks for the incredibly sound and intelligent logical framework!
Yes, there are applications that can be used for good or evil. But being super reductive and claiming the whole internet has tons of negative uses is ridiculous. The internet itself is a series of protocols running on communications hardware.
It is up to the users of the applications to judge whether the application is inherently positive or negative, or whether the use of the technology is being handled in a positive and/or ethical manner. And more so, it's up to the user to judge whether the technology aligns with their personal values.
Social networks: Xitter, Farcebook, Instawhore, TikTok, Reddit… all of them have proven they are platforms of manipulation, so I walked away. In fact, most of them I walked away from before it was shown just how bad they were.
Cryptocurrencies: had the opportunity to be good, but grifters moved in on them, so I never got involved.
NFTs: the next generation of CryptoGrifters, stayed away.
AI: has never been ready to be a public application/platform. That has been apparent for the last 3-5 years. If you didn't read and pay attention to the signs and still signed up for an account despite all the warnings being out there, then yes, you have aided and abetted the use of the technology in ways that will have a severely negative impact on the world.
Here’s the thing: we have a long, long history with technology. We know that it can be used for both good and bad. However, we also should have evolved in our thinking over the past 6-7 decades in terms of how technologies are being applied.
Nuclear reactors: Mostly good with negative side effects. Judgment on this needed longer-term study to understand its implications. Nuclear bombs? Clearly evil.
Cassette recorders, VCRs, CD recorders: predominantly good, but open to bad uses (e.g., piracy). The balance: mostly good, minimal negative effects.
AI? Potentially good, but immediately threw up huge red flags in terms of negative uses (deep fakes, revenge porn, etc.). Even AI researchers have expressed concerns over the direction of the research.
The thing is, technology is something we've lived with since the industrial revolution. Every single technological invention since that time has had major implications for its impact on society. We can choose, on an individual basis, how that impact is shaped. If you choose to use a technology, then you are betting that its uses will align with your values. Don't cry when it's used in ways that don't align with your values, or is used against you.
You are fucking insane. By your logic any customer of a company that might one day build a weapon is complicit. That is asinine.
That's not the argument at all. The argument is that there have been warning signs, big flashing warning signs, about the dangers of using AI for years now. Most technology, in general, doesn't come with anywhere near as many warnings.
And it's been a known fact that people using AI are also training the AI. That's an active choice that people who signed up for accounts are making.
So yes, users of this technology are taking an active role in training it, and that makes them complicit.
That is a far cry from data brokers going out and harvesting public records, or companies tracking your spending habits and feeding that into a database. If those companies then turned around and made a weapon, no, I wouldn't point the finger at people whose information got scraped. OTOH, if you continued to use a platform that you know is using you to gather information (e.g., Facebook, Reddit, Twitter) and let them do it, then yeah… you have some level of complicity.
Yeah, we live and learn. We don't expect perfection, we expect self-improvement. It's important not to excuse bad decisions/behavior. Be more skeptical of new technology in the future and pay attention to who's creating/selling it.
With their last link, they’re complicit
Well, judge not lest you too be judged…
There is no such thing as "ethical" AI coming from Big Tech. Google, Microsoft, Anthropic, Amazon: all of them built their machines without consent, all their machines have been subsidized with our taxes and resources, and Anthropic is a pro-Trump, pro-foreign-dictator company that crossed every single red line until the very last one.
Anthropic was pro mass surveillance of foreigners.
It was okay with helping Trump plan criminal invasions.
It just doesn’t want to be held responsible for pushing the “go” button, but we know their software was one suggestion away from doing it anyway.
That’s no judgment on me. I don’t use AI. I tried it one night 3-4 years ago, realized that it wasn’t ready for widespread adoption, and haven’t touched it since.