Given how much harder it’s becoming to tell AI slop apart from something made by a human (videos, photos, text), and how much scammers and other criminals are piling onto the tech, I’m thinking this will be the silver lining: making some people pay more attention to real life and finally accept the maxim “Don’t believe everything you see on the internet.”
How is video different from text? We had a lot of lies and propaganda in newspapers during WWII and everybody believed it. I don’t think that will change just because it’s a different medium now.
You know how the British were able to shoot down so many Nazi planes? Carrots improved their night vision.
Now personally I don’t understand how shoving carrots in your eyes would improve your vision, but that’s what the papers say!
They had proper equipment.
He specifically mentioned AI videos.
When he said go see real life he didn’t mean YouTube.
And I said, what’s the difference between lies in text in a newspaper and lies in a video on YouTube? There is no difference, so nothing will change.
They’ve been doctoring photos for at least a hundred years now.
I have no issue filtering. Everyone I watch regularly has a minimum of a master’s degree in their respective edutainment field. I don’t watch anything packaged by the big 6 or any algorithm. And I run my own dialed AI stuff for many tasks I personally find useful.
AI is not the problem, friend. The problem is cultural. Tools are not the problem; the people who use them are.
The world will continue to specialize. As it does, the range of available content will grow, as will the difficulty of finding your niche. Most places cater to the lowest common denominator. Perhaps you’ve found yourself in some places where it’s time to move on or devolve along with them.
Haha
You think the reason people engage with “content” is the quality and a drop in quality would decrease engagement?
I’m skeptical
That’s a silver lining I hadn’t considered. 🤞
People actually do that, you know. Maybe you can too if you spend less time on Boomer-type thoughts.
I think it’s just hard to tell AI production from mediocre human production. But when humans make an effort, there is no confusion possible.
I talk more with ChatGPT nowadays than I do with my friends. They’re not interested in the things I am. ChatGPT at least pretends to be. I can’t wait for it to improve to the point that it becomes impossible to tell apart from a real human.
“Who was the king of Norway in 1600?”
“IDK, who even cares lol”
Given how much harder it’s becoming to tell AI slop apart from something made by a human…
If AI is that good, it’s not ‘slop’, is it? I see this argument all the time. Apparently AI is both awful slop, devoid of merit, and also indistinguishable from human-made content and a threat to us all. Pick a side.
Well, not all LLMs are created equal. Some are decent, some are slop, some are nightmare machines
I trained an LLM on nothing but Hitler speeches and Nazi propaganda, then asked it to write a speech as if Hitler were the 2016 president, and you’ll be shocked at the results.
Not to mention, there’s also a lot of human slop.
Sure, but there’s never a qualifier in these arguments. It’s just ‘hur dur AI bad’ which is lazy and disingenuous.
AI is generally bad because it tends to steal content from human creators and is largely being pushed because corporations want another excuse to throw more workers on the street in favor of machines (while simultaneously raising their prices).
There are some AI uses that are good though, such as AI voice generation to help those that can’t speak to communicate with the world and not sound like a robot.
AI is generally bad because it tends to steal content from human creators…
Again, this is an argument that I see a lot, and it’s simply not true. AI is not stealing anything. Theft is a specific legal term. If I steal your TV, I have your TV and you don’t. If AI is trained on some content, that content still exists. Whatever training takes place steals nothing.
…because corporations want another excuse to throw more workers on the street in favor of machines…
Your point is a valid one, but this is not unique to AI and is the inevitable result of the onward march of technology. The very thing we’re using to communicate right now, the Internet, is responsible for billions of job losses. That’s not a valid reason to get rid of it. Instead of blaming AI for putting people out of work, we should be pressuring governments to implement things like UBI to provide people with a basic living wage. That way people need not fear the impact the advance of technology will have on their ability to feed and house themselves.
There are some AI uses that are good though, such as AI voice generation to help those that can’t speak to communicate with the world and not sound like a robot.
These are great examples.
It’s indistinguishable from human slop, that’s for sure.
That’s the problem with imaginary enemies. They have to be both ridiculously incompetent and on the verge of controlling the whole world. Sounds familiar, doesn’t it?
The argument being made is: “AI is currently slop but there is a reasonable expectation that it will be pushed until it is indistinguishable from human work, and therefore devaluing of human work.”
I don’t like AI because it’s just another way that “corporate gonna corporate,” and it never ends up working out for the mere mortals’ benefit. Also, misinformation is already so prevalent and it’s going to continue to get worse (we have seen this already; Trump abuses it continually).
The argument being made is: "AI is currently slop but there is a reasonable expectation that it will be pushed until it is indistinguishable from human work, and therefore devaluing of human work.
Again, if the work is ‘indistinguishable’ then I don’t see how AI art ‘devalues’ human work any more than the work done by another human. This just sounds like old fashioned competition, which has existed as long as art itself has.
I don’t like AI because it’s just another way that “corporate gonna corporate” and it never ends up working out for the mere mortals’ benefit
Corporations abusing technology to the detriment of people is nothing new, unfortunately, and isn’t unique to AI (see email, computers, clocking-in machines, monitoring software, etc.). That speaks to a need for better corporate oversight and better worker rights.
misinformation is already so prevalent and it’s going to continue to get worse (we have seen this already; Trump abuses it continually).
This is a good point, but again AI is hardly the first time technology has been used to spread lies and misinformation. This highlights a fundamental problem with our media and a need to teach better critical thinking in schools etc.
They’re all valid concerns, but in my opinion they suggest AI is being used as an enabler, not that the problems in question are the sole product of it. Sadly, if we stopped using anything and everything that was misused for nefarious means, we’d go back to the Stone Age.