- cross-posted to:
- technology@lemmy.world
AI companies have all kinds of arguments against paying for copyrighted content::The companies building generative AI tools like ChatGPT say updated copyright laws could interfere with their ability to train capable AI models. Here are comments from OpenAI, StabilityAI, Meta, Google, Microsoft and more.
Computers aren’t people. AI “learning” is a metaphorical usage of that word. Human learning is a complex mystery we’ve barely begun to understand, whereas we know exactly what these computer systems are doing; though we use the word “learning” for both, it is a fundamentally different process. Conflating the two is fine for normal conversation, but for technical questions like this, it’s silly.
It’s perfectly consistent to decide that computers “learning” breaks the rules but human learning doesn’t, because they’re different things. Computer “learning” is a new thing, and it’s a lot more like creating replicas than human learning is. I think we should treat it as such.
I’m so fed up trying to explain this to people. People think LLMs are real AGI and are treating them as such.
Computers do not learn like humans. They can’t, and they shouldn’t be regulated in the same way.
Yes, 100%. Once you drop the false equivalence, the argument boils down to “X does Y, therefore Z should be allowed to do Y,” which obviously doesn’t follow, because sometimes we need different rules for different things.