I’m wondering if it’s a legitimate line of argumentation to draw the line somewhere.

If someone uses an argument and then someone else uses that same argument further down the line, can you reject the first argument’s logic but accept the second argument’s?

For example, someone argues that AI-generated music isn’t real music because it samples and rips off other artists’ music, and another person points out that this is logically the same argument that was used against DJs in the ’90s.

I agree with the first argument but disagree with the second, because even though they use the same logic, I have to draw a line in my definition of music. Does this track logically, or is my reasoning failing somewhere?

  • masterspace@lemmy.ca · 3 hours ago

    > Free will exists and you feel it every time you’re dieting, lol, or restricting yourself in any way for higher reasons. It escapes the realm of words because it’s fundamental to our existence, you can’t argue against it in good faith, it can simply be denied the same way you could deny the rising of the sun… And, again, I think you’re confused.

    Lol, “free will exists because I think it exists” is not an argument.

    > Okay, could a calculator with all its parts be considered intelligent/more intelligent than us simply because it can make calculations faster and with more accuracy? A computer? It’s the same principle.

    Computers have long been little more than very complicated calculators, because they had no good way of solving fuzzy pattern-matching problems, like ingesting arbitrary data and automatically learning and pulling patterns out of it. The entire surge in AI is driven by the fact that AI algorithms loosely modeled on our neurons do exactly that.
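    As a toy illustration of that kind of fuzzy pattern learning, here’s a minimal sketch (mine, not from the thread; the architecture and numbers are arbitrary) of a tiny neural network pulling the XOR pattern out of data, something no single calculator-style rule can express:

    ```python
    # Toy example: a tiny neural net learns XOR from examples alone.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the XOR truth table.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 4 units, randomly initialized.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # Forward pass: compute the network's current guesses.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update: nudge weights toward the pattern.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]]: pattern learned
    ```

    Nobody programs the XOR rule in; the weights drift toward it from the data, which is the qualitative difference from a calculator.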

    > We simply haven’t made anything that 1) understands the world around it in any way 2) has volition.

    LLMs contain some understanding of the world, or they wouldn’t be able to do what they do, but yes, I would broadly agree. That doesn’t mean we won’t or can’t get there, though. Right now, many leading-edge AI researchers are specifically trying to build world models, as opposed to LLMs, that do have an understanding of the world around them.

    > We have made a code eating, code spitting machine that works when we want it to, that’s all.

    No, this is a reductive description of how even LLMs work. They are not just copying and pasting; they genuinely combine and synthesize information in new and transformative ways, in a way similar to what humans do. Yes, we can regurgitate some of a book we read, but most of what we take from it is an impression, general knowledge that we then combine with other knowledge, just like an LLM does.
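    To make “not copy-paste” concrete, here’s a minimal sketch (my toy; the vocabulary and scores are made up) of the generative step: at each step the model produces scores over its whole vocabulary and samples the next token from the resulting distribution, rather than looking up and pasting stored text:

    ```python
    # Toy example: token generation as sampling from a softmax distribution.
    import numpy as np

    rng = np.random.default_rng(1)

    vocab = ["music", "art", "noise", "theft", "remix"]  # hypothetical tiny vocabulary
    logits = np.array([2.0, 1.5, 0.2, 0.1, 1.8])         # made-up scores for one step

    # Softmax turns raw scores into probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # The same learned "knowledge" can yield different continuations,
    # because the output is a weighted blend, not a retrieved copy.
    for _ in range(3):
        print(rng.choice(vocab, p=probs))
    ```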

    Language is literally the basis for almost all of our knowledge; it’s wild to flatly deny that a multi-billion-parameter collection of simulated neurons trained on language could possibly have any intelligence or understanding of the world.