I’m wondering if it’s a legitimate line of argumentation to draw the line somewhere.
If someone uses an argument, and then someone else uses that same argument further down the line, can you reject the first argument’s logic but accept the second argument’s logic?
For example, someone argues that AI isn’t real music because it samples and rips off other artists’ music, and another person points out that this is logically the same argument that was used against DJs in the ’90s.
I agree with the first argument but disagree with the second, because even though they use the same logic, I have to draw a line somewhere in my definition of music. Does this track logically, or am I failing somewhere in my thinking?


Are they, or are they just simulating a model of a model? And it doesn’t have to be magical for it to be unreachable for us (read Roger Penrose), lol. What we have today is just inert code ready to work on command, not some e-mind living in the cloud. Come on, man, this is not debatable.
So you’re saying there’s a magical other plane that the material objects in this world are just a model of, and that the objects in this world don’t actually determine behaviour, the ones on that plane do?
What evidence do you have to support that? What evidence do you have that consciousness exists on that plane and isn’t just a result of the behaviour of neurons? Why does consciousness change when you get a brain injury and damage those neurons?
Roger Penrose, the guy who wrote books desperately claiming that free will must exist, and who spent his time searching for any way it could before arriving at a widely discredited theory that quantum gravity is the basis for consciousness?
So? If we could put human brains in suspended animation, and just boot them up on command to execute tasks for us, does that mean that they’re not intelligent?
It obviously and evidently is debatable, since we are debating it, and saying “it’s not debatable” isn’t an argument; it’s a thought-terminating phrase.
Free will exists and you feel it every time you’re dieting, lol, or restricting yourself in any way for higher reasons. It escapes the realm of words because it’s fundamental to our existence; you can’t argue against it in good faith, you can only deny it, the same way you could deny the rising of the sun… And, again, I think you’re confused. Okay, could a calculator, with all its parts, be considered intelligent, or more intelligent than us, simply because it can make calculations faster and with more accuracy? A computer? It’s the same principle. We simply haven’t made anything that 1) understands the world around it in any way or 2) has volition. We have made a code-eating, code-spitting machine that works when we want it to, that’s all. It’s pretty impressive, but it’s not intelligent, it’s not free-willed, and it has no consciousness. It’s “intelligent” the way a calculator could be, or the way a set of pulleys could be considered “strong”.
Lol, “free will exists because I think it exists” is not an argument.
For a long time, computers were limited to being no more intelligent than very complicated calculators, because they had no good way of solving fuzzy pattern-matching problems: ingesting arbitrary data and automatically learning and pulling patterns out of it. The entire surge in AI is being driven by algorithms, loosely modelled on our neurons, that do exactly that.
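To make the distinction concrete, here is a minimal sketch (purely illustrative, not any real AI system): a single artificial “neuron” that learns the logical AND pattern from examples alone, rather than having the rule hard-coded the way a calculator does. The function names and learning rate are my own choices for the sketch.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, target) pairs via the classic perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The AND pattern, given only as data, never as an explicit rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The point is that nothing in the code ever states what AND is; the behaviour is extracted from examples, which is the basic move that separates learning systems from calculators.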
LLMs contain some understanding of the world, or they wouldn’t be able to do what they do, but yes, I would agree. That doesn’t mean we won’t or can’t get there, though. Right now, many leading-edge AI researchers are specifically trying to build world models, as opposed to LLMs, that do have an understanding of the world around them.
No, this is a reductive description of how even LLMs work. They are not just copying and pasting; they are genuinely combining and synthesizing information in new and transformative ways, similar to the way humans do. Yes, we can regurgitate some of a book we’ve read, but most of what we get from it is an impression, general knowledge that we then combine with other knowledge, just like an LLM does.
Language is literally the basis for almost all of our knowledge; it’s wild to flatly deny that a multi-billion-parameter collection of simulated neurons trained on language could possibly have any intelligence or understanding of the world.