So do you think Dyson Spheres are inevitable too? Because things advance?
You’re also shifting your goalposts tremendously. First you were implying that today’s AI would bring about AGI and now you’re saying that something, somewhere, might happen in some sci-fi future.
I’m not sure you’re actually worried about present-day destruction, though, because you didn’t seem to like it when I brought up what the AGI true believers are doing to the vulnerable people who flock to them. Dario is on board with Trump’s fossil-fuel, anti-green buildout too.
If you believe in AI so much, yet allegedly care about the things you’ve talked about, perhaps it’s time to start criticizing the people you hold so dear.
I’m less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.
I’ve never made such a claim. I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It’s in no way obvious to me that LLMs are the path to AGI. They could be, but they don’t have to be. Either way, it doesn’t change my core argument.
C’mon now.
You really aren’t beating the Yudkowsky/LessWrong allegations with this one, you know.
If you really think LLMs might mean nothing at all when it comes to actually achieving AGI, then maybe you should speak out against the environmental destruction they’re causing today, with full endorsement from Anthropic and all the other corporate AI perverts.
That doesn’t have anything to do with my claim about the inevitability of AGI.
It has everything to do with your claim about its inevitability, because we’re witnessing real life in the present day, not some fantasy prediction of the future. If people like Dario and Eliezer get their way, there will be no future left in which to get AGI.
… I am growing increasingly concerned you really are a Yudkowskist rationalist
You don’t seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad hominems, concern-trolling, red herrings, strawmen, and Gish galloping - as if you’re trying to break some kind of record.
It’s pretty clear you’ve built up a cartoon-villain version of me in your head and now you’re fighting that imagined version like it’s real. I made a pretty simple claim about AGI, you’ve piled an entire story on top of it, and now you’re demanding I defend views I don’t even hold.
I’ve been trying to have a good-faith conversation here, but if this is what you’re going to keep doing, then I’ll just move on.
The topic of…LLMs? Because that’s what this thread is. If you come in here and you start talking about something that’s entirely unrelated to LLMs (what was that about red herrings?) I’ll point it out.
And if it’s based on Yudkowskism, all the more reason to call it out. You’re aware of the sexual abuse and death Eliezer Yudkowsky is either directly or indirectly responsible for, right?