• Iconoclast@feddit.uk · 8 hours ago

    We are not “moving towards AGI” in any way with any modern technology

So that means you believe AI research is either completely frozen or moving backwards. Please explain.

    Comparisons to faster-than-light travel are completely disingenuous and bad faith - that would break the laws of physics and you know it.

    You can also keep your red herrings to yourself. I’m discussing ideas here - not people.

    • XLE@piefed.social · 8 hours ago

      According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?

      Anthropic is not special. They’re doing the LLM thing like everybody else. Yann LeCun, one of the so-called Godfathers of AI, has himself called LLMs a dead end on this front. But even if he hadn’t chimed in, the burden of proof is yours: it’s your job to show how LLMs will lead to AGI, not mine to show that they won’t.

      • Iconoclast@feddit.uk · 7 hours ago

        If you’re just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there’s nothing left to discuss here. If you actually had an argument, you would’ve made it by now.

        • XLE@piefed.social · 7 hours ago

          Your claim: AI seems to be getting better, therefore AGI will happen

          My rebuttal: they aren’t linked

          Does that clear matters up?

          • Iconoclast@feddit.uk · 7 hours ago

            My argument is that we’ll incrementally keep improving our technology like we have done throughout human history. Assuming that general intelligence is not substrate-dependent - that is, that what our brains are doing can be replicated in silicon - and that we don’t destroy ourselves before we get there, then it’s just a matter of time before we create a system that’s as intelligent as we are: AGI.

            I already said that the timescale doesn’t matter here. It could take a hundred years or two thousand - doesn’t matter. We’re still moving toward it. It does not matter how slow you move. As long as you keep moving, you’ll eventually reach your destination.

            So, how I see it is that if we never end up creating AGI ever, it’s either because we destroyed ourselves before we got there or there’s something borderline supernatural about the human brain that makes it impossible to copy in silicon.

            • XLE@piefed.social · 7 hours ago

              So do you think Dyson Spheres are inevitable too? Because things advance?

              You’re also shifting your goalposts tremendously. First you were implying that today’s AI would bring about AGI and now you’re saying that something, somewhere, might happen in some sci-fi future.

              I’m not sure you’re actually worried about present-day destruction, though, because you didn’t seem to like it when I brought up what the AGI true believers are doing to the vulnerable people who flock to them. Dario is on board with Trump’s fossil-fuel, anti-green buildout too.

              If you believe so much in AI, but genuinely believe the things you’ve talked about, perhaps it’s time to start criticizing the people you hold so dear.

              • Iconoclast@feddit.uk · 6 hours ago

                So do you think Dyson Spheres are inevitable too?

                I’m less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.

                First you were implying that today’s AI would bring about AGI

                I’ve never made such a claim. I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It’s in no way obvious to me that LLMs are the path to AGI. They could be, but they don’t have to be. Either way, it doesn’t change my core argument.

                people you hold so dear

                C’mon now.

                • XLE@piefed.social · 6 hours ago

                  I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing

                  You really aren’t beating the Yudkowsky/LessWrong allegations with this one, you know.

                  If you really think LLMs might mean nothing at all when it comes to actually achieving AGI, then maybe you should speak out against the environmental destruction they’re causing today, with the full endorsement of Anthropic and all the other corporate AI perverts.