“There’s no way to get there without a breakthrough,” OpenAI CEO Sam Altman said, arguing that AI will soon need even more energy.

  • Petter1@lemm.ee
    10 months ago

    The models are getting more efficient and smaller very fast if you look just a year back. I bet we'll be running some small LLMs locally on our phones (I don't really believe in the other form factors yet) sooner than we expect. I'd say before 2030.

    • FractalsInfinite@sh.itjust.works
      10 months ago

      I can already locally host a pretty decent AI chatbot on my old M1 MacBook (Llama 2 7B), and it generates text about as fast as I can read it. It's probably already possible on top-of-the-line phones.
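
      (If anyone wants to try the same thing: here's a minimal sketch of how one might run a quantized 7B model locally through the llama-cpp-python bindings. The model filename/path is just a placeholder, and it assumes you've downloaded a quantized GGUF build of Llama 2 7B and installed llama-cpp-python with Metal support on Apple Silicon.)

      ```python
      # Minimal local-inference sketch using the llama-cpp-python bindings.
      # The model path below is a placeholder; substitute whatever quantized
      # GGUF file you actually downloaded.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
          n_ctx=2048,        # context window size
          n_gpu_layers=-1,   # offload all layers to Metal/GPU if the build supports it
      )

      out = llm(
          "Q: Why can a 7B model run comfortably on an M1 laptop? A:",
          max_tokens=128,
          stop=["Q:"],
      )
      print(out["choices"][0]["text"])
      ```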

      • Petter1@lemm.ee
        10 months ago

        Lol, “old M1 laptop”? 3 to 4 years is not old, damn!

        (I’m still running a MacBookPro5,3 (mid-2009) on Arch, lol.)

        But it’s nice to hear that the M1 (and thus theoretically even the iPad, if you’re not talking about the M1 Pro / M1 Max) can already run Llama 2 7B.

        Have you tried Mistral 7B yet? It should be a bit more powerful and a bit more efficient, iirc. And it’s Apache 2.0 licensed.

        https://mistral.ai/news/announcing-mistral-7b/
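
        If it helps, the same llama-cpp-python sketch from above should work for it too, since llama.cpp supports Mistral-style models; you'd just point model_path at a quantized Mistral 7B GGUF (the filename here is a placeholder):

        ```python
        # Same approach as the Llama 2 example above; only the model file changes.
        # The GGUF filename is a placeholder for whichever quantized build you grab.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
            n_ctx=2048,
            n_gpu_layers=-1,  # offload to Metal/GPU if available
        )
        print(llm("Explain quantization in one sentence.", max_tokens=64)["choices"][0]["text"])
        ```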

        • FractalsInfinite@sh.itjust.works
          10 months ago

          3 to 4 years is not old

          Huh, nice. I got the MacBook Air secondhand, so I thought it was older. Thanks for the suggestion; I’ll try Mistral 7B next, perhaps on my phone as a test.