• 0_o7@lemmy.dbzer0.com · 6 days ago

      That’s one of the reasons they’re gobbling up all the GPUs and RAM. They don’t want local models to be viable for most people.

    • aesthelete@lemmy.world · 5 days ago

      Yep, I don’t even really like these things, but ollama is a Docker container away, and the models work just fine on my several-year-old AMD laptop GPU. A quick sketch of what that looks like is below.
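
      For anyone curious, here is a minimal sketch of querying a local Ollama instance once the container is up. It assumes the server was started from the official image (the ROCm variant for AMD GPUs) and that a model such as llama3 has already been pulled; the prompt text is just a placeholder.

      ```python
      # Minimal sketch: query a local Ollama server over its REST API.
      # Assumes the server is already running, e.g. via the official image:
      #   docker run -d --device /dev/kfd --device /dev/dri \
      #     -v ollama:/root/.ollama -p 11434:11434 ollama/ollama:rocm
      # and that a model has been pulled (docker exec -it <name> ollama pull llama3).
      import json
      import urllib.request

      payload = json.dumps({
          "model": "llama3",                  # any locally pulled model tag
          "prompt": "Why run models locally?",
          "stream": False,                    # one JSON object instead of a stream
      }).encode("utf-8")

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",   # Ollama's default port
          data=payload,
          headers={"Content-Type": "application/json"},
      )

      # Print just the generated text from the response body.
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```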