• djdarren@piefed.social · 6 hours ago

    I have an 8GB M1 mini in service as my Home Assistant server. 4GB goes to UTM to run HAOS, and the rest is left for macOS and Ollama running a small LLM for speech to text. I’m genuinely amazed that it hasn’t fallen over. I tried the same thing in Asahi, but without macOS’ memory management and access to GPU acceleration it just wasn’t feasible.
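
    In case it helps anyone trying the same setup: the Ollama side is just a local HTTP service on port 11434, so anything on the Mac can talk to it. Here’s a rough sketch of what a request looks like (the model name and prompt are placeholders, and in practice Home Assistant’s Assist pipeline does this wiring rather than a hand-rolled script):

    # Rough sketch: send a prompt to a local Ollama instance over its REST API.
    # Model name and prompt are placeholders; pick whatever small model fits
    # in the RAM left over after the HAOS VM.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2:1b",   # placeholder: any small local model
            "prompt": "Turn the living room lights off.",
            "stream": False,          # return one complete JSON payload
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["response"])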

    • partial_accumen@lemmy.world · 16 minutes ago

      I tried the same thing in Asahi, but without macOS’ memory management and access to GPU acceleration it just wasn’t feasible.

      Thank you for sharing this result. I knew Asahi’s memory management wasn’t as robust (so I got a 24GB RAM M2 unit to overcome this).

      For your macOS Ollama implementation, are you able to leverage the NPU in the hardware (which I know is also unavailable so far in Asahi)?