• gravitas_deficiency@sh.itjust.works · 3 hours ago

    You’re willing to pay $none to have hardware ML support for local training and inference?

    Well, I’ll just say that you’re gonna get what you pay for.

    • bassomitron@lemmy.world · 1 hour ago

      No, I think they’re saying they’re not interested in ML/AI. They want this super-fast memory available on regular servers for other use cases.