• FMT99@lemmy.world · +11 · 2 months ago

    Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.

    • overload@sopuli.xyz · +4/−1 · 2 months ago

      I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.

      Whatever application there is for AI at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an open-source AI running at that level, but Microsoft is not going to do that, because they want to make sure they control your OS data.

      • PriorityMotif@lemmy.world · +1/−4 · 2 months ago

        Machine learning in the OS is a great value-add for medium-to-large companies, as it will let them track the real productivity of office workers and easily replace them. Say goodbye to middle management.

        • overload@sopuli.xyz · +1 · 2 months ago

          I think it could definitely automate some roles where you aren’t necessarily thinking and every decision is made from information internally available on the PC. For sure those exist, but some decisions need human input, and I’m not sure how you automate away those roles just because you can see what happens on the PC every day.

          If anything, I think this feature will be used to spy on users at work and flag when keystrokes fall below a certain level each day, but I’m sure that’s already possible for companies to do (they just don’t).

  • smokescreen@lemmy.ca · +17 · 2 months ago

    Pay more for a shitty ChatGPT clone in your operating system that can get exploited to hack your device. I see no flaw in this at all.

  • cygnus@lemmy.ca · +24 · 2 months ago

    The biggest surprise here is that as many as 16% are willing to pay more…

    • ShinkanTrain@lemmy.ml · +4/−1 · 2 months ago

      I mean, if framegen and supersampling solutions become so good on those chips that regular versions can’t compare, I guess I would get the AI version. I wouldn’t pay extra compared to current pricing, though.

  • BlackLaZoR@kbin.run · +4 · 2 months ago

    Unless you’re doing music or graphic design, there’s no use case. And if you are, you probably have a high-end GPU anyway.

    • DarkThoughts@fedia.io · +3 · 2 months ago

      I could see a use for local text gen, but that apparently takes quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally, though, I’d rather have separate expansion cards for this. Making it part of other processors is just going to raise their price, even for those who have no use for it.

      • BlackLaZoR@kbin.run · +2/−1 · 2 months ago

        There are local models for text gen. They’re not as good as ChatGPT, but at the same time they’re uncensored, so that may or may not be useful.

        • DarkThoughts@fedia.io · +2 · 2 months ago

          Yes, I know - that’s my point. But you need the necessary hardware to run those models performantly. Waiting a minute for some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs; all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.
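          A rough illustration of why the hardware matters so much here: local LLM token generation is approximately memory-bandwidth-bound, because every generated token has to stream (roughly) the whole set of weights once. The numbers below are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope sketch: local LLM generation speed is roughly bounded by
# how fast the weights can be streamed from memory, since every generated
# token reads (approximately) all of them once. All numbers are assumptions.
def tokens_per_second(params_billions, bytes_per_param, bandwidth_gb_s):
    model_gb = params_billions * bytes_per_param  # size of the weights in GB
    return bandwidth_gb_s / model_gb

# Assumed: a 7B model quantised to ~0.5 bytes/param, on dual-channel DDR5
# (~60 GB/s) vs. a GPU with ~900 GB/s of VRAM bandwidth.
print(tokens_per_second(7, 0.5, 60))   # system RAM: roughly 17 tokens/s
print(tokens_per_second(7, 0.5, 900))  # GPU VRAM: roughly 257 tokens/s
```

          On those assumed numbers the GPU is ~15× faster, which is the gap between a usable assistant and waiting around for gibberish.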

  • rtxn@lemmy.world · +81 · 2 months ago

    The dedicated TPM chip is already being used for side-channel attacks. A new processor running arbitrary code would be a black hat’s wet dream.

    • MajorHavoc@programming.dev · +50 · 2 months ago

      It will be.

      IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and it is going to cause a whole new batch of breaches.

      • rtxn@lemmy.world · +19 · 2 months ago

        TPM-FAIL from 2019. It affects Intel fTPM and some dedicated TPM chips: link

        The latest (at the moment) UEFI vulnerability, UEFIcanhazbufferoverflow, is also related to, but not directly caused by, TPM on Intel systems: link

        • barsquid@lemmy.world · +3 · 2 months ago

          That’s insane. How can they be doing security hardware and leave a timing attack in there?

          Thank you for those links, really interesting stuff.

        • Blue_Morpho@lemmy.world · +1 · 2 months ago

          A processor that isn’t Turing complete isn’t a security problem like the TPM you referenced. A TPM includes a CPU; if a processor is Turing complete, it’s called a CPU.

          Is this one Turing complete? I don’t know. I haven’t seen block diagrams showing that the computational units have their own CPU.

          CPUs also have coprocessors to speed up floating-point operations. That doesn’t necessarily make them a security problem.

  • Poutinetown@lemmy.ca · +3/−1 · 2 months ago

    Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.

  • UnderpantsWeevil@lemmy.world · +21 · 2 months ago

    Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?

  • qaz@lemmy.world · +18/−1 · 2 months ago

    I would pay extra to be able to run open LLMs locally on Linux. I wouldn’t pay for Microsoft’s Copilot stuff that’s shoehorned into every interface imaginable while also causing privacy and security issues. The context matters.

    • Blue_Morpho@lemmy.world · +10/−1 · 2 months ago

      That’s why NPUs are actually a good thing. The ability to run LLMs locally instead of sending everything to Microsoft/OpenAI for data mining will be great.

      • schizo@forum.uncomfortable.business · +5/−1 · 2 months ago

        I hate to be that guy, but do you REALLY think that on-device AI is going to prevent all your shit from being sent to anyone who wants it, in the form of “diagnostic data” or “usage telemetry” or whatever weasel-worded bullshit is in the terms of service?

        They’ll just send the results for “quality assurance” instead of doing the math themselves and save a bundle on server hosting.

        • Blue_Morpho@lemmy.world · +1 · 2 months ago

          I replied to the person above “locally on Linux”.

          Even in Windows, local queries give the possibility of control. Set your firewall and it cannot leak.
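          For example, a sketch of how that control could look on Linux (the account name and binary path below are made-up placeholders, not a vendor-documented setup): run the local model under a dedicated user and firewall that user off entirely.

```shell
# Sketch: confine a local model so it can answer local queries but never
# phone home. "llm" and ./run-local-model are illustrative placeholders.
sudo useradd --system --no-create-home llm                   # dedicated account
sudo iptables -A OUTPUT -m owner --uid-owner llm -j REJECT   # block all egress
sudo -u llm ./run-local-model                                # runs, can't leak
```

          The owner match only works on the OUTPUT chain, which is exactly what you want here: local inbound queries still reach the model, but nothing it does can leave the machine.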

        • alessandro@lemmy.ca · +2 · 2 months ago

          All your unattended data will be taken (and some of the attended data too). That doesn’t mean you should stop attending to your data. Even if you’re somehow forced to use Windows instead of an open alternative, it doesn’t mean you can’t dual-boot or use other privacy-conscious devices when dealing with your sensitive data.

          Closed/proprietary OSes and hardware drivers can’t be considered safe by design.

        • chicken@lemmy.dbzer0.com · +4 · 2 months ago

          > but do you REALLY think that on-device AI is going to prevent all your shit being sent to anyone who wants it

          Yes, obviously, especially if you are running all open source software.

    • 31337@sh.itjust.works · +1 · 2 months ago

      I would if the hardware was powerful enough to do interesting or useful things, and there was software that did interesting or useful things. Like, I’d rather run an AI model to remove backgrounds from images or upscale locally, than to send images to Adobe servers (this is just an example, I don’t use Adobe products and don’t know if this is what Adobe does). I’d also rather do OCR locally and quickly than send it to a server. Same with translations. There are a lot of use-cases for “AI” models.
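      As a toy illustration of the “local instead of server” idea (nothing like a real AI upscaler — just nearest-neighbour pixel repetition in pure Python):

```python
# Toy nearest-neighbour upscaler: stands in for the idea of doing image work
# locally rather than shipping the image to a vendor's server. A real "AI"
# upscaler would use a learned model; this just repeats each pixel.
def upscale(image, factor):
    """image: list of rows, each row a list of pixel values."""
    out = []
    for row in image:
        # repeat each pixel `factor` times horizontally...
        stretched = [px for px in row for _ in range(factor)]
        # ...and each row `factor` times vertically
        out.extend([list(stretched) for _ in range(factor)])
    return out

tiny = [[0, 255],
        [255, 0]]
print(upscale(tiny, 2))
# [[0, 0, 255, 255], [0, 0, 255, 255], [255, 255, 0, 0], [255, 255, 0, 0]]
```

      A real local model would replace `upscale` with network inference, but the privacy argument is the same: the pixels never leave the machine.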

    • barfplanet@lemmy.world · +5 · 2 months ago

      I’m interested in hardware that can better run local models. Right now the best bet is a GPU, but I’d be interested in a laptop with dedicated AI chips that would work with PyTorch. I’m a novice, but I know it takes forever on my current laptop.

      Not interested in running copilot better though.

    • Honytawk@lemmy.zip · +17/−1 · 2 months ago
      • The ones who have investments in AI

      • The ones who listen to the marketing

      • The ones who are big Weird Al fans

      • The ones who didn’t understand the question

    • x0x7@lemmy.world · +5/−2 · 2 months ago

      Maybe people doing AI development who want the option of running local models.

      But baking AI into all consumer hardware is dumb. Very few want it. SaaS AI is a thing. To the degree that SaaS AI doesn’t offer the privacy of local AI, networked local AI on devices you don’t fully control offers even less. So it makes no sense for people who value convenience, and it offers no value for people who want privacy. It only offers value to people doing software development who need more playground options, and I can go buy a graphics card myself, thank you very much.

    • Appoxo@lemmy.dbzer0.com · +5 · 2 months ago

      Ray tracing is something I’d pay for even unasked, assuming it meaningfully impacts quality and doesn’t demand outlandish prices.
      And they’d need to put it in unasked and cooperate with devs, or else it won’t catch on quickly enough.
      Remember Nvidia Ansel?

  • alessandro@lemmy.ca · +20 · 2 months ago

    I don’t think the poll question was well made… “would you like to part with your money for…” vaguely shakes hand in air “…AI?”

    People were already paying for “AI” even before ChatGPT came out and popularized things: DLSS.