Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.

In other words: please help us, use our AI.

  • Imhereforfun@lemmy.world · 1 hour ago

    I hope all parties responsible for this garbage, including Microsoft will pay a huge price in the end. Fuck all these morons.

    Stop shilling for these corporate assholes or you will own nothing and will be forced to be happy.

  • ReallyCoolDude@lemmy.ml · edited · 59 minutes ago

    I work in AI, and the only obvious profit is the ability to fire workers, who then need to be rehired a few months later at lower wages. It is indeed a powerful tool, but tools don't drive profits; they are a cost. Unless you run a disinformation botnet, scam websites, or porn. It is too unpredictable to really automate software creation ("fuzzy" is the term; we somewhat mitigate it with stochastic approaches). The movie industry is probably also cutting costs with it, but I'm not sure.

    AI is the way capital is trying to acquire skills while cutting out the skilled.

    Have to say, though, that having an interface that understands natural language opens up so many possibilities. It could really democratize access to tech, but those uses are so niche that they would never really drive profit.

  • Aceticon@lemmy.dbzer0.com · edited · 3 hours ago

    AI isn’t at all reliable.

    Worse, it has a uniform distribution of failures across the seriousness of consequences - i.e. it's just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.

    (This is why, for example, an LLM can advise a person with suicidal ideation to kill themselves.)

    Then on top of this, it simply does not learn: if it makes a major, deadly mistake today and you try to correct it, it's just as likely to make a major, deadly mistake tomorrow as if you hadn't. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around. It's akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to wall off the whole beach, by which point it's in practice not a beach anymore.

    You can compensate for this by having human oversight of the AI, but at that point you're back to paying humans for the work being done. Instead of the cost of a human doing the work, you now have the cost of the AI doing the work plus the cost of a human checking the AI's work. And the human has to check the entirety of the work, since problems can pop up anywhere and take any form. Worse, unlike a human's, the AI's output is not consistent, so its errors are unpredictable; it will never improve, and it will never make the kinds of small improvements that humans doing the same work discover over time to make later work easier (i.e. how increased experience teaches you the little things that make your work easier).

    This seriously limits the use of AI to things where the consequences of failure can never be very bad. (For businesses, "not very bad" includes things like "doesn't significantly damage client relations", which is much broader than merely "not life-threatening" - which is why, for example, lawyers using AI to produce legal documents are getting into trouble when the AI cites made-up precedents.) So that leaves mostly entertainment, plus situations where the AI alerts humans to a potential finding in a massive dataset: if the AI fails to spot it, that's alright, and if the AI flags something that isn't there, the subsequent human validation can dismiss it as a false positive (for example, face recognition in video streams for general surveillance, where humans watching those streams are just as likely or more likely to miss it, and an AI alert just results in a human checking it).

    So AI is a nice new tool in a big toolbox, not a technological and business revolution that justifies the stock market valuations around it, the investment money sunk into it, or the huge amount of resources (such as electricity) it consumes.

    • ReallyCoolDude@lemmy.ml · 50 minutes ago

      Several flaws here: depending on the task, you can train and retrain models, or instruct new ones. Previous errors will be greatly reduced, or disappear completely (if we're talking about errors only). Hallucinations are mathematically certain for less specialized models, but that's another problem altogether.

      Using AI does save money (and time). It excels at tedious tasks with well-defined constraints. This saves me so much time every day, e.g. find X in dataset Y that does not match Z. This work was usually done by humans, with a higher error rate. If it takes me 3 minutes to classify 1 million rows, which would have taken me at least 3 days before, that is money saved.
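      (For what it's worth, the "find X in dataset Y that does not match Z" shape of task can be sketched in a few lines of ordinary Python; the data and the `expected`/`actual` field names below are made up purely to illustrate the kind of row classification being described.)

      ```python
      # Hypothetical dataset: each row carries an expected and an actual value.
      rows = [
          {"id": 1, "expected": "a", "actual": "a"},
          {"id": 2, "expected": "b", "actual": "x"},
          {"id": 3, "expected": "c", "actual": "c"},
          {"id": 4, "expected": "d", "actual": "y"},
      ]

      # "Find X in Y that does not match Z": one pass over the rows.
      # The same loop scales to millions of rows in seconds, versus
      # days of manual review.
      mismatched_ids = [r["id"] for r in rows if r["expected"] != r["actual"]]
      print(mismatched_ids)  # [2, 4]
      ```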

      That said, they're trying to push the reverse-centaur approach, a human overseeing the AI worker, which is flawed. But companies reason in shareholder profits and 3-month windows.

      When I started as a junior, I was the guy classifying 1M records. That is how I learned. Now we don't have juniors anymore. But companies don't seem to care about the next 5 years.

    • intentionallyBlue@lemmy.world · 3 hours ago

      I generally agree with you, but I think the broadest category of useful applications is missing: those where it's easy to check whether the output makes sense. Or more precisely, applications where it's easier to select the good outputs of an AI than to create them yourself.

      • Aceticon@lemmy.dbzer0.com · 2 hours ago

        Yeah.

        Whilst I didn't explicitly list that category as such, if you think about it, my examples of AI for video surveillance and AI for scientific research both fall into it.

  • utopiah@lemmy.world · edited · 5 hours ago

    “bend the productivity curve” is such a beautiful way to say that they are running out of ideas on how to sell that damn thing.

    It basically went from :

    • it’s going to change EVERYTHING! Humanity as we know it is a thing of the past!

    … to "bend the productivity curve". It's no longer "radically increases productivity", no, it's a lot more subtle than that, to the point that the curve can actually bend down. What a shit show.

    • ReallyCoolDude@lemmy.ml · 46 minutes ago

      I recently had a hook to get some investment for a startup. Money is flowing in this sector. The investor told me: find me any idea that might sell, that might be useful.

      I went to speak with 3 associations of entrepreneurs in 3 different countries. Like, guys, we have the money, give me some ideas, all services will be free for you. None of these entrepreneurs knew where AI could fit, except for some support chat.

  • HaraldvonBlauzahn@feddit.org · edited · 6 hours ago

    Literally burning the planet with power demand from data centers but not even knowing what it could possibly be good for?

    That's eco-terrorism, for lack of a better word.

    Fuck you.

  • saimen@feddit.org · 5 hours ago

    Eeh, didn't you pay attention in Econ 101? If you generate more supply than demand, that's a you problem. The free market will take care of it.

    • Halcyon@discuss.tchncs.de · 3 hours ago

      The products and services around 'AI' are deficient and dangerous; that's what the market says. There's no demand for bullshit products. What's revealed here is the tech bros' ignorance and unwillingness to understand. They don't listen to the market, a.k.a. the people.

  • RamRabbit@lemmy.world · 8 hours ago

    Just make Copilot its own program that can be uninstalled, remove it from everywhere else in the OS, and let it be. People who want it will use it; people who don't won't. Nobody would be pissed at Microsoft over AI if that's what they had done from the start.

    • filcuk@lemmy.zip · 7 hours ago

      No, it will be attached to every application, as well as the Start menu, Settings, Notepad, Paint, regedit, the calculator, and every other piece of Windows, you AI-hating swine.

        • filcuk@lemmy.zip · 4 hours ago

          Can we get AI in various libraries too and let it respond to API calls? I'm tired of these DLL responses being so predictable.

    • utopiah@lemmy.world · edited · 5 hours ago

      Right, except that unlike Explorer, or IE after it, it siphons everything it can back to Redmond, so even if one doesn't use it, it is STILL a problem.

  • Doomsider@lemmy.world · 8 hours ago

    Delusional. They created a solution to a problem that doesn't exist in order to usurp power away from citizens and concentrate it in a minority.

    This is the opposite of the information revolution. This is the information capture. It will be sold back to the people it was taken from while being distorted by special interests.

  • JelleWho@lemmy.world · 5 hours ago

    I bought a second-hand laptop with Windows 11, and it had Copilot shoved down your throat.

    It’s now running Fedora just fine. And if I want I can spin up a local AI when I decide that I need it.

  • redlemace@lemmy.world · 6 hours ago

    To be honest, I did try a couple of AIs. But all I got were solutions that would never work on the stated hardware: code full of errors that, even when fixed, never functioned as requested. On any non-technical question it's always agreeing and hardly (not at all, actually) challenging any input you give it. So yeah, I'm done with it and waiting for the bubble to burst.

    • utopiah@lemmy.world · 5 hours ago

      Sorry buddy, but you are not "smart enough" to use that super powerful tool that can supposedly do everything extremely conveniently for you! /s