Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
in other words please help us, use our AI
AI isn’t at all reliable.
Worse, its failures are uniformly distributed across the seriousness of their consequences - i.e. it's just as likely to make small mistakes with minuscule consequences as major mistakes with deadly ones - which is worse than even the most junior of professionals.
(This is why, for example, an LLM can advise a person with suicidal ideation to kill themselves.)
Then on top of this, it will simply not learn: if it makes a major deadly mistake today and you try to correct it, it’s just as likely to make a major deadly mistake tomorrow as it would be if you didn’t try to correct it. Even if you have access to actually adjust the model itself, correcting one kind of mistake just moves the problem around and is akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to have a sand wall for the whole beach, by which point it’s in practice not a beach anymore.
You can compensate for this with human oversight of the AI, but at that point you're back to paying humans for the work being done: instead of the cost of a human doing the work, you now have the cost of the AI doing the work plus the cost of a human checking the AI's output. And the human has to check the entirety of the work, since problems can pop up anywhere and take any form; worse, unlike a human's, the AI's work is not consistent, so errors are unpredictable. It will never improve, and it will never include the kinds of improvements that humans doing the same work discover over time to make later work, or other parts of the work, easier (i.e. how increased experience teaches you the little things that make your work easier).
This seriously limits the use of AI to areas where the consequences of failure can never be very bad. For businesses, "not very bad" includes things like "does not significantly damage client relations", which is much broader than merely "not life-threatening" - this is why, for example, lawyers using AI to produce legal documents are getting into trouble when the AI cites made-up precedents. That leaves mostly entertainment, plus situations where the AI alerts humans to something potentially found within a massive dataset: if the AI fails to spot it, that's no worse than before, and if the AI flags something that isn't there, subsequent human validation can dismiss it as a false positive (for example, face recognition in video streams for general surveillance, where humans watching those streams are just as likely or more likely to miss a face, and an AI alert just results in a human checking it).
So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations around it, investment money sunk into it or the huge amount of resources (such as electricity) used by it.
I generally agree with you, but I think the broadest category of useful applications is missing: where it's easy to check whether the output makes sense. Or more precisely, applications where it's easier to select the good outputs of an AI than to create them yourself.
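That "easier to verify than to generate" asymmetry can be sketched concretely. In this hypothetical example, assume some generator (an AI, say) has proposed several candidate regexes for ISO dates; a cheap, deterministic test suite then does the selecting:

```python
import re

# Test cases encoding what we actually want: (input, should_match).
test_cases = [("2024-01-31", True), ("31/01/2024", False), ("2024-1-3", False)]

# Hypothetical AI-generated candidates; we only need to pick, not write.
candidates = [
    r"\d{4}-\d{2}-\d{2}",   # plausible
    r"\d+-\d+-\d+",         # too loose: accepts "2024-1-3"
    r"\d{2}/\d{2}/\d{4}",   # wrong format entirely
]

def passes_all(pattern, cases):
    """True if the pattern agrees with every (input, expected) case."""
    rx = re.compile(pattern)
    return all(bool(rx.fullmatch(s)) == expected for s, expected in cases)

good = [p for p in candidates if passes_all(p, test_cases)]
print(good)  # → ['\\d{4}-\\d{2}-\\d{2}']
```

Selecting here is a one-line filter; writing the regex from scratch (and convincing yourself it's right) is the harder half of the job.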
Yeah.
Whilst I didn't explicitly list that category as such, if you think about it, my AI-for-video-surveillance and AI-for-scientific-research examples both fall into it.
Several flaws here: depending on the task, you can train and retrain models, or instruct new ones. Previous errors will be greatly reduced, or disappear completely (if we're talking about errors only). Hallucinations are mathematically certain for less specialized models, but that's another problem altogether.
Using AI is indeed saving money (and time). It excels at tedious tasks with well-defined constraints. This saves me a lot of time every day, e.g. "find X in dataset Y that does not match Z". This work was usually done by humans, with a higher error rate. If it takes me 3 minutes to classify 1 million rows, which would have taken me at least 3 days before, that is money saved.
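A minimal sketch of the shape of that job (the dataset, field names, and constraint here are all hypothetical; in practice the commenter hands the rule to an AI rather than writing it out, but the output is the same: the rows that violate the constraint):

```python
import csv
import io
import re

# Stand-in for "dataset Y": the constraint Z is that every product
# code must follow the pattern AB-1234 (two letters, dash, four digits).
raw = io.StringIO(
    "id,code\n"
    "1,AB-1234\n"
    "2,XY-99\n"
    "3,CD-5678\n"
)

CODE_RE = re.compile(r"^[A-Z]{2}-\d{4}$")

# "Find X in dataset Y that does not match Z": collect the ids of
# rows whose code violates the constraint.
mismatches = [row["id"] for row in csv.DictReader(raw)
              if not CODE_RE.match(row["code"])]
print(mismatches)  # → ['2']
```

The well-defined constraint is exactly what makes this kind of task checkable after the fact, whoever (or whatever) did the classifying.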
This said, they're trying to push the reverse-centaur approach - a human overseeing the AI worker - which is flawed. But companies reason in shareholder terms and 3-month windows.
When I started as a junior, I was the guy classifying 1 million records. That is how I learned. Now we don't have juniors anymore. But companies don't seem to care about the next 5 years.