In particular, know how to identify the common and deadly species (eg: much of the genus Amanita) yourself, and get multiple trustworthy field guides for your part of the world.
The general public still doesn’t seem to grasp the current capabilities of AI. It’s still just mimicry. AI is a parrot that “learns” something and repeats it to the best of its ability, but it doesn’t understand the thing it learned. You can teach a bird to say “Polly want a cracker,” but it doesn’t know what a cracker is, and while it does have wants like any other animal, it doesn’t know what “want a cracker” actually means.
ML models get a billion images of mushrooms and “learn” what a mushroom looks like, but even if the images are properly labeled poisonous or not poisonous, the model doesn’t know that in the way humans do. It gets even worse when the AI generates new things from the sets it was trained on, which all of these models certainly do. If it invents mushrooms that don’t exist, how can it tell which of these fantasy mushrooms are poisonous and which aren’t? It can’t know, but it sure as hell can make something up.
Hell, most AI can’t even get text right.
Don’t trust AI for anything that isn’t hard-coded math, or for systems that reference and directly quote known-good sources without any creative embellishment.
You are lumping a whole lot of different things that work in completely different ways under the single label of AI. I can’t really blame you, since that’s what the industry does as well, but image recognition, image generation, and large language models like ChatGPT all work entirely differently.
Image recognition in particular can be trained to be extremely accurate with a properly restricted scope and a good dataset. Even so, it would never be enough for identifying mushrooms: no matter whether it’s done by a perfect AI or an organic meatbag, mushrooms simply cannot be reliably identified from a single picture, because different species can look literally identical to one another.
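A toy sketch of that limit, using a nearest-centroid classifier on hand-made feature vectors. Everything here is invented for illustration (the feature numbers and species names are not real mycology): if two species map to identical features, no classifier, however good, can separate them.

```python
def classify(features, centroids):
    """Return the label whose centroid is closest to the input features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# (cap_width_cm, stem_length_cm, gill_shade) -- invented, illustrative features
centroids = {
    "meadow mushroom":  (8.0, 6.0, 0.2),
    "destroying angel": (8.0, 6.0, 0.2),  # deadly lookalike: identical features
    "fly agaric":       (12.0, 10.0, 0.9),
}

# A distinctive species is classified fine...
print(classify((12.0, 10.0, 0.9), centroids))

# ...but for the lookalike pair the answer is an arbitrary tie-break,
# not knowledge -- the model literally cannot tell them apart.
print(classify((8.0, 6.0, 0.2), centroids))
```

Within a restricted, well-separated label set this approach can be very accurate; the failure is specific to inputs whose features coincide.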
And parrots totally can learn what words mean. Just like a dog can learn what “sit,” “paw,” or “let’s go for a walk” mean, parrots just also have the ability to “talk.”
What’s wrong with lumping together a lot of things with different substrates if, as you admit yourself, there’s still no evidence any of them work well?
LLMs are the current big buzzword and the main ones that “don’t work,” because people assume and expect them to be intelligent and to actually know and understand things, which they simply do not. Their purpose is to generate text the way a human would, and at that they work remarkably well: give a competent LLM and a human the same writing prompt, and you are unlikely to spot which one is the machine unless you catch it lying, and even then it might just be a clueless human talking about things he kind of understands but isn’t an expert in. Like me.
But they are constantly being used for all kinds of purposes they don’t fit well yet, because you can’t actually trust anything they say.
Image generation mainly has issues with hands and fingers, so it isn’t bulletproof at faking realistic imagery, but for many subjects and styles it can create images that are pretty much impossible to identify as generated. Civit.ai is full of examples. Most people think it doesn’t work yet because they mostly see someone throwing simple prompts into Midjourney and taking the first thing it generates for an article thumbnail.
And image identification definitely works, but it’s… quirky. I said it can’t be used to identify mushrooms, because nothing can tell apart two things that look exactly the same. But give a model enough photos of every single Hot Wheels car that exists, and you can get one that will perfectly recognize which one you have. It will also tell you that a shoe or a tree is one of them, though, because it only knows about Hot Wheels cars.
Making one that tries to identify absolutely everything from a photo, like Google Lens, will still misidentify some things because the dataset is so enormous, but so would a human. The difference is that for an AI, “I don’t know” is never an option: it always gives whatever answer it thinks is most likely.
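The “never says I don’t know” behavior falls straight out of how classifiers are usually built: a softmax turns raw scores into probabilities that always sum to 1, so some label always “wins,” even for a photo of a shoe. A minimal sketch (the labels, scores, and 0.7 threshold are all made up for illustration):

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that always sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["hot wheels car A", "hot wheels car B", "hot wheels car C"]

# Near-identical junk scores (say, from a photo of a shoe) still crown a winner:
shoe_scores = [0.1, 0.05, 0.08]
probs = softmax(shoe_scores)
best = max(range(len(labels)), key=lambda i: probs[i])
print(labels[best], round(probs[best], 3))  # a confident-sounding wrong answer

# One common mitigation (not something every deployed system does):
# abstain when the top probability is below a threshold.
def classify_or_abstain(scores, labels, threshold=0.7):
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best] if probs[best] >= threshold else "I don't know"

print(classify_or_abstain(shoe_scores, labels))  # -> "I don't know"
```

The catch is that the threshold is a design choice bolted on afterwards; the model itself has no concept of being out of its depth.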
edit: oops accidental post
Okay? So I am quite aware of all of this already; none of this info is new.
My question is still: what’s wrong with lumping all of these technologies together as “AI” when all of them are ineffective at identifying mushrooms (and certain other tasks)?