In particular, know how to identify the common and deadly species (e.g. much of the genus Amanita) yourself, and get multiple trustworthy field guides for your part of the world.
What’s wrong with lumping a lot of things built on different substrates together if, as you admit yourself, there’s still no evidence any of them work well?
LLMs are the current big buzzword and the main ones that “don’t work”, because people assume and expect them to be intelligent and to actually know and understand things, which they simply do not. Their purpose is to generate text the way a human would, and at that they work remarkably well: give a competent LLM and a human the same writing prompt, and you are very unlikely to spot which one is the machine unless you catch it lying. Even then, it might just be a clueless human talking about things he kinda understands but isn’t an expert in. Like me.
But they are constantly being used for all kinds of purposes they really don’t fit well yet, because you can’t actually trust anything they say.
Image generation mainly has issues with hands and fingers, so it isn’t bulletproof at faking realistic imagery, but for many subjects and styles it can create images that are pretty much impossible to identify as generated. Civit.ai is full of examples. Most people think it doesn’t work yet because what they mostly see is someone throwing a simple prompt into Midjourney and taking the first thing it generates for an article thumbnail.
And image identification definitely works, but it’s… quirky. I said it can’t be used to identify mushrooms, because nothing can tell apart two things that look exactly the same. But give one enough photos of every single Hot Wheels car that exists, and you can get a model that will reliably recognize which one you have. But it will also tell you that a shoe or a tree is one of them, because it only knows about Hot Wheels cars.
Making one that tries to identify absolutely everything from a photo, like Google Lens, will still misidentify some things because the dataset is so enormous, but so would a human. The difference is that for an AI, “I don’t know” is never an option: it always gives whatever answer it thinks is most likely.
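That “never says I don’t know” behavior falls straight out of how classifiers are usually built: a softmax turns raw scores into probabilities, and argmax always names *some* class. Here is a minimal toy sketch of that, assuming a hypothetical three-class Hot Wheels classifier with made-up logit values (in a real system the logits would come from a neural network):

```python
import math

# Hypothetical classes a toy classifier was trained on.
CLASSES = ["hot_wheels_mustang", "hot_wheels_camaro", "hot_wheels_beetle"]

def softmax(logits):
    """Convert raw scores into probabilities that always sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the most probable class and its confidence.

    There is no 'I don't know' branch: even a photo of a shoe or a
    tree produces logits, and argmax still picks one of the classes.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]

# Made-up logits for a photo of an actual toy car: one class clearly wins.
print(classify([8.1, 2.3, 1.0]))

# Made-up logits for a photo of a shoe: all scores low and similar,
# yet the classifier still confidently-ish names a car.
print(classify([0.4, 0.3, 0.2]))
```

Real systems bolt on a confidence threshold or an explicit out-of-distribution detector to get an “I don’t know” answer; the base model itself has no such concept.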
Okay? So I am quite aware of all of this already; none of this info is new.
My question is still: what’s wrong with lumping all of these technologies together as “AI” when all of them are ineffective at identifying mushrooms (and at certain other tasks)?