I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight I’ve seen is the guy who made a tool to spam job applications to combat worthless AI job recruiters and HR tools.
I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven’t found a particularly useful use case for LLMs.
ChatGPT enabled me to automate a small portion of my former job. So that was nice.
I like asking ChatGPT for movie recommendations. Sometimes it makes some shit up, but it usually comes through; I’ve already watched a few flicks I really like that I never would’ve heard of otherwise.
I use it often for grammar and syntax checking
It’s great at summarization and translations.
tl;dr?
Translates Sumerian texts.
Until it makes shit up that the original work never said.
The services I use, Kagi’s autosummarizer and DeepL, haven’t done that when I’ve checked. The downside of the summarizer is that it sometimes drops subtle things I’d have liked it to keep. I imagine that would happen with a human summarizer too, though. DeepL has been very accurate.
LLMs are especially bad at summarization for the use case of presenting search results. The source is just as critical a piece of information for search as the content itself, and LLMs obfuscate that critical source information and blend results from multiple sources together…
LLMs are TERRIBLE at summarization
Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited about the summarization use case when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.
Might want to rethink the summarization part.
AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.
Thank you for pointing that out. I don’t use it for anything critical, and it’s been very useful because Kagi’s summarizer works on things like YouTube videos friends link which I don’t care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.
I have found ChatGPT to be better than Google for random questions I have and for asking general advice on a whole bunch of things, though I know when to go to other sources. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and write lots of Excel formulae.
Sometimes it’s easier to check ChatGPT’s answers, ask follow up questions, look at the sources it provides and live with the occasional hallucinations than to sift through the garbage pile that google search has become.
I use AI every day. I think it’s an amazing tool. It helps me with work, with video games, with general information, with my dog, and with a whole lot of other things. Obviously verify the claims if it’s an important matter, but it’ll still save you a lot of time. Prompting AI with useful queries is a skill set that everyone should be developing right now. Like it or not, AI is here and it’s going to impact everyone.
To me AI is useless. It’s not intelligent, it’s just a blender that blends up tons of results into one hot steaming mug of “knowledge”. If you toss a nugget of shit into a smoothie while it’s being blended, it’s gonna taste like shit. Considering the amount of misinformation on the internet, everything AI spits out is shit.
It is purely derivative, devoid of any true originality, with a vague facade of intelligence in an attempt to bypass existing copyright law.
Your last line pretty much sums up my feelings entirely.
I have had fun with ChatGPT, but in terms of integrating it into my workflow: no. It just gives me too much garbage on a regular basis for me not to have to check and recheck anything it produces, so it’s more efficient to do it myself.
And as entertainment, it’s more expensive than e.g. a game, over time.
ChatGPT can be useful or fun every now and then but besides that no.
The image generators have been great for making token art for my D&D campaign. Other than that, no.
I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.
I’m all for these types of specialized AIs, and hope to see even further advances in the future.
I love chatgpt, and am dumbfounded at all the AI hate on lemmy. I use it for work. It’s not perfect, but helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I’ve already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking chatgpt. It’s extremely helpful, and definitely faster for what I’d already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get fully open-source models. If I could “teach” it to do certain tasks someday, that would be friggin awesome.
Even before AI, corps have been following a strategy of understaffing with the idea that software will make up for it, and it hasn’t. It’s beyond the pale how much work I have to do now for almost anything related to the private sector (as their customer, not as an employee).
I like messing with the locally hosted AI that’s available. We have a locally hosted LLM trained on our command media at work that is occasionally useful. Otherwise I avoid AI unless I set it up myself or know who did.
I think it’s a fun toy that is being misused and forced into a lot of things it isn’t ready for.
I’m doing a lot with AI but it’s pretty much slop. I use self-hosted Stable Diffusion, Ollama, and Whisper for a Discord bot, code help, and writing assistance, and I pay ElevenLabs for TTS so I can talk to it. It’s been pretty useful. It’s all running on an old computer with a 3060. Voice chat is a little slow and has its own problems, but it’s all been fun to learn.
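For anyone curious, the Ollama half of a setup like that is basically just a local HTTP call. Here’s a minimal Python sketch of the shape of it; the model name and prompt are placeholders, not necessarily what the setup above actually runs:

```python
# Rough sketch: ask a local Ollama server for a completion, the way a
# Discord bot command handler might. Model name and prompt are placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama instance and return the generated text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask_local_llm("Give me a two-sentence recap of last night's D&D session."))
```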
I’ve never had AI code run straight off the bat - generally because if I’ve resorted to asking an AI, I’ve already spent an hour googling - but it often gives me a starting point to narrow my search.
There’s been a couple of times it’s been useful outside of coding/config - for example, finding the name of some legal concepts can be fairly hard with traditional search, if you don’t know the surrounding terminology.
For the most part, it’s worthless garbage.
Kitboga has used AI (STT, LLMs, and TTS) to waste the time of scammers.
There are AI tools being used to develop new cures which will benefit everyone.
There are AI tools being used to help discover new planets.
I use DLSS for gaming.
I run a lot of my own local AI models for various reasons (rough sketch of a couple of them at the end of this comment). Whisper - for audio transcription/translation.
Different Diffusion Models (SD or Flux) - for some quick visuals to recap a D&D session.
Tesseract OCR - to scan an image and extract any text that it can find (makes it easy to pull out text from any image and make it searchable).
Local LLMs (Llama, Mixtral) - for brainstorming ideas, reformatting text, etc. It’s great for getting started with certain subjects/topics, as long as I verify everything that it says.
For fun I’ll probably set up GLaDOS like what was done here: https://www.reddit.com/r/LocalLLaMA/comments/1csnexs/local_glados_now_running_on_windows_11_rtx_2060/
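Here’s the rough sketch of the Whisper and Tesseract pieces mentioned above; file names and the Whisper model size are placeholders, so adjust for your own files and hardware:

```python
# Rough sketch of local Whisper transcription plus Tesseract OCR.
# File names and the Whisper model size below are placeholders.
import whisper          # openai-whisper
import pytesseract      # Python wrapper around the Tesseract OCR binary
from PIL import Image


def transcribe_audio(path: str) -> str:
    """Transcribe an audio file locally with Whisper."""
    model = whisper.load_model("base")   # bigger models are more accurate but slower
    result = model.transcribe(path)
    return result["text"]


def extract_text(image_path: str) -> str:
    """Pull whatever text Tesseract can find out of an image, e.g. to make it searchable."""
    return pytesseract.image_to_string(Image.open(image_path))


if __name__ == "__main__":
    print(transcribe_audio("session_recap.mp3"))
    print(extract_text("handout.png"))
```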