Multiple times throughout the day. I co-work on personal projects with several different LLMs: primarily Claude, but also GPT-4o and Llama 70B.
It’s simple: I don’t.
I’ve done it once or twice in the early days to see what was up, never since then.
I’ve attempted to use it to program an android app.
Two weeks of effort… it finally builds without issue, but still won’t run.
Daily.
💯
I’ve never tried to have what I would call a conversation, but I use it as a tool both for fixing/improving writing and for writing basic scripts in AutoHotkey, which it’s fairly good at.
Language models are good for removing the emotional work from customer service - either giving bad news in a very detached, professional way, or being polite and professional when what I want is to call someone a fartknocker.
It varies. Sometimes several times a day, sometimes none for a week or two. I’d say about half of those conversations are about software design.
Never
Maybe 3-4 times a year. Can’t see using it more than that at this point.
About as often as I have a conversation with my dishwasher: never.
Jeez…how do you think your dishwasher feels about that? Monster!!
I had fun with it a dozen times or so when it was new, but I’m not amused anymore. The last time was about a month ago, when someone told me about using ChatGPT to seek an answer. I intentionally found a few prompts that made it spill clear bullshit, then sent screenshots to make the point that LLMs aren’t reliable and asking them factual questions is a bad idea.
asking them factual questions is a bad idea
This is a crucial point that everybody should make sure their non-techie friends understand. AI is not good at facts. AI is primarily a bullshitter. LLMs are really only useful where facts don’t matter, like planning events, finding ways to spend time, creating art, etc.
If you’re prepared to fact check what it gives you, it can still be a pretty useful tool for breaking down unfamiliar things or for brainstorming. And I’m saying that as someone with a very realistic/concerned view about its limitations.
Used it earlier this week as a jumping off point for troubleshooting a problem I was having with the USMT in Windows 11.
Absolutely. With code (and I suppose it’s true of other verifiable claims) it’s easier, because you can ask it to write a test for the code, though the test may be invalid, so you have to check that too. With arbitrary facts, I usually ask “is that true?” to have it check itself. Sometimes it just gets into a loop of lies, but other times it actually does tell the truth.
If by conversation you mean asking for a word by describing it conceptually because I can’t remember, every day. If you mean telling it about my day and hobbies, never.
That is basically the best use of LLMs.
A few of the most useful conversations I’ve had with ChatGPT:
I mean, if asking it to help with code/poorly explained JS libraries counts, then… pretty much every day. Other than that… very rarely.
Only to try out the next big upgrade. It will never be human or superhuman.
Your lack of faith is disturbing.
Don’t be too proud of this technological terror you’ve created. The ability to compose haiku is insignificant next to the power of a nice hug.
Lol somebody downvoted you. I love a good hug
All too easy
I use Perplexity pretty much every day. It actually gives me the answers I’m looking for, while the search engines just return blog spam and ads.
I had a professor tell our class straight up: use Perplexity, just put it in your own words.