It’s getting more present at work every day. I keep hearing even seniors talk about how they “discussed” something with ChatGPT or how they’ll ask it for help. I had to resolve some issue with devops a while back and they just kept pasting errors into ChatGPT and trying out whatever it spewed back, which I guess wasn’t that much different from me googling the same issue and spewing back whatever SO said.
I tried it myself, and while it’s neat for some simple repetitive things, I always end up back at normal Google searches or clicking through to the sources, because the problems I usually have to google for are complicated ones where I need the whole original discussion and context, not just a summary that might skip important caveats.
I dunno. I simultaneously feel old and out of touch, angry at myself for not just going with the flow and buying into it, but also disappointed in other people who rely on it without ever understanding that it’s flawed, unreliable, and untrustworthy, and that it’s making people into worse programmers.
Someone suggested using it to identify things you only remember bits of or certain scenes from. I tried using it to find this YA book I read as a kid; it was not at all helpful, but it did eventually lead me to do more research and find the book elsewhere. (And it turns out the scene I was describing was exactly what happened, and the characters were named exactly what I thought they were, so that was both annoying at the time and frustrating later.)
I also tried using it to find this really obscure, incredibly bad 1970s TV movie that I had vague recollections of. Again, the scene was pretty much what I remembered, but it couldn’t identify the movie. I eventually found a site that lists the plots of old TV movies and read through like 30 pages of synopses until I found the one I was looking for.
I’ve also tried using it to find this 1980s interactive fiction game, but it proved useless once again. And once again, further research has identified a couple of possibilities, except I haven’t had time to try to find the game and set up the right environment for it.
So my experience has been that it’s useless at finding the things I want it to find, but that persisting with it may lead me to find what I’m looking for elsewhere.
I used Bing AI (back when it was called that) to try to find a mall I went to many years ago. It was brand new and still had some parts being built, so it looked very different from how it does today, which made it difficult to find. Neither my mother nor I remembered the name or any stores, just the general area it was in. It took some time, but the AI was able to discern what mall it was from the details I gave it.
People have to view them less as general AI and more like search engines you can have a back and forth with.
That’s how I was using it, like an iterative search engine.
ChatGPT is not a search engine, nor can it “think”. I’m not surprised it didn’t work in that way.
I used it the other day to redact names from a spreadsheet. It got 90% of them, saving me about 90 minutes of work. It has helped clean up anomalies in databases (typos, inconsistencies in standardized data sets, capitalization errors, etc). It also helped me spruce up our RFP templates by adding definitions for standard terminology in our industry (which I revised where needed, but it helped to have a foundation to build from).
As mentioned in a different post, I use it for D&D storylines, poems, silly work jokes, and prompts to help make up bedtime stories.
My wife uses it to help proofread her papers and make recommendations on how to improve them.
I use it more often now than google search. If it’s a topic important enough that I want to verify, then I’ll do a deeper dive into articles or Wikipedia, which is exactly what I did before AI.
So yea, it’s like the personal assistant that I otherwise didn’t have.
I’ve used it to help me write batch scripts and Excel formulas, but found it pretty bad for LISP.
GitHub Copilot became my daily helper at work. While I’m not 100% satisfied with its code quality, I must admit it’s very handy at writing boilerplate code. A few days ago, I had to write code without having internet access, and it was so disappointing to write boilerplate code by hand. It’s an easy task, but it’s time-consuming and unpleasant.
I’m very curious what these development workflows are where writing “boilerplate” code is so common, especially where the AI-generated version is good enough. It’s very rare for me to need this, and even then I’ve generally spent more time cleaning up what it built than the time it saved me.
It could be experience level. Less experienced engineers find the boilerplate spit out by Copilot extremely useful; more experienced ones find it more in the way.
Is some of this just caused by more experienced folks being less inclined to learn AI tools? Maybe. I think experience writing code is the bigger factor though.
I’m trying real hard to find it useful. I can see it eventually being more useful, but it’s just not worth the cost.
Maybe. It’s just really rare for me to be looking at a “blank page”. 95% of my work is an incremental improvement on top of something existing, so “boilerplate” would come into my workflow pretty rarely.
Maybe some frequently-changing data science or data mining tasks?
By “boilerplate”, I mean constructors, simple methods, and even small classes that have some “standard” implementation. Copilot easily writes simple constructors, class initialization, and destruction. It can suggest a small method implementation right after I add its declaration to the related interface. Anything that can be done almost without thinking about how to do it, because there are standard practices, is handled by Copilot. It’s not perfect: sometimes it writes a whole method at a time, sometimes only line by line, and sometimes it refuses to suggest any code. But it often writes valid code.
I will forever continue to suggest that as a developer, you learn your IDE of choice’s features for templates/code snippets, or make yourself a “templates” file to copy and paste from.
Far more control, far less opportunity to miss something small and mess up, cheaper, less resource use, and faster.
Using VSCode/VSCodium’s snippets feature has been a serious game changer for me when it comes to boilerplate.
Copilot shines where snippets/templates don’t work or make no sense. It can write constructors, simple methods, and even simple classes if something similar is found in the solution.
Bit sad reading these comments. My life has measurably improved ever since I jumped on using AI.
At first I just used Copilot for helping me with my code. I like using a pretty archaic language and it kept trying to feed me C++ code. I had to link it to the online reference, and it surprisingly was able to adapt each time. It still gave a few errors here and there, but it’s a good time saver and “someone” to “discuss” with.
Over time it has become super good, especially with the VSCode extension that autofills code. Instead of having to ask for help from one of the couple hundred people experienced with the language, I can just ask Copilot if I can do X or Y, or ask for general advice when planning out how to implement something. Legitimately a great and powerful tool, so it shocks me that some people don’t use it for programming (but I am pretty bad at coding too, so).
I’ve also bit the bullet and used it for college work. At first it was just asking Gemini for refreshers on what X philosophical concept was, but it devolved into just asking for answers because that class was such a snooze I could not tolerate continuing to pay attention (and I went into this thinking I’d love the class!). Then I used it for my Geology class because I could not be assed to devote my time to that gen ed requirement. I can’t bring myself to read about rocks and tectonic plates when I could just paste the question into Google and get the right answer in seconds. At first I would meticulously check for sources to prevent mistakes from the AI buuuut I don’t really need 100%… 85% is good enough and saves so much more time.
A me 5 years younger would be disgusted at cheating but I’m paying thousands and thousands to pass these dumb roadblocks. I just want to learn about computers, man.
Now I’d never use AI for writing my essays because I do enjoy writing them (investigating and drawing your own conclusions is fun!), but this economics class is making it so tempting. The shit that I give about economics is so infinitesimally small.
So, to be clear, your use cases are “copilot’s assistance with programming in an obscure language for fun” and “cheating on college classwork”.
Just a few examples of my use cases but yes. It’s an even quicker search engine.
Lmao, it’s funny how most of these use cases rarely stray from the stereotype of ‘I can’t spend an hour focusing on something and learn so I’ll take a shortcut instead’.
Meanwhile at work all chatGPT has caused is misery as it makes people think they’re expert programmers now while I have to debug their shitty code. Do they learn? Nope, just repeatedly serving up slop.
I used it once to write a polite “fuck off” letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it with a story just made it bland, and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can’t imagine people just using whatever it spits out.
just made it bland, and simplified
Not always, but for the most part, you need to tell it more about what you’re looking for. Your prompts need to be deep and clear.
“change it to a relaxed tone, but make it make me feel emotionally invested, 10th grade reading level, add descriptive words that fit the text, throw an allegory, and some metaphors” The more you tell it, the more it’ll do. It’s not creative. It’s just making it fit whatever you ask it to do. If you don’t give enough direction, you’ll just get whatever the random noise rolls, which isn’t always what you’re looking for. It’s not uncommon to need to write a whole paragraph about what you want from it. When I’m asking it for something creative, sometimes it takes half a dozen change requests. Once in a while, I’ll be so far off base, I’ll clear the conversation and just try again. The way the random works, it will likely give you something completely different on the next try.
My favorite thing to do is give it a proper outline of what I need it to write, and set the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing. When it’s >80% how I like it, I take the text and do copy edits on it until I’m satisfied.
It’s def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that’s not nothing.
i’ve used it fairly consistently for the last year or so. i didn’t actually start using it until chatgpt 4 and when openai offered the $20 membership
i think AI is a tool. like any other tool, your results vary depending on how you use it
i think it’s really useful for specific intents
example, as a fancy search engine. yesterday I was watching Annie from 1999 with my girlfriend and I was curious about the capitalist character. i asked chatgpt the following question
in the 1999 hit movie annie, who was the billionaire mr warbucks supposed to represent? were there actually any billionaires in the time period? it’s based around the early 1930s
it gave me context. it showed examples of the types of capitalist the character was based on. and it informed me that the first billionaire was in 1916.
very useful for this type of inquiry.
other things i like using it for are to help with coding. but there’s a huge caveat here. some things it’s very helpful for… and some things it’s abysmal for.
for example i can’t ask it “can you help me write a nice animation for a react native component using reanimated”
because the response will be awful and won’t work. and you could go back and forth with it forever and it won’t make a difference. the reason is it’s trained on a lot of stuff that’s outdated so it’ll keep giving you code that maybe would have worked 4 years ago. and even then, it can’t hold too much context so complex applications just won’t work
BUT certain things it’s really good at. for example i needed to write a script for work. i use fish shell but sometimes i don’t know the proper syntax or everything fish is capable of
so I ask
how to test, using fish, if an “images.zip” file exists in $target_dir
it’ll pump out
if test -f "$target_dir/images.zip"
    echo "File exists."
else
    echo "File does not exist."
end
which gives me what i needed in order to place it into the script i was writing.
or for example if you want to convert a bash script to a fish script (or vice versa), it’ll do a great job
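as a sketch of what that kind of conversion looks like (the $target_dir value and the wrapper function here are made up for illustration, not something the model produced), here’s the fish check from above translated into bash:

```shell
#!/usr/bin/env bash
# bash translation of the fish file-existence check above;
# the target_dir value and function name are made up
check_images_zip() {
    if [ -f "$1/images.zip" ]; then
        echo "File exists."
    else
        echo "File does not exist."
    fi
}

target_dir="/tmp"
check_images_zip "$target_dir"

# the fish original, for comparison:
#   if test -f "$target_dir/images.zip"
#       echo "File exists."
#   else
#       echo "File does not exist."
#   end
```

same logic, just `test -f` plus `end` on the fish side versus `[ -f ]` plus `fi` on the bash side.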
so tldr:
it’s a tool. it’s how you use it. i’ve used it a lot. i find great value in it. but you must be realistic about its limitations. it’s not as great as people say; it’s a fancy search engine. it’s also not as bad as people say.
as for whether it’s good or bad for society, i think good. or at least will be good eventually. was the search engine a bad thing for society? i think being able to look up stuff whenever you want is a good thing. of course you could make the argument kids don’t go to libraries anymore… and maybe that’s sorta bad. but i think the trade-off is definitely worth it
I was in the same boat a while ago when I had to use React for remaking a UI (was reworking the whole backend). I’ve never tried writing something in JS/TS, so it was super helpful having Copilot guide my hand. Took me a day but had a beautiful little interactive window by the end of it!
I’m in the same boat. Chat gpt is great for little bits of code where you forgot how to do X Y or Z, especially when there’s a lot of nuance.
Or if you need to ask it a hyper specific question, as long as there’s been 5 people out there asking the 5 pieces of your question, it will combine them into the one answer you want.
Also it sure ain’t perfect, and anyone who thinks it will wholesale replace any skilled job is an idiot. It can assist someone and make them more efficient, but it won’t replace them.
It seemingly has little impact. I’ve attempted to use LLMs a couple of times to ask very specific technical questions (on this specific model, running this specific OS version, how do I do this very specific thing) to try and cut down on the amount of research I would have to do to find a solution. The answer every time has been wrong. Once it was close enough to the answer I was able to figure it out but “close enough” doesn’t seem worth bothering with most of the time.
When I search for things I always skip the AI summary at the top of the page.
For me?
Nothing, other than “I tried it with ChatGPT” before they bothered with Documentation.
Fuck anyone who skips documentation
I am going to say that so far it hasn’t done that much for me. I did originally ask it some silly questions, but I think I will be asking it questions about coding soon.
I love using it for writing scripts that need to sanitize data. One example: I had a bash script that looped through a CSV containing domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.
These were domains on a Windows server that was being retired. The python script I had Copilot write was to clean up the output and make the new zone files ready for import into PowerDNS. Made sure the SOA and all that junk was set. Pdns would import the new zone files into a SQL backend.
Sure I could’ve written it myself but I’m not a python developer. It took about 10 minutes of prompting, checking the code, re-prompting then testing. Saved me a couple hours of work easy.
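For flavor, the bash side of that workflow might have looked roughly like this. Everything here (the server IP, the CSV name, the zones/ path, the domains) is invented for illustration, and the dig command is echoed as a dry run rather than executed, so nothing actually touches the network:

```shell
#!/usr/bin/env bash
# Dry-run sketch of an AXFR dump loop like the one described
# above. DNS_SERVER, domains.csv contents, and the zones/ path
# are all made up; the dig command is printed, not run.
DNS_SERVER="192.0.2.10"
printf 'example.com\nexample.org\n' > domains.csv
while IFS= read -r domain; do
    # Real version would redirect dig's output into a per-zone file:
    echo "dig @$DNS_SERVER $domain AXFR > zones/$domain.txt"
done < domains.csv
```

The real loop would then hand those per-zone dumps to the cleanup script before importing into PowerDNS.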
I use it all the time to output simple automation tasks when something like Ansible isn’t apropos
Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: The people who pay money to skilled labor will think “The subscription machine can just do it.” And that sucks.
I’m a digital artist as well, and while I think genAi is a neat toy to play with for shitposting or just “seeing what this dumb thing might look like” or generating “people that don’t exist” and it’s impressive tech, I’m not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from and how it was trained and the impact it has on our culture and planet.
We’re already seeing the results of that slop pile generated from everyone who thought they could “achieve their creative dreams” by prompting a genie-product for it instead of learning an actual skill.
As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It’s like a parrot combined with a “programmer’s rubber ducky.” Sometimes that gets my mind moving, in the same way “autocomplete over and over” might generate interesting thoughts.
I also will say it’s pretty decent at summarizing things. I actually find it somewhat helpful when YouTube’s little “ai summary” is like “This video is about using this approach taking these steps to achieve whatever.”
When the video description itself is just like “Join my Patreon and here’s my 50+ affiliate links for blinky lights and microphones” lol
I use it to explain concepts to me in a slightly different way, or to summarize something for which there’s a wealth of existing information.
But I really wish people were more educated about how it actually works, and there’s just no way I’m trusting the centralized “services” for doing so.
For me, the amount of people and time spent in meetings that talk about AI grossly outweighs any benefit of AI.
It’s affected me by being really annoying to hear about in the news all the time.