I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns but we’ll ignore ethics for the question.

In creative works like writing or art, it feels soulless and poor in quality. In programming, at best it’s a shortcut that lets you avoid deeper learning; at worst it spits out garbage code that you spend more time debugging than you would have spent just writing it yourself.

When I see AI ads directed towards individuals, the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.

So what’s the point of it all?

  • red_concrete@lemmy.ml · 4 days ago

    My understanding is that it will eventually be used to improve autocorrect, once they get it working properly.

  • I use it to re-tone and clarify corporate communications that I have to send out regularly to my clients and internally. It has helped a lot with the amount of time I used to spend copy-editing my own work. I have saved myself many hours doing something I don’t really like (copy-editing) and gained more time for the stuff I do like (engineering).

    • dQw4w9WgXcQ@lemm.ee · 4 days ago

      Absolutely this. I’ve found AI to be a great tool for nitty-gritty questions about a development framework. When googling/duckduckgo’ing, your query needs to match the documentation’s wording pretty closely to find anything specific. AI seems to be much better at “understanding” the content and can match a question to the documentation pretty reliably.

      For example, I was reading docs up and down on Elasticsearch’s website trying to find all possible values of the status field within an aggregated request. Google only led me to general documentation pages without the specifics. However, a quick, loosely worded question to ChatGPT handed me the correct answer, along with a link to the exact spot in the docs where it was specified.
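For readers curious what such a request looks like, here is a rough sketch of a terms aggregation that enumerates the distinct values of a status field (the `all_statuses` name and index layout are invented for illustration; only the aggregation shape is standard Elasticsearch):

```python
import json

# Hypothetical Elasticsearch request body: a "terms" aggregation that
# buckets documents by every distinct value of the "status" field.
# "size": 0 suppresses the document hits so only the buckets come back.
query = {
    "size": 0,
    "aggs": {
        "all_statuses": {
            "terms": {"field": "status"}
        }
    },
}

# This body would be POSTed to /<index>/_search; printing it here
# just shows the shape of the request.
print(json.dumps(query, indent=2))
```

The buckets in the response then list the observed status values, which is exactly the list the general docs made hard to find.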

  • bobbyfiend@lemmy.ml · 3 days ago

    I have a very good friend who is brilliant and has slogged away, slowly shifting the sometimes-shitty politics of a swing state’s drug-and-alcohol and youth-corrections policies from within. She is amazing, but she has a reading disorder and is a bit neuroatypical. Social niceties, and honest emails that don’t piss off her bosses or colleagues, are difficult for her. She jumped on ChatGPT to write her emails as soon as it was available, and has never looked back. It’s been a complete game changer for her. She no longer spends hours every week trying to craft emails that strike that just-right balance; she uses that time to do her job now.

    • corsicanguppy@lemmy.ca · 3 days ago

      I hope it pluralizes ‘email’ like it does ‘traffic’ and not like ‘failure’.

  • Vanth@reddthat.com · 4 days ago

    Idea generation.

    E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, especially as it relates to how birds fly. It came back with suggestions that were 98% amazing and needed only slight modification.

    A work colleague asked an LLM client for wedding vow ideas to break through writer’s block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.

    • Mr_Blott@feddit.uk · 4 days ago

      Those are just ideas that were previously “generated” by humans though, that the LLM learned

      • TheRealKuni@lemmy.world · 4 days ago

        Those are just ideas that were previously “generated” by humans though, that the LLM learned

        That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to come to a result that meets your demand.

        • Iunnrais@lemm.ee · 3 days ago

          I feel like “passing it through a statistical model”, while absolutely true at the technical-implementation level, doesn’t get at the heart of what it is doing in a way people can understand. Using the math terms, potentially deliberately, obfuscates things and makes it seem simpler than it is. It’s like reducing it to “it just predicts the next word”: technically true, but I could implement a black-box next-word predictor by sticking a real person in the black box and asking them to predict the next word, and it would still meet that description.

          The statistical model seems to be building some sort of conceptual grid of word relationships that approximates something very much like actually understanding what the words mean, and how the words are used semantically, with some random noise thrown into the mix at just the right amounts to generate some surprises that look very much like creativity.
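To make the “statistical model” idea concrete, here is a deliberately toy sketch: a bigram table that predicts the next word from counts (the corpus and names are invented; real LLMs learn billions of continuous weights over token relationships rather than a lookup table, but the sampling step is conceptually similar):

```python
import random

# A toy "statistical model" of next-word prediction: bigram counts
# learned from a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which words follow which: repeated continuations appear
# multiple times, so frequent pairs are sampled more often.
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(word: str, rng: random.Random) -> str:
    """Sample a continuation in proportion to its observed frequency."""
    return rng.choice(follows[word])

rng = random.Random(0)
print([next_word("the", rng) for _ in range(5)])  # mostly "cat", sometimes "mat"
```

The point of the toy is only to show that sampling from learned word relationships is not a search over stored documents.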

          Decades before LLMs were a thing, the Zompist wrote a nice essay on the Chinese room thought experiment that I think provides some useful conceptual models: http://zompist.com/searle.html

          Searle’s own proposed rule (“Take a squiggle-squiggle sign from basket number one…”) depends for its effectiveness on xenophobia. Apparently computers are as baffled at Chinese characters as most Westerners are; the implication is that all they can do is shuffle them around as wholes, or put them in boxes, or replace one with another, or at best chop them up into smaller squiggles. But pointers change everything. Shouldn’t Searle’s confidence be shaken if he encountered this rule?

          If you see 马, write down horse.

          If the man in the CR encountered enough such rules, could it really be maintained that he didn’t understand any Chinese?

          Now, this particular rule still is, in a sense, “symbol manipulation”; it’s exchanging a Chinese symbol for an English one. But it suggests the power of pointers, which allow the computer to switch levels. It can move from analyzing Chinese brushstrokes to analyzing English words… or to anything else the programmer specifies: a manual on horse training, perhaps.

          Searle is arguing from a false picture of what computers do. Computers aren’t restricted to turning 马 into “horse”; they can also relate “horse” to pictures of horses, or a database of facts about horses, or code to allow a robot to ride a horse. We may or may not be willing to describe this as semantics, but it sure as hell isn’t “syntax”.

  • TORFdot0@lemmy.world · 4 days ago

    If you don’t know what you are doing and ask LLMs for code, you’re going to waste time debugging it without understanding it. But if you’re just asking for boilerplate, or asking it to add comments and console printouts to existing code for debugging, it’s really great. Sometimes it needs chastising or corrections, but so do humans.

    I find it very useful, but not worth the environmental or even the monetary cost. With how enshittified Google has become, though, I find ChatGPT a necessary evil for finding reliable answers to simple queries.

    • corsicanguppy@lemmy.ca · 3 days ago (edited)

      Ha! I use it to write Ansible.

      In my case, YAML is a tool of Satan and Ansible is its 2001-era minion of stupid, so when I need to write Ansible I let the robots do that for me and save my sanity.

      I understand that using a bot to write the ‘code’ for me makes me less likely to ever learn Ansible; I consider that another benefit, as I won’t need to develop a pot habit later in the hopes of killing the brain cells that record my memory of learning Ansible.

  • Gravitwell@lemmy.ml · 4 days ago

    I have a friend with numerous mental-health issues who texts me long, barely comprehensible messages to update me on how they are doing: no paragraphs, stream-of-consciousness style. So I take those walls of text and tell ChatGPT to summarize them, and they go from a mess of words into an update I can actually understand and respond to.

    Another use for me is getting quick access to answers I’d previously have had to spend far more time reading and filtering across multiple forums and Stack Exchange posts to find.

    Basically they are good at parsing information and reformatting it in a way that works better for me.

  • neon_nova@lemmy.dbzer0.com · 4 days ago

    I wrote guidelines for my small business, then uploaded the file to ChatGPT and asked it to review them.

    It made legitimately good suggestions and rewrote the documents in better-sounding English.

    Because of ChatGPT, I will be introducing more wellness and development programs.

    Additionally, I needed med images for my website. Instead of using stock photos, I was able to use Midjourney to generate a whole bunch of images in a consistent style that fits the theme of my business. It looks much better.

  • waka@discuss.tchncs.de · 4 days ago

    Another point in favor of GPTs is getting started on ideas and things, sorting out mental messes, getting useful data out of large clusterfucks of text, and finding a general direction.

    Current downsides: you cannot expect factual answers on topics it has no access to, as it will hallucinate on these without telling you; many GPT providers use your data, so you cannot directly ask about sensitive topics; and it will forget datapoints if your conversation goes on too long.

    As for image generation, it’s still often stuck in the uncanny valley. Right now mainly animation topics benefit, and mostly within the amateur realm; I can’t say how much GPTs are currently used professionally.

    All of these are things you could certainly do yourself, often better or faster than an AI. But sometimes you just need a good-enough solution, and that’s where GPTs shine more and more often. It’s just another form of automation: used for repetitive or mindless tasks, it’s fine. Just don’t expect it to build you a fully working, bug-free piece of software just by asking. That’s not how automation works. At least not to date.

  • communism@lemmy.ml · 4 days ago

    I use LLMs for search when conventional search engines aren’t providing relevant results, and then I fact-check whatever answers they give me. I especially use them for questions that are easy to verify: mathematical questions where I can check the validity of the answers, or programming questions where I can read through the solution, check the documentation for any functions used, make sure the output is logical, and tweak the answer if the LLM gets it nearly right. I always ask LLMs to cite their sources so I can check those too.

    I also sometimes use LLMs for formatting, like when I copy text off a PDF and the spacing is all funky.
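For what it’s worth, that particular cleanup can also be done deterministically with a few regex rules; a rough sketch (the rules are my own guess at typical PDF copy-paste damage, not anything from the thread):

```python
import re

def clean_pdf_text(raw: str) -> str:
    """Normalize the funky spacing that PDF copy-paste often produces."""
    text = raw.replace("\u00a0", " ")             # non-breaking spaces
    text = re.sub(r"-\n(\w)", r"\1", text)        # re-join words hyphenated at line breaks
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)  # lone newlines become spaces
    text = re.sub(r"[ \t]+", " ", text)           # collapse runs of spaces/tabs
    return text.strip()

print(clean_pdf_text("A state-\nment split\nacross   lines."))
# → A statement split across lines.
```

An LLM handles messier cases (columns, headers mid-sentence), but for plain spacing problems a deterministic pass like this is free and repeatable.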

    I don’t use LLMs for this, but I imagine they would be a better replacement for previous automated-translation tools. Translation seems like one of the most obvious applications, since LLMs are just language-pattern recognition at the end of the day. Obviously for anything important the output needs to be checked by a human, but it would, for example, let people participate in online communities whose language they don’t speak.

  • w3dd1e@lemm.ee · 3 days ago

    I need help getting started. I’m not an idea person: I can make anything you come up with, but I can’t come up with the ideas on my own.

    I’ve used it for an outline and then I rewrite it with my input.

    Also, I used it to generate a basic UI for a project once. I struggle with the design part of programming so I generated a UI and then drew over the top of the images to make what I wanted.

    I tried to use Figma, but staring at a blank canvas there doesn’t feel any better.

    I don’t think these things are worth the costs of AI (ethical, financial, social, environmental, etc.). Theoretically I could partner with someone who is good at that stuff, or practice until I felt better about it.

  • thepreciousboar@lemm.ee · 4 days ago

    I know they are being used for, and are decently good at, extracting a single piece of information from a big document (like a datasheet). Since you can easily confirm the information is correct, it’s quite a nice use case.