I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.

Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).

  • TehPers@beehaw.org · 1 day ago

    For personal projects, I don’t really care what you do. If someone who doesn’t know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn’t really bother me. Just don’t ask me to look at the code, and definitely don’t ask me to use the tool.

  • Reptorian@programming.dev · 2 days ago

    Nah. I only used AI as a last resort, and in my case it has worked out. I can’t see myself using AI for code again.

  • MXX53@programming.dev · 2 days ago

    I probably wouldn’t do it. I do have AI help at times, but it is more for bouncing ideas off of, and occasionally it’ll mention a library or tech stack I haven’t heard of that allegedly accomplishes what I’m looking to do. Then I go research the library or tech stack and determine if there is value.

  • NeuroByte@lemm.ee · 19 hours ago

    lol wut, asking AI to do the work and then going back and fixing bugs…?

    To me, vibe coding is picking a project to work on and just getting building. Very basic planning, without much up-front design, like building with Legos without the instruction manual. I make design decisions and refactor as I code. I certainly get some AI input when I don’t know how to implement something, but I’ll usually work “blindly” from my own ideas and the documentation. I probably visit stackoverflow while vibe coding more than I do chatgpt.

  • Kissaki@programming.dev · 7 hours ago

    I think calling that “vibe coding” is very unfitting. I haven’t seen it called that before.

  • EnthusiasticNature94@lemmy.blahaj.zone · 2 days ago

    This seems like a game you’d play with other programmers, lol.

    I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.

    The whole coding process? No. 😭

    • Hoimo@ani.social · 1 day ago

      You can save time at the cost of headaches, or you can save headaches at the cost of time. You cannot save both time and headaches, you can at most defer the time and the headaches until the next time you have to touch the code, but the time doubles and the headaches triple.

  • A1kmm@lemmy.amxl.com · 1 day ago

    As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.

    It got the interface right, but it got some details really wrong:

    • It stored way more information than it needed in the structure tracking state, some of it very sensitive.
    • It repeatedly converted back and forth between byte arrays and the proper types unnecessarily - reducing type safety and making things slower.
    • Instead of using type-safe enums, it defined integer constants for no good reason.
    • It logged information about failures as variable-length strings, creating a possible timing side channel.
    • Despite having a 96 bit nonce to work with (-1 bit to identify client->server and server->client), it used a 32 bit integer to represent the sequence number.
    • And it “helpfully” used wrapping_add to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never ever re-use the same nonce for the same key (otherwise you leak the XOR of the two messages). wrapping_add explicitly means that when you get up to the maximum number (and remember, it’s only 32 bits, so there’s only about 4.3 billion numbers) it silently wraps back to 0. The secure implementation would be to explicitly fail if you go past the maximum size for the integer before attempting to encrypt / decrypt - and the smart choice would be to use at least 64 bits (see the sketch after this list).
    • It also rolled its own bespoke hash-based key derivation function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated).
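
    For concreteness, here’s roughly what the last two points look like done properly - just a minimal sketch, not the code the model produced: the derive_key / Sealer names are made up for illustration, and it assumes the off-the-shelf RustCrypto hkdf, sha2 and aes-gcm crates (the 0.10-era Aead/KeyInit API).

    ```rust
    // Sketch only: derive the AES-256 key with HKDF instead of a bespoke hash
    // construction, and refuse to wrap the per-direction sequence counter.
    use aes_gcm::{
        aead::{Aead, KeyInit},
        Aes256Gcm, Key, Nonce,
    };
    use hkdf::Hkdf;
    use sha2::Sha256;

    /// Derive a 32-byte AES-256 key from the ECDH shared secret with HKDF-SHA256.
    fn derive_key(shared_secret: &[u8], salt: &[u8], info: &[u8]) -> [u8; 32] {
        let hk = Hkdf::<Sha256>::new(Some(salt), shared_secret);
        let mut okm = [0u8; 32];
        hk.expand(info, &mut okm)
            .expect("32 bytes is a valid HKDF-SHA256 output length");
        okm
    }

    /// Per-direction sealing state: one key, one monotonically increasing counter.
    struct Sealer {
        cipher: Aes256Gcm,
        seq: u64,        // 64-bit counter; still leaves 31 spare bits in the nonce
        is_client: bool, // direction bit: client->server vs server->client
    }

    impl Sealer {
        fn new(key_bytes: &[u8; 32], is_client: bool) -> Self {
            let key = Key::<Aes256Gcm>::from_slice(key_bytes);
            Self { cipher: Aes256Gcm::new(key), seq: 0, is_client }
        }

        fn seal(&mut self, plaintext: &[u8]) -> Result<Vec<u8>, &'static str> {
            // Fail *before* encrypting rather than silently wrapping: reusing a
            // (key, nonce) pair with GCM leaks the XOR of the two messages.
            let next = self.seq.checked_add(1).ok_or("sequence number exhausted")?;

            // 96-bit nonce: 1 direction bit, 31 zero bits, 64-bit counter.
            let mut nonce_bytes = [0u8; 12];
            nonce_bytes[0] = if self.is_client { 0x80 } else { 0x00 };
            nonce_bytes[4..].copy_from_slice(&self.seq.to_be_bytes());
            let nonce = Nonce::from_slice(&nonce_bytes);

            let ciphertext = self
                .cipher
                .encrypt(nonce, plaintext)
                .map_err(|_| "encryption failed")?;
            self.seq = next;
            Ok(ciphertext)
        }
    }
    ```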

    To be fair, I didn’t really expect it to work well. Some kind of security auditor agent that does a pass over all the output might be able to find some of the issues, and pass it back to another agent to correct - which could make vibe coding more secure (to be proven).

    But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.

  • GissaMittJobb@lemmy.ml · 20 hours ago

    Somewhat impressive, but still not quite a threat to my professional career, as it cannot produce reliable software for business use.

    It does seem to open things up for novices to create ‘bespoke software’ they previously would not have been able to build, or could not have justified the time commitment for, which is fun. This means more software gets created that otherwise would not have existed, and I like that.

  • FizzyOrange@programming.dev · 2 days ago

    Based on my experience with AI coding, I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON.

    As soon as you get into anything that isn’t already all over the internet, it starts to bullshit.

    But if you’re working in a domain it’s decent at, why not? I’ve found that in those cases fixing the AI’s mistakes can be faster than writing the code myself. Often I actually find it useful for helping me decide how I want to write code, because the AI does something dumb and I go “no, I obviously don’t want it like that”…

  • cool@lemmings.world · 19 hours ago

    I mean, at some point you have to realize that instructing an AI on every single thing you want to do starts to look a lot like programming.

    Programming isn’t just writing code. It’s being able to reason about a method of doing things. Until AI is at the level of a designer, you can expect humans to shoulder the brunt of the work to bring software to life.

    • PM_Your_Nudes_Please@lemmy.world · 15 hours ago

      Yeah, there’s also the “debugging is just as hard as writing elegant code” side of things. Vibe coding is largely just putting yourself in a permanent debugging role.

      The big issue I see with vibe coding is that you need to know best practices to build secure code. Even if you don’t adhere to them all the time, best practices exist for a reason. And a programmer who doesn’t even know them is a dangerous thing, because they won’t even be able to see what is insecure (until it’s far too late).

      Studies have found that vibe coders tend to produce less secure code, but have higher confidence in their code being secure; it’s essentially Dunning-Kruger in practice. I’d have no issue with someone using AI to get the broad strokes down. But then they need to be able to back it up with actual debugging. Not just “I didn’t even bother looking at it. If it compiles, push it to prod.”