• Phoenixz@lemmy.ca · 18 hours ago

    Eh, no?

    I still haven’t seen AI produce anything remotely reliable within a single function, let alone put multiple functions together, let alone build something with multiple classes, let alone something actually useful, let alone a big project

    Yeah, AI got useful as a rubber ducky. I use it for getting a sense of possible directions, or the occasional fresh idea I hadn’t considered. It saves me from opening up DDG and Ctrl-clicking the first ten results to check every page, sometimes.

    But the AI coder inside my IDE still gets code confidently wrong about 70% of the time, and we’re talking single lines here, and the mistakes are fundamental, like using variables that haven’t been initialized.

    Having said that, I’m sure that someday someone will come up with an AI that can do real development. That day it’ll be able to develop itself, and that day we’ll all be properly fucked, because that will very quickly spiral into something we can’t control and something more intelligent than all of us.

    I’m sure some tech bros can’t wait for that to happen and I honestly believe we need to rid the world of these idiots before they doom and destroy us all.

    • ProbablyBaysean@lemmy.ca · edited · 4 hours ago

      I built a full-stack SaaS that is deployed at my work. It is exposed to the internet, and I have only used pentesting and asking the AI “what is this”, “fix this”, and feature requests.

      It has awful context limitations. Saying “do this” means it overfills its context halfway through and loses the nuance as it tries to restart the task after summarizing. I don’t trust it to make a todo list and keep to it, so I have to use markdown files as slightly longer-term memory.

      I have had good progress when I say “add this pentest_failure/feature_request to an open-items-list markdown file”; the AI then finds the context, defines the issue, and updates the file. Rinse, repeat. THEN I say “I want to make a refactor that will fix/implement as many of the open-items-list issues as possible; can you make a refactoring spec?” THEN I carefully review the business logic in the refactoring spec. THEN I tell the AI to implement phase 1 of the spec, then I test, then I say do phase 2… etc.
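      For concreteness, a minimal sketch of what that “markdown file as memory” loop could look like. The file name, item wording, and prompts here are my own illustration, not the commenter’s actual setup:

      ```shell
      # 1. Keep a running open-items list that the AI appends to
      #    after each pentest finding or feature request:
      printf '%s\n' \
        '# Open Items' \
        '- [ ] pentest: /api/users leaks emails to unauthenticated callers' \
        '- [ ] feature: export report as CSV' > OPEN_ITEMS.md

      # 2. Prompt: "Make a refactoring spec that fixes as many open items
      #    as possible" -> the AI writes e.g. REFACTOR_SPEC.md.
      # 3. Review the business logic in the spec yourself.
      # 4. Drive it one phase at a time:
      #    "Implement phase 1 of the spec" -> test -> "phase 2" -> ...

      grep -c '^- \[ \]' OPEN_ITEMS.md   # prints 2: both items still open
      ```

      The point of the external file is that it survives the AI’s context resets, so each new session can re-read the list instead of relying on its own memory.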

      Design concerns like single source of truth, DRY, separation of concerns, and YAGNI have come up. I have asked about API security best practices, and about test environments vs. production.

      I developed without git, and the sheer amount of dumb duct-tape code produced by a no-short-term-memory AI, exposed by pentesting, was infuriating, but I got a process that works for my level of understanding.
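      A hedged sketch of the git safety net that was missing here (directory and commit messages are invented for illustration): committing before and after each AI phase makes a bad phase one revert away instead of a pile of duct tape.

      ```shell
      # Put the project under git so every AI phase is a checkpoint.
      git init -q saas-app && cd saas-app
      git config user.email "dev@example.com" && git config user.name "dev"

      echo "placeholder app code" > app.js
      git add -A && git commit -qm "baseline before letting the AI loose"

      # ...the AI implements a phase; commit it as its own checkpoint:
      echo "phase 1 changes" >> app.js
      git add -A && git commit -qm "phase 1: fix open items"

      # If a phase turns out to be duct tape, undo it without losing history:
      # git revert --no-edit HEAD
      ```

      With this in place, pentest findings can be traced back to the phase that introduced them instead of spelunking through one undifferentiated codebase.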

      AI Skills, rules, etc. are still not quite clear to me.