• ProbablyBaysean@lemmy.ca

    I built a full-stack SaaS that is deployed at my work. It is exposed to the internet, and my entire process has been pentesting plus asking the AI "what is this", "fix this", and feature requests.

    It has awful context limitations. Saying "do this" means it overfills the context halfway through and loses the nuance when it tries to restart the task after summarizing. I don't trust it to make a todo list and stick to it, so I have to use markdown files as slightly longer-term memory.
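    Roughly, one of those memory files looks something like this (the contents here are invented for the example, not pulled from my actual project):

    ```markdown
    # PROJECT_MEMORY.md — notes the AI re-reads at the start of each session
    <!-- illustrative entries only -->

    ## Decisions already made
    - Auth uses session cookies (httpOnly), not tokens in localStorage
    - All validation rules live in one shared module, never duplicated per route

    ## Conventions
    - API responses are always { ok, data, error }
    - Never trust IDs from the request body; derive them from the session
    ```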

    I have had good progress when I say "add this pentest_failure/feature_request to an open items list markdown file"; the AI then finds the context, defines the issue, and updates the file. Rinse and repeat. THEN I say "I want to make a refactor that will fix/implement as many of the open items list issues as possible, can you/the_ai make a refactoring spec". THEN I carefully review the business logic in the refactoring spec. THEN I tell the AI to implement phase 1 of the refactoring spec, then I test, then I say do phase 2... etc.
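    To make that concrete, here's roughly the shape the open items list and the refactoring spec end up with (example entries, not from my actual project):

    ```markdown
    # OPEN_ITEMS.md (illustrative)
    - [ ] PENTEST: /api/reports accepts an arbitrary user_id in the body — must come from the session
    - [ ] PENTEST: error responses leak stack traces in production
    - [ ] FEATURE: export dashboard data as CSV

    # REFACTOR_SPEC.md (illustrative)
    ## Phase 1 — auth and input handling
    - Derive user_id from the session everywhere; remove it from request bodies
    ## Phase 2 — error handling
    - One central error handler; generic messages in production, details only in logs
    ## Phase 3 — features
    - CSV export endpoint that reuses the existing report query
    ```

    The phase split is what makes the "implement phase 1, test, then phase 2" loop possible without the AI blowing through its context.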

    Design concerns like single source of truth, DRY, separation of concerns, and YAGNI have come up. I have asked about API security best practices, and about test environment vs. production.
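    As one example of what "single source of truth" ended up meaning in practice: validation rules live in one shared module instead of being re-typed in every route. My actual stack isn't shown here, so this is just a made-up TypeScript sketch with hypothetical names:

    ```typescript
    // shared/validation.ts — hypothetical example of one place where request rules are defined

    export interface NewReportInput {
      title: string;
      ownerId: number;
    }

    // Single source of truth: the API layer and any background jobs both call this,
    // so the rules can't drift apart (DRY / separation of concerns).
    export function parseNewReport(body: unknown): NewReportInput {
      if (typeof body !== "object" || body === null) {
        throw new Error("body must be an object");
      }
      const record = body as Record<string, unknown>;
      if (typeof record.title !== "string" || record.title.trim().length === 0) {
        throw new Error("title is required");
      }
      if (typeof record.ownerId !== "number" || !Number.isInteger(record.ownerId)) {
        throw new Error("ownerId must be an integer");
      }
      return { title: record.title.trim(), ownerId: record.ownerId };
    }
    ```

    The route layer then just calls this and maps a thrown error to a 400, and nothing else in the codebase re-invents the same checks.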

    I developed without git, and the sheer amount of dumb duct-tape code produced by an AI with no short-term memory, then exposed by pentesting, was infuriating, but I ended up with a process that works for my level of understanding.

    AI Skills, rules, etc. are still not quite clear to me.