• 9point6@lemmy.world
    14 days ago

    “good morning, I’m about to destroy the backend” is exactly the energy I’d welcome from a colleague, frankly.

    I think the outage that would follow while we fumbled to replace it would probably cost less than a few more months of the ongoing maintenance.

  • squaresinger@lemmy.world
    13 days ago

    In my last job we called that “optimizing”, after a colleague (who usually only did frontend work) used the opportunity when everyone else was on vacation to implement a few show-stopping bugs in the backend and put “optimized backend code” in the commit message. He did the same thing a few months later during the next vacation period, which really solidified the joke.

    • bier@feddit.nl
      13 days ago

      Worked with a guy who would always say “I’m refactoring X”, and it would usually give us weird issues and bugs. So after a while the team started calling it refucktoring.

    • tooclose104@lemmy.ca
      14 days ago

      A typo in software development or other shell-based work can completely ass womp a system in ways that end up costing a company a lot of money.

      Oopsies on prod systems, even with an outage window, can really fuck shit up. Seemingly small mistakes can quickly snowball into system-wide outages.
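
      For a made-up example of how little it takes (the paths and variable names here are hypothetical), a cleanup step in a Python deploy script could go sideways like this:

          import os
          import shutil

          # Hypothetical cleanup step (don't actually run this): the env var name
          # is misspelled, so the lookup falls back to "", and
          # os.path.join("/data", "") evaluates to "/data/".
          build_cache = os.environ.get("BULD_CACHE_DIR", "")  # typo: should be BUILD_CACHE_DIR
          target = os.path.join("/data", build_cache)

          # Instead of clearing just the cache directory, this wipes all of /data.
          shutil.rmtree(target)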

      • jjjalljs@ttrpg.network
        14 days ago

        It’s wild to me how some places I’ve worked are completely locked down, with all the infrastructure in terraform or whatever and deployable immediately… and other places are like “ssh into prod with the credentials from confluence, edit the config in vim, and paste the new code into a new file”.

        • tooclose104@lemmy.ca
          13 days ago

          I’m at one of the latter, so I feel this in my bones. I’ve watched what should have been an innocent config change snowball into a pair of VM clusters shitting back and forth at each other for 2 hours. We implemented strict change control that day. Kind of a pain, but the team learned a lot!