• 9 Posts
  • 468 Comments
Joined 3 years ago
Cake day: July 7th, 2023

  • It seems your team is not ditching AI anytime soon, but you can still use it to tame technical debt. In fact, given the higher rate of code generation, I’d consider writing the best possible code a requirement when using AI.

    Look into “skills” (as in Anthropic’s standard) and how to use them in Cursor. Use custom prompts to your advantage: the fact that you’re still getting code full of comments as if it were a tutorial tells me this can be improved. Push for rules to be applied at the project level, so your colleagues’ agents follow them too.
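
    For the project-level rules, here’s a minimal sketch of what such a file could contain. The exact location and format depend on your Cursor version (newer versions read rule files from .cursor/rules/, while the older single .cursorrules file at the repository root works similarly), and the rules themselves are just examples to adapt:

    ```text
    # .cursorrules (or a rule file under .cursor/rules/)
    - Do not add tutorial-style comments; only comment non-obvious decisions.
    - Match the existing project structure, naming, and formatting conventions.
    - Prefer small, focused functions; avoid introducing new dependencies without asking.
    - When fixing a bug, add or update a regression test that covers it.
    ```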

    Make heavy use of AI to write regression tests that cover the application’s current behavior: they’ll serve as regression warnings for future changes, and they’re a great tool for overcoming the limits of the AI context window (e.g. most of the time your agent won’t know you fixed a bug last week, and the changes it’s suggesting now would break it again; the test will protect you there). Occasionally use AI to refactor a small function that’s somewhat related to your changes, if that improves the codebase.
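
    As a concrete illustration of that kind of test, here’s a sketch of a golden-file regression test in bash; the ./myapp command and the tests/ layout are hypothetical, and a proper test framework will do the same job:

    ```bash
    #!/usr/bin/env bash
    # Hypothetical golden-file regression test: lock in today's behavior so
    # future changes that alter it fail loudly instead of slipping through.
    set -euo pipefail

    expected="tests/golden/report.txt"   # output captured from the current version
    actual="$(mktemp)"

    # Run the application the same way users do (adjust the command to your app).
    ./myapp --report > "$actual"

    # Any behavioral drift shows up as a diff; set -e aborts the test right here.
    diff -u "$expected" "$actual"
    echo "regression test passed"
    ```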

    Stepping away from AI, try introducing pre-commit hooks for code quality checks. See if the tools of your choice support a “baseline”, so you don’t need to fix thousands of warnings the moment you introduce the hook.
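
    A bare-bones sketch of such a hook, written as a plain Git hook (saved as .git/hooks/pre-commit and made executable); your-linter is a placeholder for whatever tool you pick, and the dedicated pre-commit framework is a common alternative to hand-rolling this:

    ```bash
    #!/usr/bin/env bash
    # Hypothetical Git pre-commit hook: run a code quality check on staged files.
    set -euo pipefail

    # Collect staged files (added, copied, or modified), skipping deletions.
    staged=$(git diff --cached --name-only --diff-filter=ACM)
    if [ -z "$staged" ]; then
      exit 0
    fi

    # Fail the commit if the linter reports problems. A tool with "baseline"
    # support will only flag issues introduced after the baseline was generated.
    echo "$staged" | xargs your-linter
    ```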

    AI can write code that’s good enough, but it needs a little push to minimize tech debt and follow best practices, even when the rest of the codebase isn’t in ideal shape.


  • Often “silent” fails are a good thing

    Silent fails have caused me to waste many hours of my time trying to figure out what the fuck was happening with a simple script. I’ve been using -e on nearly all bash code I’ve written for years, with the exception of sourced scripts, and I wouldn’t go back.

    If an unhandled error occurs, I want my program to crash so I can evaluate whether I need to ignore it or actually handle it.
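
    A tiny example of the difference, using a hypothetical cleanup script (the paths are made up):

    ```bash
    #!/usr/bin/env bash
    # Without -e, a failed step is silently ignored and the script keeps going.
    # With -e (plus -u and -o pipefail), the first unhandled error stops it.
    set -euo pipefail

    cd /backups/daily      # if this directory is missing, stop right here...
    rm -rf ./*             # ...instead of wiping whatever directory we're actually in
    echo "old backups cleared"

    # Errors you expect can still be handled explicitly:
    cp /var/log/app.log . || echo "no app log today, continuing"
    ```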