• 50 Posts
  • 259 Comments
Joined 3 years ago
Cake day: June 11th, 2023





  • They’re bash/shell- and external-binary-dependent commands rather than pure Git commands. I use Nushell.
    Translated to Nushell commands:

    • The 20 most-changed files in the last year:
      git log --format=format: --name-only --since="1 year ago" | lines | str trim | where (is-not-empty) | uniq --count | sort-by count --reverse | take 20
    • Who Built This:
      git shortlog -sn --no-merges
      git shortlog -sn --no-merges --since="6 months ago"
    • Where Do Bugs Cluster:
      git log -i -E --grep="fix|bug|broken" --name-only --format='' | lines | str trim | where (is-not-empty) | uniq --count | sort-by count --reverse | take 20
    • Is This Project Accelerating or Dying:
      git log --format='%ad' --date=format:'%Y-%m' | lines | str trim | where (is-not-empty) | uniq --count
    • How Often Is the Team Firefighting:
      git log --oneline --since="1 year ago" | find --ignore-case --regex 'revert|hotfix|emergency|rollback'

    /edit: Looks like the lines contain whitespace or something. Replaced lines --skip-empty with lines | str trim | where (is-not-empty).

    command aliases
    def "gits most-changed-files" [] { git log --format=format: --name-only --since="1 year ago" | lines | str trim | where (is-not-empty) | uniq --count | sort-by count --reverse | take 20 }
    def "gits who" [] { git shortlog -sn --no-merges }
    def "gits who6m" [] { git shortlog -sn --no-merges --since="6 months ago" }
    def "gits fixes" [] { git log -i -E --grep="fix|bug|broken" --name-only --format='' | lines | str trim | where (is-not-empty) | uniq --count | sort-by count --reverse | take 20 }
    def "gits aliveness" [] { git log --format='%ad' --date=format:'%Y-%m' | lines | str trim | where (is-not-empty) | uniq --count }
    def "gits firefighting" [] { git log --oneline --since="1 year ago" | find --ignore-case --regex 'revert|hotfix|emergency|rollback' }
    


    • Huge growth in tooling and systems making use of “community” dependencies
    • Fewer safeguards, security guarantees, and security concerns on these platforms
    • Easy entry into these platforms and systems
    • Huge potential scale-effect through global software development tooling
    • Huge additional potential scale effect through developer and development systems: crossing into other such platforms via local credentials, immediate access to internal tooling, platforms, and systems, and the potential to attack further downstream systems and platforms
    • Public knowledge about the attack vectors, about successful attacks and their reporting, and about the continued opportunity, ongoing occurrences, and attackers’ personal successes, investment, and accumulated knowledge



  • This post argues something that would never have come to my mind. Of course software that annoys users has developers and a development process too. Of course its development also requires balancing user requests and convenience against business and technical capability (and priority). Of course you can’t directly infer [technical] engineering quality from the software’s perception, behavior, or the irritation it causes.

    What’s left after these nothing-burgers?

    Looking back, I’m glad that people have strongly disliked some of the software I’ve built[…]. If I’d happened to work on popular applications for my whole career, I’d probably believe that that was because of my sheer talent.

    Wtf? They think they wouldn’t be able to recognize that their software’s popularity isn’t solely down to them or their superiority?

    I… don’t get it. Maybe I just don’t get what this is supposed to be about.



  • 5 with reasonable acceptance and use, even advocacy, for up to 1. I don’t see a difference between 4 and 5, though.

    Reviews should be the norm. A simple code change should be simple to review and approve, too. At the same time, some formatting changes, or small, minimal changes made with high confidence, can be pushed to main without review - requiring one there would just waste the reviewer’s time and effort. High urgency can also warrant an immediate push to main, or live hotfixing on prod if possible, with a corresponding PR still opened.





  • Think about whether TODOs will be revisited, and how you can guarantee that. Ask what you gain and what you lose by replacing warnings with TODOs.

    In my projects and work projects, I advocate for:

    • Warnings and TODOs are fine only in initial development before release/stability and in feature branches during development
    • TODOs are almost never revisited, so document state and information instead of hypotheticals; document opportunities over TODOs, document known shortcomings and risks, etc
    • If there is good reason to keep and ignore warnings, document the reasoning, and we can update our CI/Jenkins quality gate to a new baseline of accepted warnings instead of suppressing them (this pretty much never happens)
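    The baseline idea above can be sketched as a small CI gate script. This is a hypothetical illustration, assuming a plain-text build log and a baseline file checked into the repo; the file names and the log format are my assumptions, not from the comment.

```shell
#!/usr/bin/env bash
# Hypothetical quality-gate sketch: fail the build when the warning count
# rises above a committed baseline, instead of suppressing warnings.
# build.log and warnings.baseline are illustrative names.
set -euo pipefail

# Simulated build log for demonstration purposes.
cat > build.log <<'EOF'
src/a.cs(10,5): warning CS0168: variable declared but never used
src/b.cs(22,9): warning CS8602: possible null reference
EOF

# Accepted baseline, checked into the repository and lowered over time.
echo 2 > warnings.baseline

current=$(grep -c ': warning ' build.log || true)
baseline=$(cat warnings.baseline)

if [ "$current" -gt "$baseline" ]; then
    echo "FAIL: $current warnings exceed the accepted baseline of $baseline"
    exit 1
fi
echo "OK: $current warnings within the accepted baseline of $baseline"
```

    New warnings break the build, while the documented, accepted ones stay visible rather than silenced.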

    .NET warning-suppression attributes have a Justification property. An .editorconfig severity change, disable, or suppression can carry a comment.
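    A minimal sketch of the .editorconfig variant; the analyzer ID CA1822 and the reasoning text are illustrative examples, not from the comment:

```ini
# Justification: member is an extension point resolved via reflection,
# so it cannot be made static; accepted deliberately.
dotnet_diagnostic.CA1822.severity = none
```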

    If it’s your own project and you know when and how you will revisit it, what do you gain by dropping the warning? A warning-free build, but then you have TODOs carrying the same uncertainties?


  • We onboarded our team with VS integrated Copilot.

    I regularly use inline suggestions. I sometimes use the suggestions that go beyond what VS offered before the Copilot license… I am regularly annoyed by the suggestions displacing the code, by greyed-out text sometimes being ambiguous when small grey characters like commas and semicolons blend in, and by the controls conflicting with basic cursor navigation (CTRL+Right arrow).

    I am very selective about where I use Copilot. Even for simple systematic changes, I often prefer my own editing, quick actions, or multi-cursor, because they are deterministic and don’t require a focused review that takes the same amount of time but is mentally more taxing.

    Probably even more than my IDE “AI”, I use AI search to get information. I have the knowledge to assess the results, and I know when to check sources anyway, in addition, or instead.

    My biggest AI issue is the code some of my colleagues produce and hand me for review: I don’t/can’t know how much they themselves thought about the issue and solution at hand. A missing description, or worse, an AI-generated summary, compounds that.

    /edit: Here is my comment on the post four months ago.




  • When I was researching keyboards recently, I stumbled upon a (I believe) pro-gamer YouTuber who was quite vocal that pretty much all gear marketed as “gaming gear” is overpriced marketing bullshit. Apparently, they had tested dozens of keyboards, mice, and headsets over the years. It certainly matched the impression I had from reading product reviews before.

    “Gamer” chairs are racecar seats meant to keep you from sliding sideways, not chairs fit for long sitting sessions at a PC. Prefer a good or decent office chair. “Gamer” headsets are worse and more expensive than other headsets. Keyboards and mice are mostly marketing. Etc.

    Regarding input, they made the point that physical human limitations, and state like sleep and caffeine intake, have much more of an effect than the hardware you use.

    2022 update

    So this article is quite old. There are keyboard switches now that actuate as soon as you begin pressing the key, and that can register release and re-press without passing a fixed trigger point. If you want that kind of edge, those are the top performers right now. I’d be more interested in the technology, and maybe its playful capabilities, than in the performance edge they add.

    I’m always way too thorough when researching products before buying…