



  • While I agree with the later (or middle?) points, if perhaps for different reasons or with different reasoning, I mostly disagree with the earlier points.

    Any really important comments get lost in the noise

    What kind of comments are they using?

    When I leave comments on GitLab, they’re threads that get resolved explicitly. GitHub also uses resolvable threads. The assignee/creator goes through them one by one and marks them as resolved when they feel they’re done with them. Nothing gets lost that way.

    I also use ‘⚠’ to mark significant/blocking comments and bullet points. Other labels, like or similar to conventional comment prefixes such as “thought:” or “note:”, can indicate the priority and significance of a comment.
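
    For illustration, a set of labeled review comments might look like this (the labels follow the Conventional Comments style; the specific wording is made up):

    ```
    ⚠ issue (blocking): this can panic when the list is empty
    suggestion (non-blocking): this could be extracted into a helper
    thought: a cache might help here, but not in this MR
    note: this mirrors the handling in the import path
    ```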

    Instead of leaving twenty comments, I’d suggest leaving a single comment explaining the stylistic change you’d like to make, and asking the engineer you’re reviewing to make the correct line-level changes themselves.

    I kinda agree, but I often leave the comment on the code in question and add a code change suggestion to show what I mean. One such comment can stand in for and refer to all other occurrences of the same kind of thing; it doesn’t have to apply exclusively to those lines.
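
    For example, both GitHub and GitLab support suggestion blocks inside a review comment, which the author can apply with one click. A sketch (the comment text and the Rust line are made up):

    ````
    This applies to the other occurrences of this pattern as well; feel free to reject.

    ```suggestion
    let name = user_input.trim().to_lowercase();
    ```
    ````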

    Otherwise you’re putting your colleagues in an awkward position. They can either accept all your comments to avoid conflict, adding needless time and setting you up as the de facto gatekeeper for all changes to the codebase, or they can push back and argue on each trivial point, which will take even more time. Code review is not the time for you to impose your personal taste on a colleague.

    I make sure my team has a common understanding, and that comments carry enough context/pretext to make it clear, that code change suggestions and “I would have [because]” remarks can in general be freely rejected, unless specified otherwise. Comments often state how important a change is to me, either in the comment itself or in a comment summarizing a review iteration (with its set of comments). Comments can also spark discussion about solutions and approaches, or about common or eventual goals of the changed code that may be targeted after the changes currently under review.

    Review with a “will this work” filter, not with a “is this exactly how I would have done it” filter

    I wouldn’t want to do it quite like that. It’s a question of weighing risks and medium- and long-term maintainability against delivery, effort, changeset size, and review complexity and delay. Rather than “will this work”, I ask myself, “is this good enough [within context]”.

    Leave a small number of well-thought-out comments, instead of dashing off line comments as you go and ending up with a hundred of them

    Maybe I’ve had too many juniors to get into this mindset. But there have definitely been numerous times where I left many comments on a review, and again on successive iterations. Besides reviewing the code technically, the review can also serve as a form of communication, assimilation, and teaching (the project and codebase at hand, work style, and other things).

    It’s good to talk about concerns, issues, and frustrations, as well as the upsides of doing so and of working like that. Retrospectives and personal talks or discussions can help with that. Apart from other discussion, planning, and support meetings, the review is the interface between people and a great way to communicate.



  • Visual Studio provides some kind of AI even without Copilot.

    Inline (single-line) completions - I regularly, though not always, find these quite useful.

    Repeated-edit continuation - I haven’t seen it in a while, but have used it on maybe two or three occasions. I am very selective about these because they’re not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, “invert if” changes the indentation of many lines; if an LLM does that change, you can’t be sure it didn’t also alter any of those lines (see the sketch below).
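
    To make the indentation point concrete, here is roughly what “invert if” does, sketched in Rust (a hypothetical snippet; the refactoring exists in many editors and languages):

    ```rust
    // Before: the body is nested under the condition.
    fn process(items: &[i32]) {
        if !items.is_empty() {
            for item in items {
                println!("{item}");
            }
        }
    }

    // After inverting the if into an early return: every body line is re-indented.
    // A deterministic refactoring guarantees that only structure and indentation
    // changed; an LLM producing the same-looking diff gives no such guarantee.
    fn process_inverted(items: &[i32]) {
        if items.is_empty() {
            return;
        }
        for item in items {
            println!("{item}");
        }
    }
    ```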

    Multi-line completions/suggestions - I disabled those because they push away the code and context I want to see around the cursor and cause noisy movement, for what is, in my limited experience, marginal usefulness if any.

    In my company, we’re still in a selective testing phase regarding customer agreements and source code integration with AI providers. My team is not part of that yet, so I have no practical experience with analysis, generation, or chat functionality that has project context. I’m skeptical but somewhat interested.

    I did try it on private projects, well, one: a Nushell plugin in Rust, a language largely unfamiliar to me, where I tried to have Copilot generate methods for me and the like. It felt very messy and confusing. The generated code was often not correct or sound.

    I use Phind and, more recently, ChatGPT for research/search queries. I’m mindful of the type of queries I make and which provider or service I use. In general, I’m a friend of reference docs, which are the only definitive source after all. I’m also aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can answer my questions quicker than searching via a search engine and in upstream docs, especially when I’m familiar with the tech and can quickly be reminded, can guide the AI when it responds with bullshit or suboptimal or questionable stuff, or can quickly disregard it entirely when it doesn’t seem capable of answering what I’m looking for.






  • I strongly disagree.

    Coloring is categorization of code. Much like indentation, spacing, line breaking, and alignment, it aids readability.

    None of the examples they provided looked better, more appropriate, or more useful. None of the “tests” led me to question my syntax highlighting. Quite the contrary.

    By reducing the highlighting to what they deem important, they lose the highlighting for other cases. The examples that highlight only one or two things make this obvious: when you highlight only method heads, you gain clarity when reading on that level, across methods, but lose everything when reading a body.

    I didn’t particularly like their dark theme choice. Their initial example is certainly noisy, but you can have better themes and defaults with subtler, more evenly weighted colors. The language or framework syntax and spacing can also influence it.

    Bolding is very useful on top of color categorization, adding structural discoverability, just like spacing does.


  • I failed the question about remembering what colour my class definitions were, but you know what? I don’t care. All I want is for it to be visually distinct when I’m trying to parse a block of code

    Between multiple IDEs, text editors, diff viewers and editors, and hosted tools like MR/review diffs, the colors aren’t even consistently one thing. For me, very practically and factually, colors differ.

    As you point out, they’re entirely missing the point: what the colors are for and how they’re being used.


  • That’s a very one-dimensional view of technical debt.

    I was about to write more, but if I don’t know what they refer to when they say “knowledge”, then it’s too wishy-washy, and I may be talking about something different from what they intended.


    Contrasting “resolving technical debt” with “investing [improvement] knowledge” shifts the reference viewpoint.

    I document state and issues as technical debt, and opportunities for change as opportunities. The two overlap, but they are distinct concepts and do not always overlap: some technical debt may be documented without a documented opportunity, and some opportunities are open improvements that do not tackle technical debt.

    In my eyes, technical debt is about burdens that reduce maintainability where better alternatives likely exist.

    “Investing knowledge” is something different: not necessarily about known burdens, but possibly about improvements unrelated to them.