Did trust signals change? Part of my reviews has always been checking assumptions and broader (project) context. I don’t think polish implied understanding.
they asked me if I could develop some useful metrics for technical debt which could be surveyed relatively easily, ideally automatically
This is where I would have said “no, that’s not possible”, or had a discussion about the risks: the things you simply can’t cover with automated metrics would lead to misdirection, and possibly to negative instead of positive consequences.
They then explore what technical debt is and notice that many things outside of technical debt also have significant impact you can’t ignore. I’m quite disappointed they never come back to their metrics task. How did they finish it? Did they communicate and discuss all these broader concepts instead of implementing metrics?
There are some metrics you can implement on code: test coverage, complexity by various measures, function body length, etc. But they only ever cover small aspects of technical debt. Consequently, they can’t be a foundation for (continuously) steering debt-payment efforts toward the most positive effects.
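For illustration, a minimal sketch of one such metric, function body length, computed with Python’s ast module (the function name and sample input are mine, not from any real tooling):

```python
import ast

def function_lengths(source: str) -> dict:
    """Map each function name in `source` to its line count."""
    lengths = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno requires Python 3.8+
            lengths[node.name] = node.end_lineno - node.lineno + 1
    return lengths

print(function_lengths("def f(x):\n    y = x + 1\n    return y\n"))  # {'f': 3}
```

Easy to compute, but note how little it tells you: a long function isn’t necessarily debt, and plenty of debt lives in short functions.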
I know my projects and can make a list of things and efforts and impacts and we can prioritize those. But I find the idea of (automated) metrics entirely inappropriate for observing or steering technical debt.
As a lead dev I have plenty of cases where I weigh effort against impact and risk and conclude “this is good enough for now”. Such cases are not poor management - by which I assume you mean something like “we have to ship more, faster, so take the shortcut”. Sometimes cutting corners is the correct and good decision, sometimes the only feasible one, as long as you’re aware of the risks and consequences and weigh them.
We, and specifically I, make plenty of improvements where possible and reasonable - in whatever code I touch, depending on how much effort it takes. But sometimes the effort is too high to be justified.
For context, I’m working on a project that has been running for 20 years.


Next post: “Why I am moving away from Medium” (hopefully)


Yes, that’s what it means.
And apparently, it happened selectively, not generally, but for specific people/request sources.
It would only affect you if you used Notepad++’s own update mechanism. If you used another package manager, or downloaded the installer yourself to update, you’d be fine.
I would say doneness is about completeness within context, not immutability.
The environment may change, but within context, it can still be considered done.
It’s fine to say and consider software never done, because there are known and unknown unknowns and extrapolations and expectations. But I think calling something done has value too.
It is a label of intention, of consideration, within the current context. If the environment changes and you want or need to use it, by all means update it. That doesn’t mean the done label assigned previously was wrong [in its context].
We also say “I’m done” to mean we’re leaving, even when there is no completeness in the product, only in our own tolerance.
In the same way, if you shift focus, done may very well be done and not done at the same time. Done for someone in one environment, and not done for someone in another.
More often than ‘done’ I see ‘feature complete’ or ‘in maintenance mode’ in project READMEs, which I think are better labels.


and figure out whether the new framework with a weird name actually addresses
Couldn’t name what this is about in the title, nor in the teaser, I guess?
“Latest hotness” and “the new framework with a weird name” aren’t very discerning.


From the paper abstract:
[…] Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI.
We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library.
We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation – particularly in safety-critical domains.


And there are cookies 🍪
Do they share my cookies with third parties? /s


Making XML schemas work was often a hassle. You have a schema ID, and sometimes you can open or load the schema through that URL. Other times it serves only as an identifier, and your tooling/IDE must support configurable mappings from schema IDs to local .xsd files.
Every time it didn’t immediately work, you’d think: man, why don’t they just publish the schema at that public URL?
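For example, with Python’s lxml you can point validation at a local copy of the schema instead of hoping the ID URL resolves (the file paths here are hypothetical):

```python
from lxml import etree

# Validate against a locally stored copy of the schema
# instead of fetching it from the schema ID URL.
schema = etree.XMLSchema(etree.parse("schemas/example-v1.xsd"))
doc = etree.parse("document.xml")
print(schema.validate(doc))  # True if the document conforms
```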


You can just take the L and say you didn’t see that the function definition that was “added” was just “removed” at the top.
That’s not what happened though.
Changing the indent of the def changes the definition. That’s my whole argument.
I don’t get why you say “of course”, agreeing with my point, but then “it was only the indentation that was changed”.


What I wrote: I wouldn’t want to do AI Thursday, and that kind of malicious compliance, for a prolonged time.


I see, thank you for the clarification. I was quite confused because it seemed to be missing, and this one didn’t quite seem right. If they never even pushed it as an MR, that makes more sense. Then the whole “hasn’t been merged yet” misses that it hasn’t even been created.


An indentation change is a change to the definition code. And as I pointed out, it’s a .py file, and Python is an indentation-significant language.
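A hypothetical illustration of why:

```python
def outer():
    def helper():  # helper is defined inside outer
        return 1
    return helper()

# Dedent `def helper()` (and its body) by one level and it becomes a
# module-level function: outer no longer contains it, so both
# definitions have changed, not just the formatting.
```

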
So you’re using [] as an alternative function call syntax to (), usable with nullable parameters?
What’s the alternative? let x = n is null ? null : math.sqrt(n);?
In principle, I like the idea. I wonder whether something with a question mark would make more sense, because I’m used to null handling via question marks (C#: ??, ?.ToString(), etc.). And I would want to see it in practice before deciding whether or not to establish it as a project convention.
math.sqrt?() may imply the function itself may be null. (? ) for math.sqrt(?n)? 🤔
I find [] problematic because it’s an index accessor, so it may be ambiguous between indexed property/field access and method calls with optional parameters. Dunno how that is in Dart specifically.
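To make the semantics under discussion concrete, a minimal Python sketch (the helper name maybe_call is mine, not Dart’s):

```python
import math
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def maybe_call(f: Callable[[T], R], x: Optional[T]) -> Optional[R]:
    """Call f(x), unless x is None; then propagate None instead."""
    return None if x is None else f(x)

print(maybe_call(math.sqrt, 9))     # 3.0
print(maybe_call(math.sqrt, None))  # None
```

Whatever the syntax ends up being - [n], ?(), or (?n) - that’s the behavior it would sugar over.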


The issue, and presumably the PR (linked at the top of the issue through the reference).
Look at the code change. It gets inputs, loops over them, and seems to do an in-place fixup. But the code indentation is wrong, and it even changed the definition of the unrelated next function. In Python, the language where indentation is logic-significant.
I assume they briefly showed the code on stage. Even then it should have been obvious to any developer: a .py file, messy indentation, a change to an unrelated function.
Please correct me if this is the wrong PR.


I would make Thursday AI day and do everything with AI. And Friday is recovery day, where I discard everything that didn’t work, and do what I want, to recover motivation for long-term sustainability.
I wonder if and when they would notice a productivity difference. I certainly couldn’t and wouldn’t be able to do that indefinitely.


Makes me think of used tokens as the metric, which is very easy to fake.
If I were in a malicious environment, I’d be interested in gaming the system, excessively producing AI code even if I never use it.
The only way out of this is regulation, which requires political activism.
The EU made some good progress on that through the GDPR and the newer digital laws regarding safety, disclosure, maintenance, and due-diligence requirements. Prosecution with fines is there, but slow, and arguably too sporadic.
Political activism in this direction is thankless work and a lot of effort. I am reminded of someone who pushed for public institutions to move away from US big tech for many years. Now Trump is the reason for change, and their effort can surely feel pointless.
I do occasionally report GDPR violations, etc. That can feel pointless as well. But it’s necessary, and the only way to support and influence agencies to take action.