

“Yeah, I can do that task. I’m very experienced in struggling to implement stuff like that.”


Thanks for the suggestion. As a first step, I set it up in Nushell with a ctrl+t shortcut:
$env.config.keybindings = (
    $env.config.keybindings | append {
        name: fzf_file_picker
        modifier: control
        keycode: char_t
        mode: [emacs, vi_insert, vi_normal]
        event: {
            send: ExecuteHostCommand
            cmd: "commandline edit --insert (fzf | str trim)"
        }
    }
)
Maybe I will look into it more. :) I’ve known about fzf but I guess I never got around to fully evaluating and integrating it.
Nushell supports fuzzy completions, globbing, and “menus” (TUI) natively. Still, the TUI aspect and possibly other forms of integration seem like they could be worthwhile or useful as extensions.


For software to be perfect, meaning it cannot be improved no matter what, you’d have to define a very specific and narrow scope and evaluate against that.
Environments change, text and data encoding and content change, the forms and protocols of input and output change, and opportunities and wishes to integrate or extend change.
pwd seems simple enough. For cd I would already say no, given opportunities to remember folders and to support globbing, fuzzy matching, history, and virtual filesystems. Many of those depend on the environment you’re in. Typically, shells handle globbing, there are alternative cd tools that do fuzzy matching and history, and virtual filesystems are usually abstracted away. But things change. And I would certainly like an interactive and fuzzy cd.
Now, if you define its scope, you can say: “All that other stuff is out of scope. It’s perfect within its defined target scope.” But I don’t know if that’s what you’re looking for? It certainly doesn’t mean it can’t be improved no matter what.
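As an illustration of the kind of interactive, fuzzy cd mentioned above, a minimal sketch in plain shell, assuming fzf and find are installed (the function name fcd and the depth limit of 3 are my own arbitrary choices):

```shell
# List candidate directories below a starting point (depth limit is arbitrary).
list_dirs() {
  find "${1:-.}" -maxdepth 3 -type d 2>/dev/null
}

# Hypothetical helper: pick a directory interactively with fzf, then cd into it.
fcd() {
  local dir
  dir="$(list_dirs . | fzf)" || return  # fzf exits non-zero when cancelled
  cd "$dir" || return
}
```

Real tools like zoxide add frecency-ranked history on top of this idea, which is exactly the sort of improvement a “perfect” cd supposedly wouldn’t need.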


The original one? Because there’s numerous extensions to it. I wouldn’t be confident it won’t evolve further.


Do you exclude inventory management from that “will never change”, so that it’s only about software?
I imagine there will be new products to be listed.


…that supports Unicode? Which encodings? Or only ASCII? Unicode continues to change.
I wouldn’t be very confident that it won’t change or offer reasonable opportunities for improvement.


Your sentence abruptly ends in a backtick - did you mean to include something more? Maybe “wc”?
I’m surprised it wasn’t reallyblue


No, it’s not on the user’s end. It’s because you didn’t use correct Markdown syntax for your link. I verified this in your post source before commenting.
You used: [https://joinhideout.vercel.app/]() - which is a link without a target, so it defaults to this page we’re on.
You should have used one of:
<https://joinhideout.vercel.app/>
[https://joinhideout.vercel.app/](https://joinhideout.vercel.app/)
[joinhideout.vercel.app](https://joinhideout.vercel.app/)

Great analysis / report. At times a bit repetitive, but that could be useful for people skimming, jumping around, or quoting as well.
“Despite 91% of CTOs citing technical debt as their biggest challenge, it doesn’t make the top five priorities in any major CIO survey from 2022–2024.”
Sad. Tragic.
I’m lucky to be in a good, small company with a good, reasonable customer, where I naturally had and grew into having the freedom and autonomy to decide on things. The customer sets priorities, but I set mine as well, and tackle what’s appropriate or reasonable/acceptable. Both the customer and I have the same goals after all, and we both know it and collaborate.
Of course, that doesn’t help me as a user when I use other software.
Reading it made me think of the recent EU digital regulations, which require due diligence, security practices, and transparency. It’s certainly a necessary and good step toward breaking the endless race away from quality and diligence, and away from transparency.


“You can save 20% time by using Robo for automation!” Click. Can’t even automate what I do.


That’s wonderful to read, that it caught and motivated you.
I suspect these systemic issues are much worse in bigger organizations. Smaller ones can be victims too, can try to pump out features, or not care about quality either, but in smaller teams and flatter hierarchies you have much more impact. I suspect the chances of finding a good environment are higher in smaller companies. It worked for me, at least. Maybe I was just super lucky.


A library with no code, no support, no implementation, and no guarantees; no bug is “fixable” without unknown side effects, and no fix is deterministic even for your own target language, …
A spec may be language-agnostic, but the language model depends on the implementations it was trained on. So do you end up with standard library implementations being duplicated, just possibly outdated, with open bugs, holes, gaps, and old constructs? Will the quality and coverage of the spec implementation vary a lot depending on your target language? If there isn’t enough conforming training data, might it not even follow the spec correctly? And what happens when you change the spec for one niche language?
If it’s a spec or an LLM template, then that’s what it is. Don’t call it a library. And in the project README, don’t wait until the last third to actually say what it is or does.


Your link is broken.


… which arguably makes them not “normal people” (referring to the earlier comment).
Surely, most people use different, more integrated tooling.


The only way out of this is regulation, which requires political activism.
The EU made some good progress on that through the GDPR and the newer digital laws regarding safety, disclosure, maintenance, and due diligence requirements. Prosecution with fines is there, but slow, and arguably too sporadic.
Political activism in this direction is thankless work and a lot of effort. I am reminded of someone who has pushed for public institutions to move away from US big tech for many years. Now Trump is the reason for change, and their effort can surely feel pointless.
I do occasionally report GDPR violations, etc. That can feel pointless as well. But it’s necessary, and the only way to (support/influence) agencies to take action.


Did trust signals change? Part of my reviews has always been checking assumptions and broader (project) context. I don’t think polish implied understanding.


“they asked me if I could develop some useful metrics for technical debt which could be surveyed relatively easily, ideally automatically”
This is where I would have said “no, that’s not possible”, or had a discussion about the risks: the things you simply can’t cover with automated metrics would lead to misdirection, and possibly to negative instead of positive consequences.
They then explore what technical debt is and notice that even many things outside of technical debt have significant impact you can’t ignore. I’m quite disappointed they don’t come back to their metrics task at all. How did they finish their task? Did they communicate and discuss all these broader concepts instead of implementing metrics?
There’s some metrics you can implement on code. Test coverage, complexity by various metrics, function body length, etc. But they only ever cover small aspects of technical debt. Consequently, they can’t be a foundation for (continuously) steering debt payment efforts for most positive effects.
I know my projects and can make a list of things and efforts and impacts and we can prioritize those. But I find the idea of (automated) metrics entirely inappropriate for observing or steering technical debt.
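To make the “small aspects” point concrete, here is a sketch of one such narrow code metric, function body length for shell scripts (the function name measure_function_lengths is my own; this is illustrative only and is exactly the kind of number that cannot stand in for technical debt as a whole):

```shell
# Report each top-level shell function's body length in lines.
# Captures a single, narrow aspect of code health and nothing more.
measure_function_lengths() {
  awk '
    /^[A-Za-z_][A-Za-z0-9_]*\(\) *\{/ { name = $1; len = 0; inside = 1; next }
    inside && /^\}/                   { print name, len; inside = 0; next }
    inside                            { len++ }
  ' "$1"
}
```

Such a script is trivially automatable, which is precisely why it tempts people: it measures what is easy, not what matters.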


I thought I remembered a standardized metadata file format you can place on your website, but I can’t find it.
GitHub defines FUNDING.yml.
Brave web browser attempted something like that with Brave Rewards, but through ads, and basically collected for themselves until the websites actually signed up for Brave Rewards.
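For reference, GitHub’s funding metadata goes in a FUNDING.yml file in the repository’s .github directory; a minimal sketch, where the usernames and URL are placeholders:

```yaml
# .github/FUNDING.yml -- placeholder names, not real accounts
github: [octocat]
patreon: example-user
custom: ["https://example.com/donate"]
```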
I remember Flattr.