One candidate we placed in the past told us they wanted $90k. We advised them not to say that number, because it’d get them filtered out. They ended up getting hired for close to $200k.
Crazy
an Android Linux translation layer called Android Translation Layer (we never said developers were good at naming)
wth is that jab?
I like descriptive names on products.
Should they have called it koalupetta?
This talks about one issue. You seem to be confident that this one case is representative of the whole FOSS space? I am not.
Can you elaborate on how it would be much easier in closed-source software? Because as far as I can see, it’s different. In most cases you need an actual person instead of an online persona, you have to pass interviews and contracting, and then you’re still “the new guy” or a junior in the company or project. Being closed off from public eyes doesn’t mean anyone can do anything without any eyes on them.
At the end, they point to their Bugzilla issue tracker.
I’ve always found Bugzilla incredibly inaccessible. It’s so overloaded, so complicated, so noisy with unrelated and irrelevant things. It always baffled me how projects use it and keep using it, and especially projects like Thunderbird and Mozilla, for such a long time.
I regularly use bug trackers to report, comment, or work on issues. When I see Bugzilla, in most cases I give up and leave right away.
Consequently, I find it ironic that they point to Bugzilla at the end.
That being said, I think this video is a good intro to accessibility, common issues, and study findings.
How do you guys view Bugzilla as an issue tracker, bug tracker, and work task tracker?
One file, almost 7k lines of code.
https://github.com/microsoft/BASIC-M6502
This assembly language source code represents one of the most historically significant pieces of software from the early personal computer era. It is the complete source code for Microsoft BASIC Version 1.1 for the 6502 microprocessor, originally developed and copyrighted by Microsoft in 1976-1978.
Are you trying to say that D is much better than Nim, or that you moved from D to Nim?
Feature richness as a user, documentation as a developer.
“For weeks I typed random letters into the command line, and when I entered `ls /usr/bin` and `man`, finally something happened!”
This is the next level of learning. Not only do you read how it is, you have to deduce, assess, and explore. Writing your own documentation is the best way to learn, after all.
I’ve not used it much; I think I only had to use it in two instances, due to customers. From what I remember, the structure and navigation were not hierarchical, making navigation very inefficient and irritating.
I’m used to GitLab (and Phabricator in the past, and outside of work GitHub), and much prefer their repo, project, group representation and review UI/UX/workflow.
Codeberg only hosts open source.
I don’t think there’s a need to switch away.
Many people on Lemmy think otherwise, and have thought so for a long time.
Nothing changed yet due to product integration into Corp.
There’s a threshold where good integration does not trump a shit product. Bitbucket sucks. I’m glad we’re not using it, even though we’re still stuck with shit Jira and Confluence.
I’ve found in-line completions/suggestions useful at times, but multi-line completions were always irritating, to the point that I disabled them completely. Much more often I want to read the surrounding and following code and not have it pushed out of view; the completions were rarely useful to me.
Of course, that may be largely the project and use case. (And quite limited experience with it.)
I’ve been using phind as a technical-focused AI search engine, which is a great addition to my toolset.
I’m mindful of using it vs searching [ref docs etc], not only in the kind of search and answer I’m looking for but also energy consumption impact, but it’s definitely very useful. I’m a senior dev though, and know what to expect and I am able to assess plausibility, and phind provides sources I can inspect too.
As for code assistance, I find it plausible that it can be useful, even if from my personal experience I’m skeptical.
I watched a Microsoft talk from two devs, which was technically sound and plausible in that it was not just marketing; they talked about their experience, including the limits of AI, and where and to what degree they had to deal with hallucinations and cleanup. They talked about where they see usefulness in AI. They were both senior, able to assess plausibility and make corrections where necessary. From what I remember, they used it to bounce ideas back and forth, to do an implementation draft they then go over and complete, etc.
Microsoft can afford the investment of AI setup, sharing code with the model, setting up AI instructions/meta-descriptions, etc.
My personal experience was using Copilot for Rust code, for Nushell plugins. I’m not very familiar with Rust, and it was very confusing, with a lot of hallucinations.
The PR descriptions CodeRabbit generated were verbose and not useful for the smaller PRs I made. That was a while ago, though.
At work we have a voluntary work group exploring AI. The whole generate your whole app kind of thing seems plausible for UI prototypes. But nothing more. And for that it’s probably expensive.
I’m not sure how much the whole thing does or can do for efficiency. Seems situational - in terms of environment, setup, capabilities, and kind of work and approach.
What do you mean in particular?
The only thing that comes to mind for me is that “restore after commit” is a different chunk-add workflow than `git add --patch` - but I don’t think it’s worse.
TortoiseGit.
Through settings, I move the Show Log to the top context menu level, and it’s my entry point to every Git operation.
I see a history tree and immediately understand commit and branch relationships and states. I can commit, show changes, diff, rebase (interactively or not), push, fetch, switch, create branches and tags, squash and split commits, commit chunk-wise through “restore after commit”, … And everything from a repo overview.
/edit: To add: other clients I tried never reached what I want from a UI/GUI, never reached TortoiseGit. Including IDE integrations, where I’m already in the IDE; I still prefer the separate, better TortoiseGit.
GitButler is interesting for its different approach, but when I tried it out, the Git auth didn’t remember my key password. (Since trying out jj, I found out it may have been due to a disabled OpenSSH service.)
Enable squash commits. Each PR should be squashed to a single commit. This makes the master branch linear and simple. This ensures each individual commit on master has been reviewed and is in a working state.
In non-minimal changesets, I would miss information/documentation about individual logical changes that make up the changeset. Commit separation that is useful for review will also be useful for history.
I prefer a deliberate, rebase- and rewrite-heavy workflow with a semi-linear history. The linear history remains readable, while allowing sum-of-parts changesets/merges.
It’s an investment, but I think it guides into good structuring and thoughts, and whenever you look at history, you have more than a squashed potential mess.
Squash-on-merge is simpler to implement and justify, of course. Certainly much better than “never rebase, never rewrite, always merge”, which I am baffled some teams have no problem doing. The history tree quickly becomes unreadable.
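A semi-linear, rebase-heavy workflow like the one described above can be sketched with plain git commands. This is a self-contained toy demo in a throwaway repo; the branch and file names are made up for illustration:

```shell
# Toy demo: rebase the feature branch onto main, then merge with --no-ff
# so the changeset stays grouped under one merge commit.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial commit"

git switch -q -c feature-x
echo one > part1.txt && git add part1.txt && git commit -qm "feature part 1"
echo two > part2.txt && git add part2.txt && git commit -qm "feature part 2"

git switch -q main
echo fix > hotfix.txt && git add hotfix.txt && git commit -qm "unrelated work on main"

git switch -q feature-x
git rebase -q main                       # replay the branch onto current main
git switch -q main
git merge -q --no-ff feature-x -m "Merge feature-x"

git log --oneline --graph                # linear first-parent history, grouped changeset
```

The `--no-ff` merge keeps the first-parent history linear and readable, while the individual, review-sized commits of the changeset remain visible beneath the merge.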
While exploring solutions, I use `f` or `ff` to mean “follow-up/to-squash” and `a` to mean logically separate. Sometimes other (additional) short abbreviations to know where to move, squash, and edit the changes to.
Other than maybe initial development until the first stable/usable version, these never persist, though. And even then, only if it’s not a collaborative project. If it is shared or collaborative, “Iterate on x” is preferable as a non-descriptive title.
I guess my commit descriptions get better with project lifetime, not worse.
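The `f`/`ff` prefix convention above is the commenter’s own; git ships a built-in variant of the same idea via `git commit --fixup` plus an autosquash rebase. A minimal sketch in a throwaway repo (names are made up for the demo):

```shell
# Mark a later commit as a follow-up to an earlier one, then let an
# autosquash rebase fold it into its target automatically.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > base.txt && git add base.txt && git commit -qm "initial commit"
echo v1 > feature.txt && git add feature.txt && git commit -qm "Add feature"

# A later fix that logically belongs to "Add feature":
echo v2 > feature.txt && git add feature.txt
git commit -q --fixup ":/Add feature"    # records subject "fixup! Add feature"

# Fold the fixup into its target non-interactively
# (GIT_SEQUENCE_EDITOR=true accepts the generated todo list as-is):
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root

git log --oneline                        # "Add feature" now contains the v2 change
```

Unlike ad-hoc `f`/`ff` subjects, the `fixup!` prefix is understood by git itself, so `--autosquash` reorders and squashes the commits without manual todo-list editing.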
In the bottom notes, they link to their Quantifying the cost of RTO, which is a worthwhile read too, with visualized numbers.