Optimizing CPU usage by 0.02% is something only the truly deranged do
#gentoo
Not necessarily. It depends on what you’re optimizing, the impact of the optimizations, the code complexity tradeoffs, and what your goal is.
Optimizing many tiny pieces of a compiler by 0.02% each? It adds up.
Optimizing a function called inside an O(n²) algorithm by 0.02%? That's a lot more beneficial than optimizing a function that's called only once (see the sketch at the end of this comment).
Optimizing some high-level function by dropping into hand-written assembly? No. Just no.
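As a sketch of that second point (the inversion-counting example and all names here are made up for illustration, not taken from any real codebase): a small comparison function sitting inside an O(n²) loop runs roughly n²/2 times, so any per-call saving is multiplied by the call count, while a function called once at startup gains essentially nothing.

    #include <stdio.h>
    #include <stddef.h>

    /* The hot function: called once per pair of elements, so any
       per-call saving is multiplied by roughly n*n/2. */
    static int cmp(long a, long b) {
        return (a > b) - (a < b);
    }

    /* O(n^2) pass over the data; cmp() dominates the runtime as n grows. */
    static long count_inversions(const long *v, size_t n) {
        long inv = 0;
        for (size_t i = 0; i < n; ++i)
            for (size_t j = i + 1; j < n; ++j)
                if (cmp(v[i], v[j]) > 0)
                    ++inv;
        return inv;
    }

    int main(void) {
        long data[] = {5, 3, 8, 1, 9, 2, 7};
        size_t n = sizeof data / sizeof data[0];
        /* n*(n-1)/2 = 21 calls to cmp() here; at n = 100000 it's about
           5 billion calls, which is where a tiny shave starts to matter. */
        printf("inversions: %ld\n", count_inversions(data, n));
        return 0;
    }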
0.02% means you’re saving a fraction of a second for every hour of runtime (roughly 0.7 seconds per hour). A lot of those have to add up before anyone notices.
Better to spend that time and effort on things that actually bring value. These kinds of micro-optimizations can also make the code unnecessarily complicated and difficult to work with, which is a hindrance to the optimizations that truly matter.
In a one-off program, or something that’s already fast enough to finish in a few seconds, yeah, the time is better spent elsewhere.
I did specifically say a compiler, though. Compilers are CPU-bound, with a huge number of people or CI build agents waiting on every run, which makes them a good candidate for squeezing out extra performance in places where it doesn’t hurt maintainability. 0.02% here, 0.15% there, etc., and even a 1% total improvement is a couple of seconds less sitting around waiting per Jenkins build.
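Back-of-the-envelope, assuming a hypothetical build of around four minutes and a few hundred CI runs per day (both numbers invented purely for illustration):

    0.01 * 240 s ≈ 2.4 s saved per build
    2.4 s * 500 builds/day ≈ 20 minutes of compute per day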
Also keep in mind that adding features or making large changes to a compiler is likely bottlenecked by bureaucracy and committee, so there’s not much else to do.
I saw an article last week about a one-liner they were adding to the Linux kernel that would reduce the startup time by 0.03 seconds, and let me tell you, I was relieved.