This is what a lack of competition looks like.
However… Twice the price of 4nm? The gains are fairly marginal from what I gather. I don’t think many will bother.
It’s both a lack of competition and the end of Moore’s law. We’ve effectively reached the end of silicon gate-size scaling, and the tooling complexity required to keep shrinking process nodes and increasing transistor density is growing exponentially, so semiconductors no longer get cheaper… and it’s starting to push these cutting-edge nodes outside of economic viability for consumer products. I’m sure TSMC is taking a very healthy profit cut, but the absolute magic they have to work to make 2nm function at all is beginning to be too much.
I was under the impression that anything under like 10nm was just marketing and doesn’t actually refer to transistor density in any meaningful way?
The number has some connection to transistor density, in the sense that a lower number generally means higher density. However, there is no physical feature on the chip that is actually 3nm in length.
This has been true since the late 90s probably.
Late 90s was 350nm down to 180nm (known as 0.35um and 0.18um respectively). Things were still pretty honest around then.
2010s is probably where most of the shenanigans started.
It is marketing, but it does have a meaningful connection to the litho features; the connection just isn’t absolute. For example, Samsung’s 5nm is noticeably more power hungry than TSMC’s 5nm.
I’m of the opinion that this is why liquid cooling is so important to next-gen hardware. I think they’re going to start spreading the chips out more and sandwiching them, like with the GH200s Nvidia is working on.
Absolutely. 3D stacking is becoming viable too, as AMD has proven with their X3D chips, which stack massive gobs of L3 cache on top of the logic dies. Vertical stacking and sheer die size are only going to push total power density higher.
Liquid cooling has become more necessary because processors and GPUs have become outrageous power hogs. Desktops needing 1,000-watt PSUs is just absurd.
That’s not really true, except at the ultra high end. My 4070 barely draws more than my old 1070. The 4080 draws the same as a 3080 with double the performance.
I would argue water cooling is far less needed today. What has changed is Nvidia selling chips that would have been considered extreme aftermarket overclocking 10 years ago.
It’s been talked about a lot. Lots of people have predicted it.
It does eventually have to end though. And I think even if this isn’t the end, we’re close to the end. At the very least, we’re close to the point of diminishing returns.
Look at the road that got us here: we reached the smallest features the wavelength of light could produce (and people said Moore’s Law was dead), so we used funky multilayer masks to make things smaller, and Moore lived on. Then we hit the limits of masking and again people said Moore’s Law was dead, so ASML created a whole new light source with a much shorter wavelength (EUV), and Moore lived on.
But there is a very hard limit that we won’t work around without a serious rethink of how we build chips: the width of the silicon atom. Today’s chips have pathways that are in many cases well under 100 atoms wide. Companies like ASML and TSMC are pulling out all the stops to make things smaller, but we’re getting close to the limit of what’s possible with the current approach to chip production (using photolithography to etch transistors onto silicon wafers). Not possible as in what we can engineer, but possible as in what the laws of physics will allow.
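For a sense of scale, here’s a quick back-of-the-envelope (assuming a Si-Si lattice spacing of roughly 0.235 nm; the feature widths are illustrative round numbers, not any specific foundry’s specs):

```python
# Back-of-the-envelope: how many silicon atoms wide is a given feature?
# Assumes ~0.235 nm Si-Si spacing in the crystal lattice; the feature
# widths below are illustrative, not real foundry dimensions.
SI_SPACING_NM = 0.235

for feature_nm in (20, 10, 5):
    atoms = feature_nm / SI_SPACING_NM
    print(f"{feature_nm:>2} nm feature ~ {atoms:.0f} silicon atoms wide")
```

Even the 20nm-class features come out under 100 atoms wide, which is why there’s so little headroom left.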
That’s going to be an interesting change for the industry: it will mean slower growth in processing power. That won’t be a problem for the desktop market, as most people only use a fraction of their CPU’s power, but it will mean the end of the ‘more efficient chip every year’ improvement for cell phones and mobile devices.
There will of course be customers calling for more, bigger, better, and I think that will be served by exactly that: more and bigger. Chiplets will become more common, complete with higher TDPs. That’ll help squeeze more yield out of an expensive wafer, since the discarded parts will contain fewer mm^2 (see the sketch below). I wouldn’t be surprised to see watercooling become more common in high-performance workstations, and I expect we’ll start to see more interest in centralized watercooling in the server market. The most efficient setup I’ve seen so far basically hangs server mainboards on hooks and dunks them in a pool of non-conductive liquid. That might even lead to a rethink of the typical vertical rack setup toward something horizontal.
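The yield argument can be sketched with a toy Poisson defect model (the defect density and die areas below are made-up illustration values, not real foundry data):

```python
import math

# Toy Poisson yield model: P(die is defect-free) = exp(-D * A),
# where D is defect density and A is die area. All numbers are
# hypothetical, chosen only to show the shape of the argument.
D = 0.002  # assumed defects per mm^2

def die_yield(area_mm2: float) -> float:
    return math.exp(-D * area_mm2)

print(f"600 mm^2 monolithic die: {die_yield(600):.0%} yield")
print(f"150 mm^2 chiplet:        {die_yield(150):.0%} yield")
# A single defect scraps all 600 mm^2 of the monolithic die, but
# only 150 mm^2 of one chiplet, so fewer mm^2 end up discarded.
```

Same total silicon, but splitting it into four chiplets means each defect takes out a quarter as much wafer area.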
It’s gonna be an interesting next few years…
If we’re talking about what Moore originally formulated, the law isn’t just about transistors. He actually claimed that the cost per integrated component halves every x months. The exact value of x was tweaked over the years, but we settled on 18 months.
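In formula form (the 18-month halving period is the settled-on value from above; the starting cost of 1.0 is an arbitrary baseline):

```python
# Cost-halving form of Moore's law: cost(t) = cost(0) * 2**(-t / 18),
# with t in months. The starting cost of 1.0 is an arbitrary baseline.
def predicted_cost(months: float, start_cost: float = 1.0,
                   halving_months: float = 18.0) -> float:
    return start_cost * 2 ** (-months / halving_months)

for years in (3, 9, 18):
    cost = predicted_cost(years * 12)
    print(f"after {years:>2} years: {cost:.6f}x the cost per component")
```

Compounding like that is exactly how an industry can end up an order of magnitude behind the curve.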
If we were just talking about transistor count, the industry has kept up. When we bring price into the mix, then we’re about an order of magnitude behind where we “should” be.
When he wrote it, the first integrated circuit had only been invented about 6 years prior. He was working from only 6 years of data and figured the price per integrated component would continue to drop for another decade. It’s remarkable that it lasted as long as it did, and I wish we could find a way to be happy with that. We’ve done amazing things with the ICs we have, and probably haven’t found everything we can do with them. If gate sizes hit a limit, so what? We’ll still think of new ways to apply the technology.
I mean, technically Moore’s law has been dead for 15 years. The main reason we went to multi-core was that we couldn’t keep up otherwise.
And now chiplet systems and 3D V-Cache.
The reason we went multicore was that frequencies weren’t scaling, but transistor counts still were. We’ve been around a 200-300ps clock cycle for a long time now.
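For reference, converting those cycle times to clock frequencies (plain unit conversion, nothing assumed):

```python
# Clock frequency is the reciprocal of the cycle time: f = 1 / T.
for period_ps in (200, 250, 300):
    freq_ghz = 1000 / period_ps  # 1000 ps per ns; cycles per ns = GHz
    print(f"{period_ps} ps cycle -> {freq_ghz:.2f} GHz")
```

That’s the familiar 3-5 GHz band consumer CPUs have been stuck in for years.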
It’s not even entirely a tooling issue; the gates are now getting so small that interference from quantum effects (such as electrons tunneling straight through the gate) is becoming a genuine problem.
Yup, that’s basically what I mean. Free transistor density increases via node shrinks to improve processor performance are long gone, and the cost of getting usable yield out of the smaller nodes is now increasing exponentially due to the limits of physics.
No, there’s still competition. Samsung and Intel are trying, but are just significantly behind. So leading the competition by this wide of a margin means that you can charge more, and customers decide whether they want to pay way more money for a better product now, whether they’re going to wait for the price to drop, or whether they’ll stick with an older, cheaper node.
And a lot of that will depend on the degree to which their customers can pass on increased costs to their own customers. During this current AI bubble, maybe some of those can. Will those manufacturing desktop CPUs or mobile SoCs be as willing to spend? Maybe not as much.
Or, if the AI hype machine crashes, so will the hardware demand, at which point TSMC might see reduced demand for their latest and greatest node.