37 Comments
Zingam - Monday, September 10, 2012 - link
So if 14 nm comes on time, what then? It appears that development will slow to a crawl. I have no idea how it will impact CPU development, but shrinking by just 2-4 nm every 2-3 years for the next 10 years won't bring much progress.
ezorb - Monday, September 10, 2012 - link
Think in terms of percentages rather than absolute numbers; this will serve you better in most things in life.
menting - Monday, September 10, 2012 - link
I would think that shrinking 2-4 nm every 2-3 years would be huge progress, because the percentage shrink will be greater and greater, and we're already hitting the limits of lithography.
Death666Angel - Monday, September 10, 2012 - link
Don't look at the absolute numbers; put them in relation to one another and the 2-4 nm jumps look quite good.
130 nm was introduced in 2000 and in 2012 we had 32 nm (going by a table on Wikipedia): 4.0625:1
We have 22 nm in 2012 and will have 5 nm in 2022 (approx.): 4.4:1
Nothing wrong with that in my book. :-)
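For anyone who wants to double-check that arithmetic, here is a quick sketch; the node sizes and years are simply the ones quoted above, nothing more authoritative than a Wikipedia table:

```python
# Compare relative (not absolute) linear shrink over two roughly 10-12 year spans.
def shrink_ratio(start_nm, end_nm):
    return start_nm / end_nm

print(shrink_ratio(130, 32))  # 2000 -> 2012: ~4.06x
print(shrink_ratio(22, 5))    # 2012 -> ~2022: 4.4x
```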
bbordwell - Monday, September 10, 2012 - link
It would be the same level of progress as we have seen over the past decade. Process shrinks have always brought about a 30% decrease in die size for the same chip, and this is what Intel is still aiming for in its future processes. The only question is whether or not Intel will be able to keep up their time frame for shrinks, as they are really going to start pushing the laws of physics soon.
Arbie - Tuesday, September 11, 2012 - link
"... they are really going to start pushing the laws of physics soon."
I remember hearing the same thing in the mid-1980s. True then, true now.
JKflipflop98 - Tuesday, September 11, 2012 - link
We've been pushing the laws of physics for years and years. This is no different.
Jaybus - Thursday, September 13, 2012 - link
It is different. At 14 nm, fewer than 30 Si atoms span the channel and quantum tunneling is already significant. It will be far worse at each successive node. Ultimately, at about 2.5 nm, the channel will be spanned by just 5 Si atoms, and the Heisenberg uncertainty principle tells us that it will no longer be possible to know that an electron is contained within a single trace. So I think we are witnessing the end of Moore's law. First, the rate at which transistor counts double will slow over the next 10 years, and then it will hit zero. Why do you think Intel, IBM, and others are investing in research into moving from electronics to photonics?
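A rough sanity check on those atom counts; this is a back-of-the-envelope sketch that assumes a silicon lattice constant of about 0.543 nm, not a figure from the article:

```python
# Roughly how many silicon lattice spacings fit across a channel of a given length?
SI_LATTICE_NM = 0.543  # approximate lattice constant of crystalline silicon

def lattice_sites_across(channel_nm):
    return channel_nm / SI_LATTICE_NM

print(round(lattice_sites_across(14)))   # ~26 -- "fewer than 30" across a 14 nm channel
print(round(lattice_sites_across(2.5)))  # ~5  -- consistent with the 5-atom figure above
```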
sefsefsefsef - Monday, September 10, 2012 - link
Each of those jumps in the figure above reduces the gate size by ~30%, which is a linear measurement. The trick is that CPUs are 2-dimensional designs (ignore the 3rd dimension for now). This means you get that 30% reduction in each of 2 dimensions (because they aren't only reducing gate length when they make a smaller node), which means that your transistor density (in terms of transistors/area) approximately doubles each time you reduce transistor gate length by ~30%.
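A quick numeric sketch of that scaling argument (illustrative only, no particular process implied):

```python
# A ~30% linear shrink scales both dimensions, so area shrinks to ~0.7^2 = ~0.49x
# and transistor density (transistors per unit area) roughly doubles.
linear_scale = 0.7               # each dimension shrinks to ~70%
area_scale = linear_scale ** 2   # ~0.49: the same circuit needs about half the area
density_gain = 1 / area_scale    # ~2.04: about twice as many transistors per mm^2

print(area_scale, density_gain)
```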
Arbie - Tuesday, September 11, 2012 - link
I agree. We're talking nanometers for chrissake. That's so small you can't even see them.
Sothryn - Tuesday, September 11, 2012 - link
Using the example from the article, "100mm^2 die at 22nm would measure only 6.25mm^2 at 5nm":

nm   die size (mm^2)
22   100
14   50
10   25
 7   12.5
 5   6.25

Each generation is half the die size of the previous generation. So, despite the nm value seeming to slow, the die size shrinkage continues at a staggering rate.
Sothryn - Tuesday, September 11, 2012 - link
Never trust formatting:

nm   die size in mm^2
22   100
14   50
10   25
 7   12.5
 5   6.25
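For reference, here is a small sketch that reproduces that table from the article's example, assuming the die area simply halves at each listed node:

```python
# Generate the die-size table: area is assumed to halve at each successive node.
nodes_nm = [22, 14, 10, 7, 5]
area_mm2 = 100.0  # starting die size at 22 nm, from the article's example

for node in nodes_nm:
    print(f"{node:>2} nm  {area_mm2:g} mm^2")
    area_mm2 /= 2  # each new node -> roughly half the area for the same design
```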
jjj - Monday, September 10, 2012 - link
Except you are looking at the wrong thing; the math for die size is not relevant, since power consumption doesn't scale with it. And after all, meaningful compute is not all that meaningful in a connected device.
Kevin G - Monday, September 10, 2012 - link
Power consumption doesn't scale down at the same rate, but it does scale down, if ever so slightly, with raw process technology.
The other thing to look at is interconnects and the power consumed going off die. Say at 7 nm a manufacturer can integrate a full 1 GB of eDRAM on a reasonably sized die; that significantly reduces power consumption compared to having 1 GB of RAM in an external package.
There has also been research into near-threshold voltage operation. Intel's current experiments into this have yielded a Pentium-class device running at as low as 2 milliwatts off of a solar cell. If you can run it off of solar with enough energy to spare to recharge a small battery, you may never even need to be connected to start with.
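To see why near-threshold operation is so attractive, here is a rough sketch using the standard dynamic-power relation P ≈ αCV²f; the capacitance, voltage, and frequency values below are made up for illustration and are not Intel's figures:

```python
# Dynamic switching power scales roughly as P = alpha * C * V^2 * f.
# Running near the threshold voltage cuts V (and usually f), so power drops sharply.
def dynamic_power(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts**2 * freq_hz

# Hypothetical numbers purely for illustration:
nominal = dynamic_power(0.1, 1e-9, 1.0, 1.0e9)   # ~1 V, 1 GHz
near_vt = dynamic_power(0.1, 1e-9, 0.45, 100e6)  # ~0.45 V, 100 MHz, near threshold

print(nominal, near_vt, nominal / near_vt)  # roughly a 50x drop in switching power
```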
uibo - Monday, September 10, 2012 - link
A while back NVIDIA posted some slides ("nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless") in which it projected that, starting from 28nm, the price gains from shrinking are nearing zero. The development of these new processes is so complex and expensive that the big guys are forced to collaborate more in their research.
In 2010, an Australian team announced that they had fabricated a single functional transistor out of 7 atoms that measured 4 nm in length.
Being so close to the limit, I am pretty sure everybody is going to drag this out for as long as possible.
DanNeely - Monday, September 10, 2012 - link
The smaller fabs might want to drag it out, but Chipzilla is unlikely to stop throwing billions into R&D to maintain its ~1 year process lead over the competition; being able to expand that lead to 2 or 3 years would make them jump for joy. Avoiding falling even farther behind means that Global Foundries/TSMC/Samsung/IBM can't stall even if they want to.
drmrking - Monday, September 10, 2012 - link
Last year's AMD conference had a presentation from ARM that basically explained that the issue going forward was not die shrinkage but how to dissipate heat and manage power. Shrinking the die does not get you more performance per square millimeter because you can't dissipate the heat fast enough. The best they said we could hope for was a bunch of different-functionality silicon that is switched on and off as needed, but the power sloshing was a real challenge. Fascinating presentation. Mike.
Mr Perfect - Tuesday, September 11, 2012 - link
I vaguely recall that; it was their "dark silicon" presentation, right? They said that while today's chips dissipate heat quickly enough to have the whole die active at once, future processes would have to power-gate increasingly large sections of the die to keep thermals in check.
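The arithmetic behind the "dark silicon" worry goes roughly like this: if each node doubles transistor density but only modestly improves the energy per switch, then within a fixed power budget an ever smaller fraction of the die can be active. The scaling factors below are assumptions for illustration, not ARM's numbers:

```python
# Toy dark-silicon model: density doubles per node, but switching energy only
# improves by ~1.4x once voltage scaling stalls, so at a fixed power budget a
# shrinking fraction of the die can be lit up at once.
active_fraction = 1.0
for node in ["28nm", "20nm", "14nm", "10nm", "7nm"]:
    print(f"{node}: ~{active_fraction:.0%} of the die can be active")
    active_fraction *= 1.4 / 2.0  # energy gain per node / density gain per node
```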
bobbozzo - Monday, September 10, 2012 - link
s/prefect/perfect/
1008anan - Monday, September 10, 2012 - link
Anand,
The way I prefer to think about this is in terms of single precision gigaflops per TDP watt and double precision gigaflops per watt (for HPC, servers and desktops).
Could you share your own perspectives on how fast single precision gigaflops per watt will grow over time?
Right now the fastest GPGPU accelerator is 20 single precision gigaflops per watt, with smartphone SoCs running in the 2 to 8 gigaflops per watt range. What gigaflops per watt do you see at 20 nm, 14 nm, and 10 nm for the ARM ecosystem, and what do you see for Atom N, Atom Z, and 10 watt TDP Haswell at 22 nm; for Intel at 14 nm, at 10 nm, at 7 nm, etc.?
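For anyone unfamiliar with the metric, it is just peak (or sustained) throughput divided by power draw; here is a quick sketch with hypothetical device numbers, not measurements of any real product:

```python
# Gigaflops per watt: peak (or sustained) GFLOPS divided by power draw in watts.
def gflops_per_watt(peak_gflops, watts):
    return peak_gflops / watts

# Hypothetical examples in the ranges mentioned above:
print(gflops_per_watt(4500, 225))  # a big GPGPU accelerator: ~20 GFLOPS/W
print(gflops_per_watt(20, 4))      # a smartphone SoC GPU: ~5 GFLOPS/W
```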
chadwilson - Tuesday, September 11, 2012 - link
What about operations that are more integer-based? Using FLOPS as a measurement of performance completely ignores that segment.
CaioRearte - Monday, September 10, 2012 - link
This is why solutions like Ubuntu for Android are a hint of the future of computing. Ubiquitous means processing power will be everywhere; you'll simply change which interface you use for each task.
bh192012 - Monday, September 10, 2012 - link
The thing is, then we should be able to get 1/2 of current smartphone power at single-digit costs, which is more than enough for a "smart table." Problem is, nobody needs or wants a smart table.
Smart TVs and cheap devices to connect to dumb TVs (FXI Cotton Candy) are basically the end. The inputs/outputs didn't shrink. Maybe once we can print cheap touch screens on tables, fridges, and walls, that would be something.
Ronamadeo - Monday, September 10, 2012 - link
What does "Meaningful Compute" mean? Googling it just brings up other reports of this intel announcement. This entire article is about a phrase that was not explained.Ronamadeo - Monday, September 10, 2012 - link
What does "Meaningful Compute" mean? Googling it just brings up other reports of this intel announcement.softdrinkviking - Monday, September 10, 2012 - link
Doesn't seem to be getting so cheap. Yes, it's cheaper than it was, but not single-digit costs; more like 3-4 digits depending on size/quality.
Death666Angel - Tuesday, September 11, 2012 - link
I can get a 24" 1920x1200 LCD monitor for 160€, 23" 1920x1080 for 110€. When I got my first 19" LCD monitor (1280x1024) it cost 450€, my first 24" LCD monitor (1920x1200) was 540€. And that is the whole package (panel, electronics, outputs...). Of course this stuff is getting super cheap and continuing to get even cheaper.softdrinkviking - Tuesday, September 11, 2012 - link
"it's cheaper than it was..."but it's going much slower than CPUs, combined with the fact that good CRTs were way better looking than any TN panel is, makes it feel kinda stagnated.
True, there have been lots of new IPS LED models out in the past 3 years or so, but they are mostly 23" and 16:9, and the image is better than TN but not as good as the best IPS screens from 10 years ago.
So, what I'm saying is that, at the rate we're going now, the "bathroom mirror" display is going to be much more expensive than our cheap embedded processing unit.
And that I wish 4K IPS goodness would hurry up and come out and then get within purchasing distance. Hopefully before I kick the bucket.
Azethoth - Monday, September 10, 2012 - link
Probably in our lifetimes. Also, one of these process nodes + denser 3D packaging means AI!
Henk Poley - Tuesday, September 11, 2012 - link
Then there is this graph with current data: http://www.semiwiki.com/forum/content/1388-scaries...
(Though I suspect the chip manufacturing processes that are currently being rolled out will come down in price eventually.)
Crazy1 - Tuesday, September 11, 2012 - link
I think the point of this article should be that the need for computational power is becoming less important than form factor and interface.
Some day soon, your couch will tell your friends and stalkers which cushion you're sitting on, and giant advertising firms will personalize ads for you based on your every move, facial expression, and the contents of your refrigerator.
crust3 - Tuesday, September 11, 2012 - link
Limited transistor knowledge here... when we say 28 nm node, what exactly does it mean? Is it the channel length or is it something else?
speculatrix - Tuesday, September 11, 2012 - link
As someone suggested, until we can make displays very cheaply, i.e. printable, it doesn't really matter how densely packed we can make compute nodes.
We need flexible (more roll-up than fold-up) displays initially, but in the long run I think that the HUD, recently promoted as Google Glasses, will promote ubiquitous wearable computing, because at the end of the day people can't spend 24x7 getting their smartphone/phablet/tablet out and staring at it to the exclusion of the outside world!
Quality touchscreens have rewritten the book in terms of user input; now we need a better way for people to be fed images and video.
3ogdy - Tuesday, September 11, 2012 - link
After reading this post I was reminded of this video: https://www.youtube.com/watch?v=6Cf7IL_eZ38
We're getting there - we're closer with every single day that passes...and of course, it's gonna be awesome!
Part 2:
https://www.youtube.com/watch?v=jZkHpNnXLB0&fe...
So I assume that they were talking about "Life with 5nm technology around you" in the video.
Great News, Anand.
colonelciller - Tuesday, October 2, 2012 - link
Maybe I don't understand what compute is, but my understanding is that the rendering process that goes from a 3D model in Maya, 3DS Max, Mudbox, Vue, etc. to the final image after multiple passes = COMPUTE.
That would imply that the server farms that are employed to render single frames will now be replaced by a single chip running on a smartphone by 2012.
If this is what Intel is implying then I call BS; if not, then my mistake.
Even if it were possible to render Avatar on a cell phone in 8 years (which I'd say is patently ludicrous), the demands of the compute crowd will always scale to push processing power beyond the breaking point.
In short, "Meaningful Compute" will always be a meaningless phrase, and compute will always demand 100,000x more power than is available today on the most powerful single chip in existence.
colonelciller - Tuesday, October 2, 2012 - link
*2020... not 2012
tjoynt - Monday, March 11, 2013 - link
This is all very cool and awesome, but we were talking about ubiquitous computing, "free" (zero-cost) compute, and low-cost display walls in my Human/Computer Interface classes 15 years ago. :P It's great that there is a "real" time frame now, though.
There are bigger problems than the cost of the silicon when it comes to truly ubiquitous computing devices, however. Yay, my cup has a microprocessor in it now. To do anything interesting it will also need sensors and probably a display and network connectivity. All of this integrated in an SoC? Cool, now we need some very clever software to actually do something useful and some very, very clever people to think of things that will justify my paying $5 for a "smart" cup rather than $.50 or $.05 for a dumb one.