LOL, no way... You can't stop the progress, once there's better HW out the software requirements will catch up soon enough.
Besides, overclocking is a fun thing to do! :D
New crunching machine! GO! GO!
The next fab that is built should be equipped to build graphene based chips. Comp Engineers need to start designing for this process tech immediately. Why wait 10 years when you are perfectly capable of producing chips based on this tech in 5.
Transistor speed is not indicative of actual real world performance. Prescott proved that...
Just think, you could use your outdated CPUs as pencils :D
graphene, graphite, whatever..
Very true... Even if we built this tomorrow, would we really expect to clock it that high?
I think a clock that high wouldn't give the transistors time to stabilize their signal... Sure we could probably clock it higher, but until they have a chip designed for those speeds, they are only theoretical or some proof of concept.
Well, assuming an ideal implementation using current designs with under 32 stages in their pipeline, the ballpark estimate for max clock speed is under 20GHz, plus or minus 8%.
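For what it's worth, here's the back-of-the-envelope version of that estimate as a quick Python sketch. The FO4 gate delay and the logic depth per stage are numbers I'm assuming for illustration, not figures from the article:
Code:
    # Ballpark max-clock estimate from pipeline depth.
    fo4_delay_ps = 2.5        # assumed fan-out-of-4 gate delay, picoseconds
    logic_depth_fo4 = 20      # assumed FO4 delays of logic per pipeline stage

    stage_delay_ps = fo4_delay_ps * logic_depth_fo4   # 50 ps per stage
    max_clock_ghz = 1000.0 / stage_delay_ps           # period in ps -> GHz
    print(f"ballpark max clock: ~{max_clock_ghz:.0f} GHz")  # -> ~20 GHz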
Unfortunately this technology also doesn't address any of the design problems that actually exist, largely because transistor switch speed is not the limiting factor for performance in modern CPUs. The top 7 performance limiters are all implementation mistakes in software [poor locality of code/data, conflicts in execution, etc.].
The following 15 deal with the inability to get enough work to the CPU: keeping the pipeline filled, inaccurate predictions, power limitations, etc.
Also, given that graphene transistors are 240 nanometers [bigger and less energy efficient than Intel's 180nm process (which had a minimum gate length of 193nm)], the decision is obvious from a design standpoint:
If you want the best performance and/or performance per watt, use standard lithography at 40nm or lower.
If you require 100GHz in a circuit made of under 3 million transistors, graphene might be worth the extra cost/time/effort.
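To put that size gap in perspective, a crude sketch, assuming transistor area simply scales with the square of the feature size (an idealization, since real layouts don't scale that cleanly):
Code:
    # Crude density comparison between the quoted feature sizes.
    graphene_nm = 240.0   # graphene transistor size quoted above
    silicon_nm = 40.0     # standard lithography node
    density_ratio = (graphene_nm / silicon_nm) ** 2
    print(f"40nm Si packs roughly {density_ratio:.0f}x more transistors per area")  # ~36x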
Big deal, Intel had 20GHz+ transistors at room temperature in 2000 IIRC... yet they never managed to push NetBurst, or any other chip for that matter, past 3.7GHz...
still, this is interesting news... even if it will take more than a decade to make chips based on this...
As someone who does research with graphene, I must point out that these devices have no intention of ever working in a digital environment. The on/off ratio of graphene devices is pathetically low due to a lack of band gap, or in the case of bilayer graphene a very tiny band gap during an applied field. The large challenge of making a reliable digital circuit that does not waste power out of graphene is still up in the air.
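To illustrate why the band gap matters so much for the on/off ratio, here is a toy calculation using a simple thermionic (Boltzmann) limit. Real graphene FET behavior is messier than this, so treat the outputs as order-of-magnitude illustrations only:
Code:
    import math

    kT_eV = 0.0259  # thermal energy at room temperature, eV

    def on_off_ratio(band_gap_eV):
        # Toy thermionic limit: off-current suppressed by exp(-Eg/kT)
        return math.exp(band_gap_eV / kT_eV)

    print(f"Si (Eg ~ 1.12 eV):              {on_off_ratio(1.12):.1e}")
    print(f"bilayer graphene (Eg ~ 0.1 eV): {on_off_ratio(0.10):.1f}")
    print(f"pristine graphene (Eg = 0):     {on_off_ratio(0.0):.1f}")  # 1.0 -> no 'off' state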
Ha, ha... more IBM nonsensical, academic crud...
http://gtresearchnews.gatech.edu/new...-terahertz.htm
They chest thumped a few years ago about a 500 GHz transistor.
BTW, theoretically, today's silicon CMOS transistor can switch as fast as 400 GHz (reported gate delays of 1.25 ps at the 65 nm node, which works out to 400 GHz if one period is two gate delays)... this cruddolla about it smoking a silicon transistor is there to fool the weak minded, and it's a poorly written commentary by the author.
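The arithmetic behind that 400 GHz figure, as I read it (taking one period as two gate delays, rise plus fall, is my interpretation of those numbers):
Code:
    gate_delay_ps = 1.25              # reported gate delay at the 65nm node
    period_ps = 2 * gate_delay_ps     # assume rise + fall per clock period
    f_ghz = 1000.0 / period_ps        # ps period -> GHz
    print(f"~{f_ghz:.0f} GHz")        # -> 400 GHz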
Look, the paper's statements are still sound. They are comparing the same .25um gate length between the best Si tech and now graphene. Graphene will trump Si in pure switching speed; it's a fundamental material property due to the ultra-thin body nature. However, as I said before, competing on digital speed will be a whole different ballgame.
I don't really have a REASON to run my Phenom II 955 at anything above its stock 3.2GHz when I only use it for games or encoding a Blu-ray into an mp4 overnight; none of my games need more than that even with my 5850, and the encoding is done well before I wake up. But that does not stop me from pushing it towards 4GHz anyways :D
Most people here are here because it's fun to play with the numbers. If that means making a 100GHz CPU the size of a hard drive go 200GHz, let it be so.
Ah come on guys, why so much IBM bashing? If it wasn't for them, the open standard PC would not exist, and we would all be using factory-built off-the-shelf computers, like Macs :(
And Intel, AMD, Microsoft and many other corps would be nobodies..
Besides, some day we will all benefit from the R&D they're doing today.
Then patenting them and licensing them to Intel et al. and raking in the royalties.
Clock speeds have been stuck at 3-4GHz since the P4 (90nm) era for air-cooled CPUs; this won't change in Si tech.
Si tech won't be pushed aside by carbon (graphene/diamond), germanium or III-V based CPU tech.
Megahurtz war is over, begun the core war has.
The ARM and Core will fight it out for Total Annihilation - "What began as a conflict over the transfer of consciousness from flesh to machines escalated into a war which has decimated a million worlds. The Core and the Arm have all but exhausted the resources of a galaxy in their struggle for domination. Both sides now crippled beyond repair, the remnants of their armies continue to battle on ravaged planets, their hatred fueled by over four thousand years of total war. This is a fight to the death. For each side, the only acceptable outcome is the complete elimination of the other."
It could take some time if this prophecy is true. When it's over we might see a computational medium other than silicon in consumer grade CPUs.
Look :) ... I don't want to take away from the technical prowess of the achievement, but as you point out, when they hit 35 to 40 nm Lg and a switching speed > 1 THz, then I will be impressed.
IBM does this all the time: put out a popularized public statement on a purely academic science project. It is pure academic, nonsensical crap and does not belong on the front page of some two-bit technobabble website. A publication of the achievement garners the professional respect it deserves and should stay in the IEEE-sanctioned journals. When they can make an actual product that exceeds what is on the market for those who can use it, then make that announcement; otherwise it is nothing more than arrogant boasting.
That is so true; to get more done on CPUs nowadays they have simply added more cores, while the speed is lowered somewhat because of heat issues.
We might be able to make a transistor that is 10x faster, but when it's used in a complex assembly like a CPU with millions of other transistors, what will the heat output be, and will it be forced to clock down in order to be manageable?
Not only do we have a transistor speed race and the search for new materials out of which to make them, but a cooling tech race will also follow. Simple air coolers are already reaching the limits of the heat they can remove.
Going faster while generating less heat is what will be required for new tech to become commonplace and installed on everyday machines, coupled with advancements in cooling.
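A rough sketch of why that's so hard: dynamic power scales linearly with clock and quadratically with voltage, and higher clocks usually demand more voltage on top. All the numbers below are made up for illustration, not from any datasheet:
Code:
    base_f_ghz, base_v, base_w = 3.2, 1.30, 125.0   # assumed baseline CPU

    def scaled_power(f_ghz, v):
        # Dynamic power ~ f * V^2, scaled from the baseline point
        return base_w * (f_ghz / base_f_ghz) * (v / base_v) ** 2

    print(f"4.0 GHz @ 1.40 V: ~{scaled_power(4.0, 1.40):.0f} W")   # ~181 W
    print(f"5.0 GHz @ 1.50 V: ~{scaled_power(5.0, 1.50):.0f} W")   # ~260 W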
Will advanced self-contained water cooling units, perhaps like CoolIT's or others that have been modified to handle higher heat loads, or even micro phase-change units, be required?
It will be very interesting to see.
But I hardly think simple stock coolers like the ones used now by Intel or AMD will be sufficient to handle CPUs that break into the 4-5GHz range at stock settings on the average consumer's machine.
Intel's 1156 stock cooler can just about handle 4GHz on Clarkdale @ 80-85C; good air coolers can handle a hexacore @ 4GHz.
If the rumour is true, the 980X hexacore will be the first Intel CPU to ship with a heat-pipe based tower cooler. This makes sense, since the 975 stock cooler can't cope with it at stock speeds.
The CoolIT ECO significantly outperforms the Asetek LCLC 120 (AKA Corsair H50) and the CoolIT Domino. These units and other advanced coolers (e.g. CM V8, Danamix LMC, Titan Amanda) generally can't compete on price with a good direct-contact CPU air cooler, or on performance with a relatively basic watercooling loop. The problem is that anything but passive phase change or thermally pumped liquid cooling costs significantly more energy than a few good fans.
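In thermal-resistance terms, here is roughly what a cooler has to deliver; the temperatures and heat loads below are assumptions for illustration, not measured figures:
Code:
    t_junction_max_c = 85.0   # assumed max acceptable core temperature
    t_ambient_c = 25.0        # assumed room/case ambient

    for load_w in (95, 130, 200):
        r_required = (t_junction_max_c - t_ambient_c) / load_w   # C/W, CPU to air
        print(f"{load_w:>3} W load needs <= {r_required:.2f} C/W total")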
There is a fundamental limit to how much information processing density can be squeezed out of a given medium. Si will probably reach this at around 5nm gate length, possibly 2nm in FinFETs (IBM). Then areal density may be increased by stacking for a few generations (IBM again). Optical interconnects may help reduce TDP. But P = (1/2)CfV^2 isn't going to change, so packing more transistors into the same space means that for f to stay the same, V has to come down; but it can't go lower than the noise floor, and the noise floor increases with temperature...
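A quick numerical sketch of those two constraints; the 50-ohm node impedance, the bandwidth and the per-gate capacitance are arbitrary assumptions just to get the magnitudes out:
Code:
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def johnson_noise_v(temp_k, bandwidth_hz, r_ohms=50.0):
        # Johnson-Nyquist thermal noise voltage across an assumed 50-ohm node
        return math.sqrt(4 * k_B * temp_k * r_ohms * bandwidth_hz)

    def dynamic_power_w(c_farads, f_hz, v):
        # P = (1/2) * C * f * V^2 per switching node
        return 0.5 * c_farads * f_hz * v ** 2

    print(f"noise floor @ 300 K, 5 GHz BW: {johnson_noise_v(300, 5e9)*1e6:.0f} uV")
    print(f"noise floor @ 350 K, 5 GHz BW: {johnson_noise_v(350, 5e9)*1e6:.0f} uV")
    print(f"per-gate power (1 fF, 5 GHz, 1.0 V): {dynamic_power_w(1e-15, 5e9, 1.0)*1e6:.1f} uW")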
It was believed in the Si industry that a similar wall would be hit between 250nm and 90nm, but IBM came up with copper interconnects and CMP to get around that. They also invented the SOI technology used in current AMD CPUs. So don't dismiss IBM's R&D department.
I will be impressed when a PLL can generate a clean frequency at 1THz.
Oh wait. http://insidetech.monster.com/news/a...us-to-1000-ghz
IBM does great research, but the actual implementation is very expensive, which is where Intel wins. Cost per wafer would be ridiculous with graphene.