Considering how hot and big the single card is, it doesn't look too likely that we'll see a multi-GPU card anytime soon.

Quote:
Originally Posted by fade2green514
Yeah, but they will do a die shrink and put them together (maybe even use GDDR4) in time for the R600 ;)

Quote:
Considering how hot and big the single card is, it doesn't look too likely that we'll see a multi-GPU card anytime soon
Actually what die size is the G80, lol.
No, they won't do a die shrink in 3 months :eek: unless they've already been working on it for a while.

Quote:
Originally Posted by rodman
Even then, the power reduction would probably not be enough unless they clock it down a LOT.
Yields are already horrible, so no way they would have enough chips for dual-chip cards anyway.
What die size is the G80?
I think both have been working on smaller-die variants for some time. The G80/R600 cards were never supposed to be energy efficient. They were designed to be fast at all costs and get to market ASAP. You're going to have to wait until later next year for something that takes less energy.
I wonder if it will really have 2 power connectors :eek: (edit) Nope, I guess the retail versions will only have one; the prototype cards had two for testing. Either that or the GTX will have 2 and the GTS one.
bingo

Quote:
Originally Posted by ewitte
Sure, sure.. just like how the 9700 Pro gained 1000 points in 3DMark05 each year... right?

Quote:
Originally Posted by ewitte
The only time you ever really see any significant performance increases from drivers is when a new game just comes out: it's all rough, rushed, and unoptimized, and they're patching it and the drivers to fix the mistakes. DX10 is new. Vista is new. We may very well see big driver performance improvements.. but I think that 12000 (or some say 11000) score came from a WinXP setup.
Usually the first 3-4 months are pretty big with gains. Then it's pretty minimal from there. There were several times I got a 500-1000 point gain. It tapered back to 100, 50, or even nothing or negative after that. Usually that's with the newer versions, while the older versions drop a little.

Quote:
Originally Posted by ***Deimos***
The 12k number floating around for 06 supposedly came from a system with an E6600 @ 3.6, or 400 x 9. If you look @ the 06 rankings here on the forum, that's as fast as X1900 Crossfire @ around 750|850 with the same CPU behind it. For example, the number 7 score:
7. rob[GL] - 12004.00 - dual Radeon X1900 XT @ 756/828MHz - Intel Conroe @ 3.75GHz
So if it really puts up 12k for a single card, I'll certainly be looking @ it.
I agree! Sounds like it's going to drain stupid amounts of power. When are they going to work on improving the efficiency of these things?

Quote:
Originally Posted by Master_G
The stock cooler can deal with the heat, and it doesn't look like anything TOO special. No doubt this will have the heftiest power consumption of any card ever.....but it won't be 300W, 250W, or even pushing the 225W envelope nV gave themselves at stock. It will still eat up a lot of power, though, and OCing should only push that higher.
Let's assume hypothetically for a minute that we're taking the existing control logic from the 7900, doubling the execution units, and using similar 90nm fabrication. The 7900GT uses 1.2V at a stock 450MHz and OCs to about 550 without overvolting. The 7900GTX uses 1.4V at a stock 650MHz.. and OCs only a little bit (700) without overvolting. The increased MHz only slightly raises the power consumption.. but the voltage makes the big difference... the GTX draws almost twice the power of the GT.
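That ~2x figure checks out numerically. A minimal back-of-envelope sketch in Python, assuming the standard dynamic-power relation P ≈ C·V²·f with equal switched capacitance for both cards and ignoring leakage (the 1.7V/700MHz case is the cascade-overvolting scenario mentioned in the next post):

Code:
def relative_power(volts, mhz, base_volts=1.2, base_mhz=450.0):
    """Dynamic power relative to a 7900GT at stock 1.2V / 450MHz.

    Assumes P ~ C * V^2 * f with equal switched capacitance C and
    no leakage term -- a rough sketch, not a measurement.
    """
    return (volts / base_volts) ** 2 * (mhz / base_mhz)

print(f"7900GTX (1.4V, 650MHz): {relative_power(1.4, 650):.2f}x")   # ~1.97x
print(f"overvolted (1.7V, 700MHz): {relative_power(1.7, 700):.2f}x") # ~3.12x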
Quote:
Originally Posted by Vapor
G80 will be a big, complex chip. Like I rationalized earlier, they will use lower voltage and lower clocks to rein in the power consumption. Let's say, for example, 1.2V. But you get Macci, PCIce, OPPainter etc.. overvolting one of these devils on cascade... 1.3, 1.4, 1.5.. maybe even 1.7V!! That alone is going to be a HUGE increase in power. It will certainly put those 1000W PSUs to good use. And the increased clock rate on such an SLI system might just push even a high-end PSU past the breaking point.
http://www.pcwelt.de/news/hardware/v...081/index.html
Can anyone here translate this?
Do a google for PC welt and it should translate.
Graphics chip: GeForce 8800 GTX
Code name: G80
Retail price: approx. 650 euros
Transistors: about 700 million
Manufacturing process: 90 nanometers
Core clock: 575 MHz
Streaming processors (SPs): 128
SP clock: 1350 MHz
Theoretical pixel fill rate: 36800 MPix/s
Memory: 768 MB GDDR3
Number of memory chips: 12
Memory clock: 900 MHz
Memory interface: 384 bits
Memory bandwidth: 86.4 GB/s
Shader Model: 4.0
Direct3D version: 10
OpenGL version: 2.0
SLI: yes
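The derived figures are internally consistent. A quick sanity check in Python, assuming GDDR3's doubled data rate; note the fill rate implies 64 pixels per clock, which is whatever PC Welt assumed rather than a confirmed ROP count:

Code:
core_mhz = 575
mem_mhz = 900      # GDDR3 command clock; data moves twice per cycle (DDR)
bus_bits = 384

# Bandwidth = effective transfer rate x bus width in bytes.
bandwidth_gbs = (mem_mhz * 2 * 1e6) * (bus_bits / 8) / 1e9
print(f"memory bandwidth: {bandwidth_gbs:.1f} GB/s")    # 86.4 GB/s

# The table's 36800 MPix/s divided by the core clock gives the
# assumed pixels-per-clock figure.
print(f"implied pixels/clock: {36800 / core_mhz:.0f}")  # 64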
What does 650 euros end up being in U.S. dollars?
€650.00 = $819.848
http://www.xe.com/ucc
The retail price in the USA likely won't be a direct currency conversion, though.
G
Does anybody else here think that 1350MHz for the streaming processors is a really odd number? Why isn't it the chip clock of 575MHz? Why isn't it at least a simple fraction multiple? How is nVidia able to run a portion of the chip at such a drastically different clock rate? How has nVidia managed to double the clock speed compared to the 7900's pixel/vertex shaders? If it's true, what kind of IPC sacrifices were required (i.e., P4 vs Athlon)?
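For what it's worth, 1350/575 reduces exactly to 54/23, which is rational but nothing like a clean 2x or 5/2 relationship. A quick illustration in Python:

Code:
from fractions import Fraction

core_mhz, sp_mhz = 575, 1350

ratio = Fraction(sp_mhz, core_mhz)
print(ratio, float(ratio))  # 54/23, ~2.348 -- no simple multiple

# Best simple approximations for small denominators -- none land exactly,
# which hints the SP domain runs asynchronously from the core clock.
for max_denom in (2, 3, 4, 5):
    print(max_denom, ratio.limit_denominator(max_denom))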
In tech....Euros are usually 1:1 with USD.
In regards to the varying clocks, they could do it with the G7x as well, and did. As for why the difference is so much....we'll probably find out in a few weeks....
Yikes 0_0
It's time to make a trip with 4 of those guys with me :D
good news
Why do I get the strange feeling that a Kentsfield over a Conroe will be needed to get the most out of the G80 :rolleyes: I wonder if Intel and Nvidia worked something out with how the Intel chipset/CPU work together with the G80 (like using the 3rd and/or 4th core to help with rendering somehow) to get the most performance :confused:
They say the scores are like 1500 points higher with quad core, but I wonder if it's just because the CPU score is higher, thus giving you a higher final score. 3DMark05 and 03 do not include the CPU mark in the final score, so I say 05 would be a better bench for pure GPU performance.
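That effect is easy to model. A toy sketch with made-up weights (this is NOT Futuremark's actual 3DMark06 formula, just an illustration of folding a CPU term into the overall number): the faster CPU lifts the final score by roughly the amount described even though the graphics result never changes.

Code:
def overall(gpu_score, cpu_score, w_gpu=0.85, w_cpu=0.15):
    # Weighted harmonic mean; weights are hypothetical, for illustration.
    return 1.0 / (w_gpu / gpu_score + w_cpu / cpu_score)

gpu = 11000                          # hypothetical G80 graphics result
print(f"{overall(gpu, 2400):.0f}")   # dual-core CPU mark -> ~7154
print(f"{overall(gpu, 4300):.0f}")   # quad-core CPU mark -> ~8916
# Same GPU result, ~1760 points higher overall from the CPU term alone.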
By *getting the most out of it*, if you mean running 3DMarks, then yeah, maybe. A single-core A64 @ 2.4GHz+ (preferably 1MB L2) will be all you need for gaming. Don't! Let's not talk about Alan Wake YET.

Quote:
Originally Posted by rodman
New pics...
http://bbs.mychat.to/read.php?tid=578438
WOW...

Quote:
Originally Posted by rodman
that PCB looks completely different.
It's shorter. Still has 2 PCIe power connectors, but many more electrolytic capacitors. I can count quite a few large inductors too (as expected for a high-power-consumption device). Can't see under the heatsink to check the 12-memory-chip thing, though. Dual-slot, dense-fin heatsink similar to the 7900GTX's.. probably a bit bigger/heavier. I certainly hope nVidia can continue the tradition of the excellent 7900GTX heatsink.. elegant, quiet, and it runs the card cool.
From what I see elsewhere, it's 12000 with quad core and about 10500 with an X6800. I'm almost thinking it would be a good idea to grab a used 7950GX2 for cheap off someone upgrading and wait for the R600 ;) I already know I probably won't be able to hold out through the wait without something to keep me occupied.