Oh, so this will be their quad-SLI -.-
Well, it's just 2 x G92 GTS in one card; they should release the G100 instead of this crap.
::: Desktops - Intel *** Intel 2 :::
2 x Xeon E5-2687W *** Intel Core i7-3930K
EVGA SR-X *** Asus Rampage IV Extreme
96GB (12 x 8GB) G.Skill Trident X DDR3-2400 10-12-12 2N *** 32GB (8 x 4GB) G.Skill Trident X DDR3-2666 10-12-12 2N
3 x Zotac GTX 680 4GB + EK-FC680 GTX Acetal *** 3 x EVGA GeForce GTX 780 + EK Titan XXL Edition waterblocks
OCZ RevoDrive 3 x4 960GB *** 4 x Samsung 840 Pro 512GB
AVerMedia Live Gamer HD capture card
Caselabs TX10-D
14 x 4TB WD RE4 in RAID10 + 2 spares
4 x Corsair AX1200
::: Basement DataCenter :::
[*] Fibre-optic connection from the operator's core network
[*] Dell PowerConnect 2848 Ethernet Switch
[*] Network Security Devices by Cisco
[*] Dell EqualLogic PS6500E 96TB iSCSI SAN (40 x 2TB drives + 8 spare drives, RAID10 + spare configuration, 40TB of fail-safe storage)
[*] Additional SAN machines with Fusion-io ioDrive Octals (4 Octals in total)
[*] 10 x Dual Xeon X5680, 12GB DDR3, 2 x 100GB Vertex 2 Pro in RAID1
[*] 4 x Quad Xeon E7-4870, 96GB DDR3, 2 x 100GB Vertex 2 Pro in RAID1
[*] Monster UPS unit in case of power grid failure, backed up by a diesel-powered generator
I hope it's more than just "2 x G92s" -- my problem with all the AA at super-high resolutions is that it's sometimes just a texture fill-rate benchmark. Take a look at some of the benchmarks here (the Sapphire active cooling method is a good read, too):
http://www.elitebastards.com/cms/ind...1&limitstart=4
/rambling begin/
Nvidia is a fill-rate monster, but not necessarily the best at shader/ALU ops. While I'm digging some of the newer games out there (BioShock, CoD4, and Crysis), they seem more focused on AA at high resolutions than on more realistic physics, AI, gameplay, and visual effects -- although CoD4 is one of the best shooters I've played in a long time. I'm pretty content playing at 1680x1050 =)
We're in the first generation of DX10 games; game makers are still trying to figure out how to "do it fast" -- they can't keep up with the evolving hardware =) We've had multi-core machines for years, and developers are still trying to "do it right" on the CPU side.
/rambling end/
Why only a 30% performance increase? Possibly Amdahl's law (google it): the gain from a second GPU is capped by whatever fraction of the frame time can't be split across GPUs. I'm still not sure what benchmark they're using, though -- 30% faster in fill-rate, shader, or ALU ops?
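As a rough back-of-the-envelope on that Amdahl point (my own sketch with assumed numbers, not from any published benchmark): speedup = 1 / ((1 - p) + p/n), where p is the fraction of frame time that actually scales across n GPUs. A 1.3x result on two GPUs works out to only about 46% of the frame parallelizing:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n)
    # p = parallelizable fraction of frame time, n = number of GPUs.
    # The p = 0.46 case is reverse-engineered from the quoted 30% gain,
    # purely for illustration.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.46, 0.75, 0.95):
        print("p = %.2f -> %.2fx on 2 GPUs" % (p, amdahl_speedup(p, 2)))

    # p = 0.46 -> 1.30x on 2 GPUs
    # p = 0.75 -> 1.60x on 2 GPUs
    # p = 0.95 -> 1.90x on 2 GPUs

So even a perfect second GPU only buys you ~30% if more than half the frame is still serialized on the CPU/driver side.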
What I want from graphics cards is more performance with similar or lower power consumption, heat, and footprint -- is that too much to ask? =P
An XFX offering, I think.
http://www.tomshardware.com/2008/01/...force_9800gx2/