Xtreme SUPERCOMPUTER
Nov 1 - Nov 8 Join Now!
Athlon64 3700+ KACAE 0605APAW @ 3455MHz 314x11 1.92v/Vapochill || Core 2 Duo E8500 Q807 @ 6060MHz 638x9.5 1.95v LN2 @ -120'c || Athlon64 FX-55 CABCE 0516WPMW @ 3916MHz 261x15 1.802v/LN2 @ -40c || DFI LP UT CFX3200-DR || DFI LP UT NF4 SLI-DR || DFI LP UT NF4 Ultra D || Sapphire X1950XT || 2x256MB Kingston HyperX BH-5 @ 290MHz 2-2-2-5 3.94v || 2x256MB G.Skill TCCD @ 350MHz 3-4-4-8 3.1v || 2x256MB Kingston HyperX BH-5 @ 294MHz 2-2-2-5 3.94v
Why upgrade hardware in the first place if it's handling what you need it to? ONE of my HD4830s handles all the games I play with max settings @ 1680x1050, but I still have two of them in CF.
Power is good!
"Prowler"
X570 Tomahawk | R7 3700X | 2x16GB Klevv BoltX @ 3600MHz CL18 | Powercolor 6800XT Red Devil | Xonar DX 7.1 | 2TB Barracuda | 256GB & 512GB Asgard NVMe drives | 2x DVD & Blu-Ray opticals | EVGA Supernova 1000w G2
Cooling:
6x 140mm LED fans, 1x 200mm LED fan | Modified CoolerMaster Masterliquid 240
Asrock Z77 thread! | Asrock Z77 Extreme6 Review | Asrock P67 Extreme4 Review | Asrock P67 Extreme4/6 Pro3 thread | Asrock Z68 Extreme4 thread | Asrock Z68 Extreme4 Review | Asrock Z68 Gen3 Thread | 8GB G-Skill review | TK 2.ZERO homepage | P5Q series mBIOS thread
Modded X570 Aorus UEFIs
Interestingly enough, I would prefer it if Zotac were to make a GT 240 x2 card. Its TDP would be less than 150W, so it would only need one 6-pin cable. The clocks could easily be raised to GTS 250 levels (the GT 240 allows that headroom), while it would be cheap to manufacture (thanks to the 40nm scale of the GPUs) and it would also have DX10.1 support. Of course it would have 25% fewer shaders (192 total, instead of 256), but it would still be squarely at the GTX 260/HD 4870 level of performance, while using less power and being cheaper to manufacture.
At $150 with a single power connector, nVidia's badge (TWIMTBP, PhysX) and DX10.1, it would be a good match for the HD 5770, certainly until nVidia *actually* decides to create a mid-range 40nm part.
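For rough context, the arithmetic behind the proposal can be sketched as below. The per-chip shader counts are the commonly quoted specs; the dual-card TDP is a worst-case guess (a straight doubling of the single-card figure), not a real product number:

```python
# Back-of-envelope specs for a hypothetical dual-GT 240 card.
# Per-chip numbers are the commonly quoted ones; everything derived
# from them here is speculation, not a real product spec.

GT240_SHADERS = 96      # shader processors per GT 240 chip
GTS250_SHADERS = 128    # shader processors per GTS 250 (G92b) chip
GT240_TDP_W = 69        # commonly quoted single-card TDP, watts

dual_240_shaders = 2 * GT240_SHADERS                      # 192
dual_250_shaders = 2 * GTS250_SHADERS                     # 256
shader_deficit = 1 - dual_240_shaders / dual_250_shaders  # 0.25

# A dual card rarely draws exactly double the single-card TDP
# (shared board, binned chips), but even a straight doubling stays
# under the 150 W budget of the PCIe slot (75 W) plus one 6-pin (75 W).
dual_240_tdp_upper = 2 * GT240_TDP_W                      # 138 W worst case

print(f"dual GT 240: {dual_240_shaders} shaders "
      f"({shader_deficit:.0%} fewer than a dual GTS 250), "
      f"<= {dual_240_tdp_upper} W")
```

This is why a single 6-pin connector is plausible: even the pessimistic doubled TDP lands below the 150 W combined slot-plus-6-pin budget.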
lol spdif
DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis
Inability, you mean. YOU try that resolution on a 4650!
lower tdp, yes
only one 6pin connector, yes, but who cares?
clocks easily adjusted to gts250 levels... i'm not sure, man... nvidia's 40nm parts don't clock that well...
dx10.1 support, yes, but how useful is that?
gtx260/4870 perf level, yes, but it won't oc...
cheaper to make, i'm not sure... 40nm costs 30% more than 55nm and has worse yields, so i think it's actually about the same, and it clocks slightly worse, so...
you get dx10.1 and gddr5 and lower tdps, but lose some clocks AND 30% of the gpu's performance... tough trade... i'd go for a 250X2 over a 240X2 every day... this card will only last a year or two anyways... for that time you don't need 10.1 support. a 240X2 would support 10.1 but would probably be too slow to render games in that mode, so... heh...
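The "about the same cost" point can be made concrete with a cost-per-good-die estimate. All the numbers below are illustrative assumptions except the ~30% wafer-price premium, which is the figure quoted in the post above:

```python
# Rough cost-per-good-die comparison, 55nm vs 40nm.
# Wafer prices, die counts, and yields here are illustrative guesses;
# only the "40nm wafers cost ~30% more" ratio comes from the discussion.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_frac):
    """Cost of one working die = wafer cost / working dies per wafer."""
    return wafer_cost / (dies_per_wafer * yield_frac)

WAFER_55NM = 5000.0            # assumed 55nm wafer price, USD
WAFER_40NM = WAFER_55NM * 1.3  # ~30% premium for 40nm (from the post)

# A 40nm shrink of the same design fits more candidate dies per wafer
# (area scales roughly with (55/40)^2 ~ 1.9x), but at worse yield.
dies_55, yield_55 = 200, 0.80
dies_40, yield_40 = int(200 * 1.9), 0.50   # assumed early-40nm yield

c55 = cost_per_good_die(WAFER_55NM, dies_55, yield_55)
c40 = cost_per_good_die(WAFER_40NM, dies_40, yield_40)
print(f"55nm: ${c55:.2f}/die, 40nm: ${c40:.2f}/die")
```

Under these assumed numbers the two processes land within about 10% of each other per working die, which is consistent with the "about the same" claim: the density gain of 40nm is largely eaten by the wafer premium and the immature yield.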
@saaya: Indeed, the low yields of the 40nm process are a problem which may or may not be fixed in the coming months. Still, a GT 240 x2 would be a good card for the *mid range*. I have to agree, though, that a GTS 250 x2 is quite a bit more powerful, but that comes at the cost of a much more complex chip, which both consumes too much for the performance it gives and is not exactly cheap to manufacture...
Supposedly, G92 chips are easier to come by than GT200 ones, which also says something about the "success" that GT200 had as a chip...
Galaxy GTS250 X2
http://en.expreview.com/2010/03/10/g...sion/6851.html
Hmm, so it would appear that if this gets launched around the GF100/Fermi debut, you will get a nice tree:
GTX 480
GTX 470
GTS 250 X2
The result: G92 lives on even after GF100 has shown up.
Coming Soon
Murray Walker: "And there are flames coming from the back of Prost's McLaren as he enters the Swimming Pool."
James Hunt: "Well, that should put them out then."
nvidia did that... but they cancelled it... they tried to shrink gt200 to 40nm and add 10.1 and gddr5 support, and they also tried to shrink G92... but it didn't work...
all they managed to get out was the small 40nm 10.1 chips... the biggest one is 30% smaller than G92... nobody knows whatever happened to the rest... i guess nvidia figured it would be too late for 10.1 parts, especially with the 40nm delays, so they skipped those and went for dx11 fermi right away?
no idea... charlie posted that their shrinks failed because G92 and gt200 were originally 80nm and 65nm designs, and shrinking them past 55nm isn't possible; you basically have to redesign them because each major node step uses different transistors and you have to follow different rules... so instead of taking g92 and gt200 apart and putting them back together, they probably figured why not pull ahead gt300 instead for full dx11 and improved perf...
then again, they DID take G92 apart and put it back together and made some 10.1 40nm parts...
so they managed to do it... they definitely COULD have used more blocks and made a G92-style 10.1 40nm chip... or even a gt200-sized one... but they didn't... why? who knows...
you'd think bad yields... but then it doesn't make sense to INSTEAD come up with a much bigger gt300... which suffers even worse from bad yields... it's a mystery to me...
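The yield intuition in that last point can be sketched with the standard Poisson die-yield model, where yield falls off exponentially with die area. The defect density and die areas below are assumed round numbers for illustration, not foundry data:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.005   # assumed defects per mm^2 on an immature 40nm process

small_die = 140   # roughly GT 240-class die area, mm^2 (approximate)
big_die = 530     # roughly GF100-class die area, mm^2 (approximate)

y_small = poisson_yield(small_die, D)
y_big = poisson_yield(big_die, D)
print(f"~{small_die} mm^2 die: {y_small:.0%} yield, "
      f"~{big_die} mm^2 die: {y_big:.0%} yield")
```

The exponential means the penalty is far worse than proportional: under these assumptions, a die roughly 3.8x larger yields about 7x fewer good chips per unit area, which is exactly why building a much bigger chip on a defect-prone process looks puzzling.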
heheheh, nice smiley
hmmm, so there are several 250x2 cards?
the evga g92+gt200 hybrid was most likely cooked up by evga AND nvidia...
i'm starting to think the dual g92 card is actually an nvidia design as well... and every partner that wants to can use it...
Oh man, it will be another 8600 GX2, which I've written about. Total fail. If you want this kind of performance, then go for a Radeon HD 5770/HD 5830. You get the performance without worrying whether it will actually scale well or badly due to SLI. Some games even perform as if the dual-core card has only one core! You'd get the same or lower FPS than on a 9800 GT in that case.
G84 was a profoundly bad and wasteful architecture; putting any number of them on a single card would still suck. I actually find the GT 240 one of the best chips nVidia has produced lately: it runs cool and it's quite powerful (almost 9800 GT level of performance at almost half the consumption).
As for SLI support, I find it excellent lately; my GTX 295 behaves like a single card in practically every game. SLI is mature, better on so many levels than in the G8x days. The RUSE beta I played recently, for example, gives almost 100% scaling at medium resolutions, with no hiccups, and it's still in beta.
Even if they got everything else wrong, nVidia lately produces excellent drivers with almost universal game support. The games that do not support SLI are probably too old or too simple to make use of the extra juice in the first place...
Workstation:
3960X | 32GB G.Skill 2133 | Asus Rampage IV Extreme
3*EVGA GTX580 HC2 3GB | 3*Dell U3011
4*Crucial M4 256GB R0 | 6*3TB WD Green R6
Areca 1680ix-24 + 4GB | 2*Pioneer BDR-205 | Enermax Plat 1500W
Internal W/C | PC-P80 | G19 | G700 | G27
Desktop Audio:
Squeezebox Duet | Beresford TC-7520 Caiman modded | NAD M3 | MA RX8 | HD650 | ATH-ES7
Man Cave:
PT-AT5000E | TXP65VT30 | PR-SC5509 | PA-MC5500 | MA GX300*2, GXFX*4, GXC350 | 2*BK Monolith+
Gaming on the go:
Alienware M18x
i7 2920XM | 16GB DDR3 1600
2*6990 | WLED 1080P
2*Crucial M4 256GB | BD-RW
BT 375 | Intel 6300 | 330W PSU
2011 Audi R8 V10 Ibis White ABT Tuned - 600HP
you mean they massage your back and caress your thighs? :S
sorry, couldnt resist ^^
i'd still prefer a gts250... while sli support is much better, it's still not perfect; some games scale nicely with sli, but there are still glitches and some games stutter... so the ability to fall back to a single G92 at 750mhz+ is very welcome, at least by me...
Interestingly enough, its TDP would have been lower than a single GTS 250's, while offering 50% more shader power.
Anyway, obviously my "recommendation" is not even that; companies do not seem too inclined in that direction (the bad 40nm yields probably also play some role), so my idea is of no consequence anyhow.
The grievances that many of you have about SLI, though, I think are unfounded; nowadays you'd find more glitches in any given game for unrelated reasons than because of SLI. Also, I can think of no modern (post-2007) AAA title that has no SLI support...
Mass Effect 1, for example... it had SLI support, but in one map of the game FPS would be around 10 with SLI enabled (60 with it disabled).
That map turned out to be where the boss battle took place... and even months after the game's release, it still wasn't fixed.
INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"
Has anyone really been far even as decided to use even go want to do look more like?
A 250x2 should be almost as fast as a 280, which the last time I looked is faster than a 4890.
The 5770 is closer to the GTX 260 in performance than to a single GTS 250.
Besides, is it not true that the 4870 performed close to the 9800 GX2, and the 5770 performs close to the 4870?
This would be a 9800 GX2 with faster clocks but also higher consumption than the 5770, not to mention it lacks DX11 support.