Aaaaah, someone finally understands why I'm fed up with ATI :)
Before you call me a fanboy, I'm just as sick of waiting for Fermi, so both companies have a lot to prove to me before I upgrade again :shrug:
Why upgrade hardware in the first place if it's handling what you need it to? ONE of my HD4830s handles all the games I play at max settings @ 1680*1050, but I still have two of them in CF.
Power is good :D
Interestingly enough, I would prefer it if Zotac made a GT 240 x2 card. Its TDP would be under 150W, so it would only need one 6-pin cable. The clocks could easily be pushed to GTS 250 levels (the GT 240 has that headroom), it would be cheap to manufacture (thanks to the 40nm GPUs), and it would also have DX10.1 support. Of course it would have 25% fewer shaders (192 total, instead of 256), but it would still sit squarely at the GTX260/HD4870 level of performance, while using less power and being cheaper to build.
At $150 with a single power connector, nVidia's badge (TWIMTBP, PhysX) and DX10.1, it would be a good match for the HD5770, certainly until nVidia (*actually*) decides to create a mid-range 40nm part.
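The shader and TDP arithmetic above can be sanity-checked with a quick back-of-envelope script. The spec-sheet numbers below are the commonly quoted ones (96 SPs / ~69W for the GT 240, 128 SPs / ~145W for the GTS 250); treat them as assumptions, not measurements, and note it ignores SLI scaling losses entirely:

```python
# Back-of-envelope comparison: hypothetical GT 240 x2 vs a GTS 250 x2.
# Spec-sheet numbers assumed, not measured.
GT240 = {"shaders": 96, "tdp_w": 69}     # 40nm GT215
GTS250 = {"shaders": 128, "tdp_w": 145}  # 55nm G92b

def dual(card):
    """Naive doubling: two GPUs on one board, ignoring SLI scaling losses."""
    return {"shaders": 2 * card["shaders"], "tdp_w": 2 * card["tdp_w"]}

gt240x2 = dual(GT240)
gts250x2 = dual(GTS250)

deficit = 1 - gt240x2["shaders"] / gts250x2["shaders"]
print(f"GT 240 x2: {gt240x2['shaders']} SPs, ~{gt240x2['tdp_w']} W board TDP")
print(f"GTS 250 x2: {gts250x2['shaders']} SPs, ~{gts250x2['tdp_w']} W board TDP")
print(f"shader deficit: {deficit:.0%}")  # 25% fewer SPs, not 33%
```

Even with naive TDP doubling (~138W, before any clock bumps), the GT 240 x2 lands under the ~150W that one 6-pin connector plus the slot can supply, which is where the single-cable claim comes from.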
lol spdif
INability you mean, YOU try that resolution on a 4650 :rofl:
lower tdp, yes
only one 6pin connector, yes, but who cares? :D
clocks easily adjusted to gts250 levels... im not sure man... nvidias 40nm parts dont clock that well...
dx10.1 support, yes, but how useful is that?
gtx260/4870 perf level, yes but it wont oc...
cheaper to make, im not sure... 40nm costs 30% more than 55nm and has worse yields, so i think its actually about the same, and clocks slightly worse, so...
you get dx10.1 and gddr5 and lower tdps but lose some clocks AND 30% of the gpus performance... tough trade... id go for a 250X2 over a 240X2 every day... this card will only last a year or two anyways... for that time you dont need 10.1 support. a 240X2 would support 10.1 but would probably be too slow to render games in that mode, so... heh... :D
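The "cheaper to make, im not sure" point can be sketched numerically. A minimal, purely illustrative cost-per-good-die model follows: the +30% wafer cost figure is from the post above, while the die areas and yield fractions are made-up assumptions chosen only to show how a smaller die can cancel out a pricier, lower-yielding wafer:

```python
# Rough cost-per-good-die sketch for the "40nm is about the same" argument.
# All numbers are illustrative assumptions, not real foundry figures.

def cost_per_good_die(wafer_cost, die_area_mm2, yield_frac, wafer_area_mm2=70686):
    # 300mm wafer ~ 70686 mm^2; edge loss ignored for this rough estimate
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    return wafer_cost / (dies_per_wafer * yield_frac)

# 55nm wafer cost normalized to 1.0; 40nm = 1.3 (the "+30%" figure above).
# Assume the 40nm shrink is ~55-60% of the die area, but yields worse.
c55 = cost_per_good_die(wafer_cost=1.0, die_area_mm2=230, yield_frac=0.80)
c40 = cost_per_good_die(wafer_cost=1.3, die_area_mm2=130, yield_frac=0.55)
print(f"55nm: {c55:.5f}  40nm: {c40:.5f}  ratio: {c40/c55:.2f}")
# ratio comes out roughly 1.07 with these inputs: "about the same" per good die
```

With these (invented) inputs the smaller die almost exactly offsets the dearer wafer and worse yield, which is the shape of the trade-off being argued, even if the real figures differ.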
@saaya: Indeed, the low yields of the 40nm process are a problem that may or may not be fixed in the coming months. Still, a GT 240 x2 would be a good card for the *mid range*. I have to agree that a GTS 250 x2 is considerably more powerful, but that comes at the cost of a much more complex chip, which both consumes too much for the performance it gives and isn't exactly cheap to manufacture...
Supposedly, G92 chips are easier to come by than GT200 ones, which also says something about the "success" GT200 had as a chip :rolleyes: ...
Galaxy GTS250 X2
http://en.expreview.com/2010/03/10/g...sion/6851.html
Hmm, so it would appear that if this gets launched around GF100/Fermi, you will get a nice tree:
GTX 480
GTX 470
GTS 250 X2
The result: G92 lives on even after GF100 has shown up.
it's like GTS 500 http://img695.imageshack.us/img695/932/awesomet.png
nvidia did that... but they cancelled it... they tried to shrink gt200 to 40nm and add 10.1 and gddr5 support, and they also tried to shrink G92... but it didnt work...
all they managed to get out was the small 40nm 10.1 chips... the biggest one is 30% smaller than G92... nobody knows what ever happened to the rest... i guess nvidia figured it would be too late for 10.1 parts, especially with the 40nm delays, so they skipped those and went for dx11 fermi right away?
no idea... charlie posted that their shrinks failed because G92 and gt200 were originally 80nm and 65nm, and shrinking them past 55nm isnt possible, you basically have to redesign them cause each major node step uses different transistors and you have to follow different rules... so instead of taking g92 and gt200 apart and putting it back together, they probably figured why not pull ahead gt300 instead for full dx11 and improved perf...
then again, they DID take G92 apart and put it back together and made some 10.1 40nm parts...
so they managed to do it... they def COULD have used more blocks and make a G92 style 10.1 40nm chip... or even gt200 sized... but they didnt... but why? who knows...
youd think bad yields... but then it doesnt make sense to INSTEAD come up with a much bigger gt300... which suffers even worse from bad yields... its a mystery to me... :shrug:
heheheh nice smiley :D
hmmm so there are several 250x2 cards?
the ebga g92 gt200 hybrid was most likely cooked up by evga AND nvidia...
im starting to think the dual g92 card is actually an nvidia design as well... and every partner that wants to, can use it...
Oh man, it will be another 8600 GX2 like the one I've written about :rofl: Total fail. If you want this kind of performance, then go for a Radeon HD 5770/HD 5830. You get the performance without worrying whether SLI will actually scale well or badly. Some games even perform as if the dual-GPU card had only one core! You'd get the same or lower FPS than on a 9800 GT in that case. :down:
G84 was a profoundly bad and wasteful architecture; putting any number of them on a single card would still suck. I actually find the GT 240 one of the best chips nVidia has produced lately: it runs cool and it's quite powerful (almost 9800GT level of performance at almost half the consumption).
As for SLI support, I find it excellent lately; my GTX 295 behaves like a single card in any number of games. SLI is mature, better on so many levels than in the G8x days. The "RUSE Beta" I played recently -for example- gives almost 100% scaling at medium resolutions -with no hiccups- and it's still in the Beta phase.
Even if they got everything else wrong, nVidia has lately been producing excellent drivers with -almost- universal support. The games that do not support SLI are probably too old/simple to make use of the extra juice in the first place...
you mean they massage your back and caress your thighs? :S
:lol:
sorry, couldnt resist ^^
id still prefer a gts250... while sli support is much better, its still not perfect, and some games scale nicely with sli, but there are still glitches and some games stutter... so the ability to fall back to a single G92 at 750mhz+ is very welcome, at least by me... :)
Interestingly enough, its TDP would have been lower than a single GTS 250's, while offering 50% more shader power.
Anyway, it's obviously not even a "recommendation"; companies don't seem inclined in that direction (the bad 40nm yields probably play a role too), so my idea is of no consequence anyhow.
The grievances that many of you have about SLI, though, I think are unfounded; nowadays you'd find more glitches in any given game for unrelated reasons than because of SLI. Also, I can think of no modern (post-2007) AAA title without SLI support...
mass effect 1, for example... it had SLI support but in one map of the game FPS would be around 10 when you enabled SLI (60 when you disabled it).
That map turned out to be where the boss battle took place... And even months after the game's release, it still wasn't fixed.
a 250x2 should be almost as fast as a 280, which the last time i looked is faster than a 4890
http://media.bestofmicro.com/3/H/226...%20No%20AA.png
The 5770 is closer to a GTX 260 in performance than to a single GTS 250.
Besides, is it not true that the 4870 performed close to the 9800 GX2, and the 5770 performs close to the 4870?
This is a 9800 GX2 with faster clocks, but also higher consumption than the 5770, not to mention it lacks DX11 support.