What our friend Mr. Oslo needs to do is just stop playing the victim. It's that easy. :yepp:
Oh, we're tolerant... on our good days ;)
So how bad are my odds of it making the EVGA Step-Up window, which expires for me in 8 days?
Golden ... everyone is trying to guess if this is going to be 2x580 and a huge power/heat hog .... or if it will be 560x2 ... with more shaders... and more memory ... or ... w8 ... a GTX 560 with more shaders and more memory ... damn, isn't that an under-clocked GTX580? ehehehhe
We really don't know ... I was more into the 560x2 ... lol. I don't really like to have huge hungry animals in my watercooling loop ... (huge as in ... too much power required ... too much heat ...) ... so I will try to believe Nvidia is not going to create a Halley's Comet :D
I mean ... I'm not saying that I won't get it ... my 295 will have to be up to the task for 2 more months, because... I believe that only in March we'll see some good boards :)
I think it's going to be GF110 based, simply because Nvidia doesn't cater very well to 2nd place, and consider the GPUs AMD is using for the 6990.
Nvidia has the go-big-or-go-home mentality, and if they felt they couldn't nab the number one spot I don't think they would bother releasing an ultra-high-end product, considering it's a very low-volume segment.
but the high heat and leakage and the throttling involved with the 580 ... wouldn't that limit the performance too much???
some "Xtreme Mentor" you are... :D
you're telling us that according to your research a 560 is the best card for YOU... and then conclude that the 560 is THE best card and the 6950 1GB is dead, and that's a fact...
you know the 560 is an awesome card, and I'd probably get one over a 6950 1GB, but that's my opinion, not a fact everybody has to accept, and if they don't I call them noobs... and when they get upset I cry "foul" and complain about why they are making it personal... and after all this, calling yourself a mentor and trying to lecture people here on the forums... chillax man, we are all on the same side here... we are all looking for the best price/perf hardware for us, it's just that we all have different experiences, preferences and requirements and live in different regions of the world, so there is no one truth... :peace:
and even if there WAS, arguing over it won't change a thing... it's just a waste of time...
try to make your case instead and tell people how good A or B is, and how much you like it, and you might actually convince somebody and change their mind...
http://bbs.expreview.com/thread-39467-1-1.html
Hopefully this info is legit. Sorry it's in Chinese, I can't really read it myself either, but this source says there are indeed two GF110 chips.
I believe those were the same pics leaked long ago as a prototype for something else, 375W and 8+8 pin isn't going to fly in retail
Even a 375 watt card can make it to retail. Simply look at cards like the Sapphire 4GB 5970, the 4GB Ares or the GTX 295 Mars edition, all of which can potentially consume more than 400 watts according to TechPowerUp. These cards will be made because the vendors want them. Expect some heavy-duty power regulation though, unless they make the cooler akin to something like the Ares.
Yeah, maybe OC on these babies will be crap due to power regulations ... i don't know... but it will be a b-e-a-s-t ... yes sir ... if they launch it with those specs :)
http://tpucdn.com/reviews/ASUS/GeFor...power_peak.gif
That's peak realistic usage, not a Furmark number, but both companies have ways to clamp that.
If we take into account that approximately 25W could be saved because some components don't have to be doubled when building an X2 card, ATI can use full 6970 chips and stay in the power envelope, while Nvidia would need 400W. Both companies can use cherry-picked chips, so using lower default voltage.
Unlimited Furmark would make them blow though :P. I expect ATI to utilise PowerTune to a great extent to prevent that; on the Nvidia side probably some hardware method.
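To put rough numbers on that "~25W saved by not doubling components" point, here's a quick back-of-envelope sketch (Python; the 190W peak-gaming figure and the clock/voltage scale factors are just my assumptions for illustration, not measurements):

```python
# Back-of-envelope dual-GPU board power estimate.
# All inputs are illustrative assumptions, not measured data.

def dual_gpu_power(single_card_peak_w, shared_w=25.0, clock_scale=1.0, volt_scale=1.0):
    """Estimate an X2 board's peak power.

    single_card_peak_w: peak gaming power of one single-GPU card
    shared_w: parts that don't get doubled (fan, some VRM/IO) -- assumed ~25W
    clock_scale, volt_scale: how far the X2 bins are turned down vs. the single card
    Uses the usual rough dynamic-power scaling P ~ f * V^2.
    """
    per_gpu = (single_card_peak_w - shared_w) * clock_scale * (volt_scale ** 2)
    return 2 * per_gpu + shared_w

# e.g. assume ~190W peak gaming for a full 6970-class card:
print(dual_gpu_power(190))                                      # two full chips, no trimming: ~355W
print(dual_gpu_power(190, clock_scale=0.93, volt_scale=0.95))   # mild binning/downclock: ~302W
```

Small clock and voltage cuts on binned chips are enough to pull a 350W+ estimate back toward the 300W ceiling, which is the whole argument here.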
I did not change the meaning of any of your lines; in order to do that I would have to start thinking on the same level as you.
Or about the same level, I mean. Please go back and read carefully and slowly what you said. Of course my reply
was sarcastic, sorry you didn't get it. Just guessing by your reply here.
I believe you must be the ONLY person in the whole world speculating AMD might be scared to release the HD 6990 because of the rumored Nvidia dual GPU card. Some strange stuff you must be smoking.
Where did I say delay is a good move? Anyway the HD 6970 & 6950 were delayed 3 weeks, therefore the HD 6990 also needs to be delayed since it's using binned chips. The new Q1 2011 release time was announced by AMD long before this rumored Nvidia dual GPU card, so the delay has no connection, as you are trying to speculate.
Quote:
Are you saying it was a good move to delay the 6990? Gotta be a limit to what you can accept and defend, even as a hard core AMM fan.
For example the HD 5970 was released 2 months after the HD 5870. In my opinion AMD should have waited even longer to build up the inventory, since the card was sold out the first day; guess they underestimated the sales.
Who the heck is a "hard core AMM fan"? Is this something about :banana::banana::banana::banana:? :)
"AMD has chickened out" is complete nonsense, why don't you think twice before you post, same goes for the "new" redesigned/re-freshed . I know AMD is good :), but nobody can redesigned or re-fresh Graphic Card in 3 weeks.Quote:
I'm afraid the reason wasn't so simple as you are trying to explain. I said "AMD has chickened out" already last year when they announced that nasty delay. They had to go back to the drawing board and make a "new" redesigned/re-freshed (or whatever you want to call it).
Now where the heck did you get the idea that the HD 6990 "got redesigned/re-freshed"?
Quote:
Now the big question is: can the 6990 (which got redesigned/re-freshed before the GTX560) still be competitive against nVidia's next double-GPU? I don't think so, because nVidia has several options to match/beat it right at the day of release.
I have no idea which card is going to be faster and I'm sure not going to speculate about it based on some rumors, but since you're such an expert I leave it to you. In any case one of the cards is going to be the second fastest graphics card on the market, not so bad if we like to see any competition.
I'm really getting scared about the future. The only advantage I have is that I don't believe graphics cards are about religion, even less fanatical religion.
Quote:
That nasty delay has actually killed AMD's wild card, it can't do anything good any longer. You may mean otherwise, but the future comes fast enough.
AMD doesn't have any wild card, so nothing is going to be killed except in your head. AMD's strategy for the flagship card is to use dual GPUs; nothing will change this round.
Sometimes when I read your posts I wonder what you smoke. I also wonder why you need to turn every post into AMD vs Nvidia, ALWAYS being negative about AMD, and have the guts to call others AMD fanboys when they don't agree with your highly speculative posts that aren't even based on rumors. You just make up your own rumors without thinking twice about it, if at all.
wow, so much text for such a speculative subject
Why would anyone care about the power, heat, or stock clock speed? Only enthusiasts would purchase this card; dollar-for-performance, the GTX 580 would likely remain the Nvidia card for mainstream high-performance buyers. The additional cost of a 1kW+ power supply to feed the card would also limit how many people would seriously consider such a card.
Who is going to purchase a dual GF110 and leave it stock?
Exactly how much PCB space is left? Just wanted to point out that that is one full PCB. But yes, it's probably a dual 570 unless you clock down the chips severely.
Probably everyone, if it ends up with VRMs like the 570's.
That depends on what chips and cooling solution Nvidia has planned. That is if this card is real though and not something fake.
Exactly. If you are the kind of person that looks at power usage when buying a video card, then both the GTX590 and the ATI 6990 aren't for you. Both I would think will be at or over the 300Watt limit.
If I was buying these cards then, if anything, the more power it consumes the better, within reason. Basically the more power these things consume, the closer they will be to original spec. A 300 watt limit is going to castrate AMD just as much as Nvidia, as the GTX 570 has similar performance and power usage.
The question is not who will buy them, but who will review them and produce them. If you end up with 375W and chips that run at 100°C on average while gaming, you're in trouble. Not to mention the additional cost of a beastly cooler, all the power phases, the PCB layers needed to support those loads, etc. All of this has a cost, a cost that has to go into the final price. I have seen a lot of 295 or 5970 users who don't even know a **** about hardware buying those cards; they wanted the fastest and the highest price. They have the money, they buy it (there are plenty of people like that on "classic" hardware forums).
If they use GF114 they can maintain fast clocks and a high OC margin... if they use GF110 they will need to reduce the core and memory speed (less than 700MHz for the core...). This is what Zallbard was asking about performance, and it's where the real fight between the 6990 and this one will be. If the card performs around or under 570 level, 2x 6970 at 6950 clock speeds (which gets close to 2x 580 SLI without problem) will surely be enough... All in all, this will be a nice fight between both cards... I wouldn't be surprised to see EVGA quickly release a Superclocked 590, and why not Sapphire/Asus releasing a faster 6990 in response (can be funny to watch, as this is not really a mainstream market...).
AIBs like EVGA and Asus can pass the limit, no problem (Ares etc.); they provide better cooling (including watercooling, as with the GTX295). But Nvidia and AMD can't... they need to comply with certain rules (and one is the PCI Express 300W limit).
Hey guys, in my community they are saying that the GTX 590 will be really limited because Nvidia is doing a cherry-picking process to use the best GF110 chips, and because of this the cost will blow up any pocket.
Can someone confirm this please, or is it just a stupid rumour?
Seems like it will end up like that. I was trying to find a decent source saying that very same thing but nope, only an obscure third-hand blog from my country, and someone posted it in my community. I believe I should warn them to take it with a grain of salt.
Besides that, have you heard the same rumour?
I believe it originates from here, but I wouldn't exactly trust the source.
Can't believe how many people believe NVIDIA will launch a dual gf110 card... wake up guys! Dual 560 is plenty so what's the point? NVIDIA doesn't do monster cards, Asus and sapphire etc do, and asus is kinda hesitating cause they don't make that much money on those cards... the volume is tiny...
It's hard to keep being quiet and watch you people arguing about it when you know what GPUs do power these cards :D
You should know Nvidia's mentality: winning comes first; power consumption, heat & price come second.
If a dual 560 were plenty for them, they would have launched a dual-460-based card, since they had ample time to bring such a card to market.
As you mention, volume is tiny at the ultra high end, so why pull your punches with a dual 560? The people buying these cards want monster cards and are willing to pay the monster price. You don't win this tiny segment by offering products that are merely enough; you win it by offering products that are over the top, "OMG I can't believe they did it", and charge for it.
This segment is about bragging rights and marketing for the manufacturer for delivering the single fastest graphics adapter which should help sell the brand in all segments.
Last week we reported about graphics circuit maker NVIDIA working on the launch of its new flagship GeForce GTX 590. With 1024 CUDA cores and dual GF110 graphics circuits it is an extreme graphics card demanding a lot from the other components, where NVIDIA only uses the finest of its GPU samples and availability will be limited.
NVIDIA had a successful launch of GeForce GTX 580 where it knocked down AMD with the new GF110 GPU, which with high performance and relatively reasonable power consumption took over as the king of the market. AMD couldn't counter with a single Cayman GPU, but it still has the fastest card around with its dual-GPU Radeon HD 5970.
AMD's plan was to maintain this trump card, mostly for PR, with the launch of Radeon HD 6990 "Antilles" sporting dual Cayman circuits, but NVIDIA's plan is to crash the party through an unsuspected launch of a Fermi-based graphics card with dual GPUs.
As we revealed, we will be dealing with fully featured GF110 circuits with 512 active CUDA cores each. To make sure the graphics card won't go above any power consumption specifications NVIDIA has had to turn down both voltages and clock frequencies, but it has also started sorting out the circuits at TSMC, where the finest GF110 samples are removed from the belt and put in line for the flagship GeForce GTX 590.
NVIDIA wants the GF110 circuits with the least leakage to maximize the clock frequencies without power consumption skyrocketing. It comes as no surprise that GeForce GTX 590 will be in limited supply. NVIDIA has no hopes of making any real money from this monster card, but it sees an opening to whip AMD and launch what it could call the fastest card in the world - no holds barred.
NVIDIA's reasoning is most likely that the positive PR is going to boost sales overall.
http://www.nordichardware.com/news/7...e-gtx-590.html
No 460 dual card cause it didn't make sense compared to a 480 and 5970
In the past NVIDIA never used their top of the line gpus on dual gpu cards... not a single time...
If they use GF110 I'd be surprised, but it might actually work; dual 580 though... that would be a first for Nvidia and I just don't see it happening, even with super-binned parts.
You are joking, right?
NordicHardware is a very respected site, and it all makes perfect sense that it's even possible to use the 580 core and not break 400W. I would agree if it was SA, Fud, The Inq or another known crap site where the author happens to be the familiar Charlie we love to hate, who has made it his mission to take a crap on everything Nvidia does.
uh wut?
8800gtx, 9800GX2, GTX 280, GTX 295, GTX 480.... Nvidia IS monster cards... sure board partners do bigger ones like the Asus MARS or ARES but still the GTX 295 when it came out made the 4870X2 look like a small little thing in every way, performance, power consumption, heat output, noise...
You forgot RMA in your list :rofl: (it's a joke, don't take it badly, and it's surely far from true.) But that's what I see coming with 2 GF110s unless they are clocked at 400MHz each.
Seriously, can you imagine the size of this monster with 2x 550mm2 chips, the heat, the power consumption, the massive cooling system... and the price to produce and sell it? I can't imagine Nvidia wants to sell it at production cost and make no money on it, or their PR is going crazy. Let's imagine it beats the 6990... at what cost? Even if it beats the 6990, it's a win-win for AMD, because AMD's card will cost nearly the same to produce as a dual 560... far from what a dual GF110 can cost. With both delivering high-performance results, AMD adjusts its price and wins on all fronts.
triple-slot cooling fan from nvidia .. and quad slot for whatever special omgwtfbbq special edition oc'd dual 580 aka 590???
i can see it now ... it costs over 900 ... takes up so much space .. gives off so much heat .. and needs a nuclear reactor to power up
i really like nordichardware, but... they have, a few times, been way off in the past when it came to vga stuff...
wouldn't even blame them, probably just bad sources, or sources seeding wrong info on purpose...
well, let me do some googling... :D
all these percentage numbers are theoretical specs, not actual performance!
but it should be around that range in actual perf as well on average...
7950gx2 was based on the 7950, not the 7900GTX 512, the fastest card at that time. compared to the latter it came downclocked by 150mhz on the core and 400mhz on the memory or 23% and 25%
a 7950gx2 consumed only 25W more than a 7900GTX
it's only slightly above 100W, but nvidia didn't go higher than that since heatsinks back then weren't that great (no heatpipes) and the PSUs being seeded weren't that beefy...
9800gx2 was based on the 9800gt, not the 8800GTX/8800Ultra.
compared to those it came with 300mhz/666mhz lower shader clocks and 200mhz faster/160mhz slower memory clocks, or 20%/44% and +11%/-8%, AND the 9800gx2 only had 16 ROPs per gpu while an 8800GTX/Ultra had 24, so that's a third fewer ROPs per gpu for the gx2.
the 9800gx2 wasn't even based on the fastest G92 chip; the 9800gtx and 9800gtx+ were notably faster than the 9800gt bin nvidia used.
the 9800gx2 was more of a modern dual gpu card, hitting TDPs of over 200W, but it was still not limited by this... it ran very hot, but that's cause nvidia didn't spend as much on the heatsink as it could have.
gtx295 was based on the same chip as the 280/285 but came with the bin/gpu config of a 275 and clocked at the same speed as a 260. it came with only 4 ROPs less than the highest gpu config, the 280/285, only a ~13% cut down, but it came clocked 25mhz/75mhz lower on the core and shader and 200mhz/500mhz lower on the memory, or 5%/11% and 10%/25% respectively.
this made it the fastest dual gpu card so far relative to the fastest single gpu card, as it wasn't as cut down as previous dual gpu cards. its bin/config was still around 20% slower than the highest end gpu bin/config at its time though.
lets wrap it up:
7950gx2 - based on mainstream/highend bin which was around 40% slower than nvidias highest end sku at the time
9800gx2 - based on mainstream/highend gpu+bin which was around 30% slower than nvidias highest end sku at the time
gtx295 - based on mainstream/highend gpu+bin which was around 20% slower than nvidias highest end sku at the time
so THERE! :D
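quick sanity check on those percentages, using the 7950gx2 example (the 650/1600 MHz vs 500/1200 MHz clocks are from memory, so take them with a grain of salt):

```python
# tiny helper for the clock-cut percentages above
def pct_cut(full, cut):
    """percent reduction going from the full-fat part to the dual-GPU bin"""
    return 100.0 * (full - cut) / full

# 7950 GX2 vs 7900 GTX 512 (clocks assumed: 650/1600 MHz vs 500/1200 MHz)
print(f"core: {pct_cut(650, 500):.0f}%")    # ~23%
print(f"mem:  {pct_cut(1600, 1200):.0f}%")  # 25%
```

same math applies to the 9800gx2 and gtx295 numbers above, the cut just gets smaller each generation...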
NO! :P
nvidia NEVER did a monster card using their best and fastest gpus on a dual gpu card. not a single time!
and now nvidia has the hottest gpu in their company history on their hands, one that is close to the tdp spec for an add-in card all on its own, and you want to tell me they will do a dual gpu card based on that, with all the bells and whistles.... come on man :D
dual gf110 is definitely possible! but i'd be surprised if they did that... dual gf114 costs them less, works on simpler pcbs, and can be tweaked to around the same performance within a 300W envelope as a dual gf110 card can (i'm guessing, but look at the perf/watt numbers), so what's the point in using a dual gf110...?
the only point would be for enthusiasts...
what i'd LOVE nvidia to do, is something like ati...
make it a dual gf110 card, use crippled 110 chips and clock them down, BUT allow us to unlock those gpus to the full 512sp and overclock them... :slobber:
now that would be an EPIC card!
but i highly doubt nvidia would do that...
they want to have the fastest stock card... and for that gf114 x 2 is just as good as dual gf110... actually dual gf110 is worse for them, cause gf114 is cheaper
they could still go for dual gf110 because, while at stock it would probably be as fast as dual gf114 when crippled to fit in 300W, when overclocked a dual gf110 would probably be a tad faster than a dual gf114 card, even if the gf110s are crippled gpu-wise and can't be unlocked...
Lol, Nvidia are the kings of monster cards. That's the whole point of why ppl think they can't fit 2 GPUs on one card - cause they are monstrous.
Even if they put 2x 570 instead of 2x 580, that is STILL a monster card with regards to die size, etc...
not really... the first nvidia dual gpu card was from asus, based on 6800GT chips, and was bigger than any dual gpu card to date...
since then asus has created several REAL monster cards... all nvidia dual gpu cards don't look monstrous at all to me...
they are SLIGHTLY longer than normal cards, that's all... asus dual gpu cards are HUGE and have massive heatsinks and the best of the best components...
while nvidia tends to use cut-down gpus and mainstream components, not the best of the best... i wouldn't call that a monster card, but that's just my opinion... :D
"monster" is a relative expression. This double-GPU (whatever it is) doesn't need to be a monster, it only needs to beat/match 6990, and in case it will probably fall from the clear sky right on the day of release.
Then you can call it a "monster"-eater, LMAO, but it remains to be seen if that's the final plan.
by that reasoning AMD/ATI is not a monster card provider either, because the 5970 was downclocked, so only Asus is a real monster card provider...
to be honest I find it hard to compare cards like the MARS and ARES to any full production card, because with only around 2000 of each produced they are not widely available for purchase. IMO cards like the GTX 295 and the 9800GX2 should be called monsters; the MARS should be called something else...
I also find it funny that people blindly accept that AMD can do it and Nvidia can't. I have used both GTX 570's and 6950/6970's and I can tell you both cards are equally as large physically, but the 6970 produces MORE heat and is a good bunch louder than the GTX 570 from what I noticed.... people said Nvidia would never slap a pair of GT200b's together, but look at the GTX 295...
for people to think that they will take two full GTX 580's and slap them together is plain naive; neither AMD nor Nvidia has ever done that, only Asus lol. but it does not change the fact that whatever each company comes out with will be a total "monster". I would however be interested to see if Nvidia plays the GF114 card and takes a performance hit to offer a lower-heat, lower-noise and lower-power card than the 6990...
IMO it's really simple. Which chip has better perf/watt? GF114, of course. It's already near 150W, so a little binning will do just fine. And it will still offer nice OC headroom. The battle between AMD and Nvidia here is simple: who can deliver the highest performance at 300W.
I honestly believe this is a joke, because dual GTX 580s on a single PCB would be way too hot and consume way too much power.
5970 is two 5870 at 5850 clock speed. 5850 and 5870 both use about 170W under load. 170W+170W = 340W. how can 5970 exist? because they can make optimizations.
that said, gf114 (170W) is a lot closer in wattage to 5870 and gtx 285
(gtx 285 is 180W and they made that dual), so making gf114 dual makes a lot of sense. there is precedent for this.
I'm not sure there is precedent for making the 210W gtx570 or 280W gtx580 dual.
but dual 6970 versus dual 560 is too close for comfort for nvidia. I don't know that nvidia can win that. I don't know that they want to spend the next year in 2nd place again. so maybe they've found a way.
http://hardocp.com/images/articles/1...icTzMA_2_2.gif
http://hardocp.com/images/articles/1...AMEkyP_4_6.gif
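rough check on that "170W + 170W = 340W, how can the 5970 exist?" point. dynamic power scales roughly with f * V^2, so the 5850-level clocks plus a small undervolt on binned chips already close most of the gap (the 0.95 voltage factor is just my guess, and the 170W per GPU is the figure from above):

```python
# back-of-envelope: two 5870-class chips at 5970-style clocks/volts
gpu_w = 170.0                # assumed per-GPU load power at stock 5870 clocks (figure from above)
clock_scale = 725.0 / 850.0  # 5970 runs 725 MHz on 5870 silicon (850 MHz stock)
volt_scale = 0.95            # assumed small undervolt on the binned chips
dual = 2 * gpu_w * clock_scale * volt_scale ** 2
print(f"{dual:.0f} W")       # ~262 W -- back under the ~300 W board ceiling
```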
Who cares about power consumption when we talk about high-end dual-GPU cards?
I can't understand all this ranting about the power draw of a dual GPU GF110 ...... with all the 3/4-way SLI we see on XS.....
You guys rant about 1 card and buy THREE OR FOUR !!!!
We should've never reached the 300W limit in the first place... It's kinda sad that we did actually, considering Nvidia said they would never make a more power-hungry card when they released the 8800GTX...
Best Regards :toast:
Well there must be some good reason why both Nvidia and AMD up to now stayed within the 300W TDP PCIe standard, with the Dual GPU reference cards. Only the limited special edition Dual GPU cards by the partners went over the limit.
For example the reference ATI HD 5970 has two 5870 chips, but the clocks and voltage were reduced in order to stay within the 300W TDP. It also used binned chips that can run at lower voltage. The end result is that the HD 5970 is only about as fast as HD 5850 CrossFire.
Now if the HD 6990 is using two 6970 chips as rumored, it needs to be gutted more, but the same goes for the Nvidia dual GPU card: if it's using two GTX 570s it would need even more gutting to stay within the standard. The OP about using two GTX 580s is a little hard to believe.
My speculation is, if both cards will stay within the 300W, none will be on average much faster than the other.
The above post is why I hate all these standards. Who in their right mind buys a card like this and thinks "hmmm, I do need to save on electricity, better make sure it's under 300W"? It would be like being annoyed that my car can't do more than 50mpg even though it's a Type R. Doesn't make any sense!!
To heck with the standard -
It will be sad if they somehow ruin the card's potential just to stay at 300W -
if that is the case then go ahead and wait for a vendor to make their own specialty version - who would buy this and be concerned about power consumption - it is silly
There has to be a reasonable power envelope somewhere, which is why the current standards are in place. Sure, they could produce a 400 watt monster with impressive performance, but there would be substantial compromises in doing so. You run into form factor limitations (I honestly hope and don't expect to see a reference cooling solution longer than the 5970 and more than dual slot). As it stands, these high-power parts are feasible cooling-wise at the current standards. Unless people suddenly become content with requiring a full tower + enclosure and a 3-slot cooling assembly, I doubt we will see any such over-the-top reference designs, which is why Asus and the like offer their frankenstein creations to those who seek such beasts.
The way things are progressing semiconductor- and design-wise, I expect we will see dual GPU offerings disappear from one or both camps in the coming few years, unless a new obscene PCIe certification and a general tolerance for overly large products with irritating noise levels develops...
Well, ATI used their highest end SKUs on their last dual GPU cards... and the best memory available as well... calling them monster cards is debatable as well I guess... I'd say yes...
The Sapphire 3-slot 4GB 5970 is def a monster card... not as much as the Asus cards though...
I don't get your def of monster cards... you say Nvidia's cards are more hardcore cause they ship in volume? Isn't that rather an argument that they are NOT monster cards?
If F1 cars were built on automated assembly lines in the thousands, would they then be REALLY hardcore instead of losing their appeal? :P
Well, many people here have the will and capabilities to remove most if not all heat limitations, so for them it's about max perf only. For those people a heavily throttled dual 580 is better, even if it would be slower at stock speed and cost more than a dual 560 or 570.
ATI did a great job in the past building dual GPU cards that do it all: fast at stock, can be mass produced, AND satisfy the demands of overclockers. Not 100%, that's where Asus and MSI etc. step in, but ATI does it better than NV imo.
NVIDIA focuses less on enthusiasts when it comes to dual GPU cards imo
Hear hear! Like I said, a dual 580, heavily underclocked, would be epic!
That's what ATI does; from NV I wouldn't expect this... they wouldn't sacrifice profits and stock perf to give enthusiasts some extra headroom... then again, ATI did what they did cause they could... I'm curious whether the 6990 is actually going to be two fully enabled RV970s...
ok so to heck with the standards then, right ... no standards = no compatible parts for us ... proprietary solutions from every company ... and this makes them cost a lot more etc...
but hey, you're right .. having limits on things to a certain degree is bad ....
if you do think so, i'm sure you must have an asus ares ... and an asus mars gpu or two sitting somewhere, right ;)
Not that I know of.
I believe boards which fell under the HPCES v1.0 specification (remember, the PCI-E 2.0 base spec never included 225W - 300W inputs) will now be incorporated directly into the PCI-E 3.0 base spec. However, there weren't any increases in power delivery limitations.
This basically means that companies producing higher-end cards will no longer need certifications for BOTH HPCES and PCI-E but the maximum remains 300W.
Who cares what the power consumption is? More important are the cost and performance.
awesome article on xbitlabs testing 560 SLI
http://www.xbitlabs.com/articles/vid...i_7.html#sect0
so nvidia is in a tight spot...
Quote:
At the highest resolution the SLI configuration is comparable to the GeForce GTX 570 SLI tandem we tested earlier in scalability. Its performance increases by an average 70% over that of the single card. The maximum performance boost is as high as 105%. Its advantage over the GeForce GTX 580 and Radeon HD 5970 remains the same at 19-21%, though. Compared to that, the GeForce GTX 570 SLI tandem enjoyed an advantage of 42-43% over those cards. Thus, the GeForce GTX 560 Ti SLI delivers comfortable performance where the single GeForce GTX 580 can do the same and even fails in BattleForge and S.T.A.L.K.E.R.: Call of Pripyat.
So, a dual-GF114 graphics card wouldn’t be competitive as a flagship solution. It might be fast but only about as fast as an ordinary single-chip GeForce GTX 580.
dual 114 scales well, costs less and runs cooler than dual 110, BUT it reaches a perf spot their 580 already conquered months ago...
so they HAVE to go dual gf110... which means heat heat heat and some more heat... but how are they going to take care of the memory width issue? and where will they put all the memory chips? dual pcb again? phew!
*Cough* 1GB vram *Cough*
But still, i agree, dual GF114 aint fast enough.
i think they are... its just that gf110 is so fast, it gets there all on its own :D
Unfortunately, from what I know GF104/114 supports a maximum 256-bit memory bus; I wonder how much more of a boost a 320-bit bus would give.
Could be just enough to surpass GTX 580/5970 by 30%.
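Rough math on what a wider bus would buy, assuming the memory clock stays at the ~4 GHz effective GDDR5 the GTX 560 Ti and 580 currently ship with (a sketch, not a leak):

```python
# peak memory bandwidth in GB/s: bus width (bits) / 8 * effective data rate
def bandwidth_gbs(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(f"256-bit: {bandwidth_gbs(256, 4008):.0f} GB/s")  # ~128 GB/s (GF114 today)
print(f"320-bit: {bandwidth_gbs(320, 4008):.0f} GB/s")  # ~160 GB/s, i.e. +25%
print(f"384-bit: {bandwidth_gbs(384, 4008):.0f} GB/s")  # ~192 GB/s (GTX 580)
```

So a hypothetical 320-bit GF114 derivative would land about halfway between today's GF114 and GF110 on bandwidth; whether that translates to 30% more performance is another question.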
I think it would be more useful as a card if NV clocked a GF110 card up to 1GHz (950MHz has been done with some stock coolers), used 1.4GHz memory and the 128 TMUs as originally specified, as texture and memory bandwidth seem to be limiting factors on the GTX 580. You would have a card that is undoubtedly faster than a 5970 by 15% or so, and it would be a single chip.
Hardwarecanucks clocked the gtx 580 up to 950 and it used 40 more watts
http://www.hardwarecanucks.com/forum...dition-13.html
so 1GHz seems possible with great binning and a better cooler. A card that is about 25-30% faster than a GTX 580, with 3GB of GDDR5, at 650 dollars would be fair I think.
For me, I would rather have a single chip that is 85% of the speed than a dual chip that is 100%. AMD driver support for the 4870X2 is non-existent (and its performance is getting worse with newer drivers), and I am not sure NV is better with their dual cards.
So what everybody's saying is that both companies will pretty much be stuck performance-wise until they can finally transition to newer transistor tech?
I don't see them re-designing chips on the current 40nm node, so I guess they'll have to suck it up, that they've hit the ceiling... And if your current single-chip solution is pretty much using all of the 300W max, and it's as efficient as possible, a handicapped dual-chip card just doesn't make sense, as you would have to cut the chip in half performance-wise to fit two of them on a card... And then you're just back to square one...
I'm looking forward to seeing what NVIDIA and AMD will do with their dual-chip solutions, but I think the reason we haven't seen them yet is the same: They can't make them perform admirably better than their single-chip counterparts...
Best Regards :toast:
Yeah, the 4870X2 was terrible driver-wise when it came out... Crysis was an artifact adventure... no idea if it's better now.
In my experience NV is doing a slightly better job with dual GPU drivers, but I'd still prefer a single GPU.
But about tdp... when you push silicon to the limit you get 10% extra perf for maybe 20% more power... IF that's the case with gf110 then maybe a dual gpu card is able to get more performance out of the 300 w limit... if NVIDIA can pull this off I'd be very impressed!
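Toy illustration of that "last 10% of perf costs 20% of power" trade-off under a 300W cap; every number below is made up purely to show the shape of the argument, not a claim about actual GF110 figures:

```python
# one chip pushed to its limit vs. two of the same chip backed off (all figures assumed)
single_perf, single_power = 100.0, 250.0            # one GF110 pushed hard
backed_off_perf, backed_off_power = 85.0, 145.0     # same chip, clocks/volts backed off

budget = 300.0
dual_perf = 2 * backed_off_perf                     # ideal 2x scaling; real SLI scaling is lower
dual_power = 2 * backed_off_power + 10              # + shared board overhead (assumed)

print(single_perf / single_power)   # ~0.40 perf per watt
print(dual_perf / dual_power)       # ~0.57 perf per watt
print(dual_power <= budget)         # True -> ~170 "perf" inside the same 300 W envelope
```

If the perf/watt curve really does bend that hard at the top, two backed-off GPUs beat one maxed-out GPU inside the same envelope, which is the only way a dual GF110 card makes sense.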
Damn, that's disappointing. I was hoping to see Nvidia/AMD not have to tiptoe around the 300W limit, to unleash some real future video card beasts. I know there are a few limited exceptions like the past limited edition dual GPU cards from Nvidia, but those aren't appealing from a financial point of view.
yeah i don't get it either...
it's not like the 300W limit actually limits power consumption of systems anyway...
people who want extra power get up to 4x 300W cards...
actually it would be more power efficient to allow a single vga with 400W then, cause then people who want ultra performance would run 2x400 instead of 3x300 or 4x300...
and people who want this kind of power overclock anyway, so then the limitation doesn't matter at all...
300W is clearly not enough and is limiting cards... so i'm surprised nvidia and amd are not pushing for more...
think about it, if there hadn't been a 300W limit then nvidia could actually have launched a GTX480 Ultra with 512sps, a 350W tdp and a massive heatsink...
and ati could have launched the 5970 at full 5870 clocks instead of clocking it down...
ah well... if ati and nvidia don't care... :shrug:
From what little I know, the reason they won't raise the 300W limit is the OEM market. 300W of heat is already pushing the envelope regarding the amount of heat that a normal medium-sized case can handle, not to mention that it is per PCI-E slot. OEM certifications (user safety, fire protection, shock protection, whatever) would be impossible to get with higher limits. Just imagine if someone burns his/her fingers touching the outside of a case, which can get really hot with 1 or 2 "heaters" inside.
Also, most OEM machine VRMs and other parts are built to these limits to remain cheap.
We xtreme guys are only 1% or even less of the total market; the big business is in the OEM market, they are the ones ordering thousands of parts.
Also, as far as I can tell, for both AMD and Nvidia the discrete graphics card market's income is decreasing every year, and now with the introduction of APUs like Fusion it will get even smaller, as 90% of this ever more limited income comes from low/medium end cards, not from the beasts most of us here own.
So unless the OEMs change their minds or AMD/Nvidia seriously lower their prices (neither likely to happen), the PCI-E power limit will remain.
http://tof.canardpc.com/view/cdfc9eb...58360da87e.jpg
http://tof.canardpc.com/view/ac0d419...1643e9fe8b.jpg
http://bbs.expreview.com/thread-39523-1-1.html
edit : already posted by cold2010 page 3, sorry :(
There is a way for NVIDIA and AMD to overcome issues with the PCI-E certification: allow their board partners to do the leg work. We've seen this done quite successfully with the ASUS Ares, ASUS Mars, Sapphire HD 4850 X2, XFX HD 5970 Black Edition, etc.
All they have to do is send a set of design parameters and specifications to companies who are able to implement their own solutions from the ground up and let them deal with the rest. It's a money-saving proposition, allows NVIDIA / AMD to introduce class-leading products and skirts the certification OEMs want attached to products.
I won't be surprised if Asus brings out an Ares-like card with 2 GF110s, full specs, 6 GB of RAM and 3x 8-pin power connectors :D
exactly .. asus, evga etc.. are the ones who are supposed to go beyond the limitations with their special omgwtfbbq edition gpus .. not amd or nvidia ...
Since partners got total freedom to do whatever they want with the GTX560, is there any hope of seeing a triple-GTX560 on a single PCB from some partner? That could beat the hell out of the 6990 too, LOL.
Right. AMD and Nvidia surely would sell more if the PCI-E spec allowed +300 W cards. :rolleyes:
Those monster cards probably sell at a loss, but have a marketing impact, so they actually increase sales across the board. That's the sole reason why the cards exist. The market is non-existent and half of the cards are sent around/given away for free (press samples, competition prizes) anyway, to gain as much PR as possible.