Your sarcasm skills are finely polished. :D I personally will wait for proper reviews before I pass judgment, although I must admit that Fermi may very well be shaping up to be a letdown.
Yep, AMD did exactly the same: they cherry-picked resolutions where the GTX 295 ran out of memory and made us believe they had a GTX 295 killer, when in reality it got spanked by the GTX 295 at nearly all resolutions except 2560!
Well, I'm not saying it will be a letdown per se, but it is indeed very late to market and not promising big increases in gaming performance compared to what we have today in the high-end segment (I'm not talking about GPGPU, though; in that segment it will rock, I'm sure).
The 5870 is a great card, but that doesn't mean the GTX 480 will be bad. If the power rumors are true, that probably means good overclocking: the high power consumption comes from leakage, which is great for clock speed but bad for efficiency. Keep in mind that enthusiasts want peak performance. They want 32x AA and 16x AF on the latest titles on their 2560x1600 monitor. :up:
How do you know the 400 series costs much more? You should do some research first. Even if the die costs 2x more, the other costs will shrink the difference in BOM.
Leakage doesn't mean high OC... it only means bad transistors...
Are the C2 revision Phenoms better than the C3 revision? Nope... the C3s got a better process etc., less leakage... see where I'm going...
...which all results in a higher and more stable OC.
You are (mostly) correct :). The high power of the 480 model indicates NV needed to boost core voltage beyond their initial targets (low voltage was the goal from the start). By raising the core voltage they entered the unfortunate skyrocketing of power draw, because the already leaky transistors in the already castrated die (512SP -> 480/448) began to leak even more. The OC headroom will largely depend on one's ability to cool this monster, so water cooling may be the best option for enthusiasts who buy this card and OC it. For stock/default users, heat shouldn't be a problem at all (if one likes a hot piece of electronic equipment radiating heat in their case :) ).
I agree that there isn't enough objective data to make any conclusions about the GTX 480 right now. I'll wait for a real review. It could even be a 5970 killer or whatever; it's not impossible, it just seems unlikely. We will see the reality soon enough. I just hope there is price competition. It'd be nice to have the GTX 470 be about as fast as a 5870, and then the GTX 480 would probably land right between the 5870 and 5970.
Anyway, my stuff is just an honest guess and I am more than willing to admit as much. But with all the conflicting information being spread as fact, some people will soon have to recall the counter-factual information they disseminated. It really was shameful to watch so many people claiming to have inside sources and then spreading different information. And yet we still don't have any real solid performance information. But that's OK; that's what a real review is for anyway. I'm sticking by my prediction of the GTX 480 being on average 30% faster than the 5870, because I made it in good faith. I'd note that it was made assuming full shaders and clocks at the time. Anyway, on with the reviews.
It applies to any process. That's why 45nm AMD Phenoms are almost hitting 7GHz under LN2. R600 was good under LN2 because it was on the leaky 80nm process, which was chosen for its raw performance, although having an unfinished architecture didn't help.
If the rumors about leaky transistors are true, then the 480 should be more power efficient by going for higher clocks with fewer shaders.
So cooling your card will limit the leakage to a bare minimum? And that's the reason top OCers claim high-leakage parts OC better?
But get temps closer to absolute zero and you get superconductivity, which would let current flow more freely? So maybe there is a reason they like high-leakage parts on LN2 and liquid helium?
It won't limit the leakage, but it will bring the uncontrollable temperature (under stock cooling) down to manageable levels ;), so it will not hamper the functionality of the device. Remember, the parts may leak like hell, but if you cool them down they will work (for how long is unknown, as with any MPU you drive out of spec).
thanks for the explanation
I don't like showing pics over and over, but these are few and far between, so here they are again on a new page for you (so you don't have to keep searching back in the thread). ;)
Does anyone know what ATI driver version they used? 10.3a? 10.2? 10.1? ...maybe 11.9? :rolleyes: Who knows?
Also, can anyone confirm these Far Cry 2 and Dirt 2 scores with their own 5870? What are others getting at these same resolutions? Put settings on highest, but without AA, except for the one that mentions AA. Thanks. :)
http://img717.imageshack.us/img717/2629/uploadee.png
I made this using what hardware-infos says are the final specs, and those prices are launch ones, not retail.
That table looks correct. Although any of this information could still turn out to be false, I think a lot of it is right by now.
The 5870 Eyefinity edition draws more, nearly 210 watts or something, and the price is $449.
Hexus has a similar table
http://www.hexus.net/content/item.php?item=23008
Not the best system in the world, but I ran FarCry 2 just then:
System:
3x ATi 4870 (stock clocks)
AMD Phenom II X4 B50 @ 3.7GHz
4GB DDR3 1333
Windows 7 x64
FC2 Settings:
Settings: Demo(Ranch Small), 1920x1200 (59Hz), D3D10, Fixed Time Step(No), Disable Artificial Intelligence(No), Full Screen, Anti-Aliasing(None), VSync(No), Overall Quality(Ultra High), Vegetation(Very High), Shading(Ultra High), Terrain(Ultra High), Geometry(Ultra High), Post FX(High), Texture(Ultra High), Shadow(Ultra High), Ambient(High), Hdr(Yes), Bloom(Yes), Fire(Very High), Physics(Very High), RealTrees(Very High)
FPS:
GTX480 (From graph):
Min: 72
Avg: 92
5870 (From graph):
Min: 47
Avg: 66
Mine:
Min: 66
Avg: 88
One thing you guys should note, though: my average graphics load was 50-60%, so yeah, they were held back a bit :)
Thanks for the benches. ;)
But I was hoping for someone with a 5870. Where are ya? Come out of the woodwork and bench with 10.3a (if ya got 'em). :up:
Will there be a 2+ GB version of the GTX 480?
Maybe at release, but not after the crossfire improvements in drivers released since then. I have been using a 5970 since shortly after release and it has improved substantially. In games that scale properly across 2 GPUs and aren't CPU bound, a 5970 is about 45-65% faster than a 5870. The average was dragged down by many games without proper 5970 support in the driver, but many were added in more recent drivers. Crossfire/SLI scaling isn't great, but it isn't 40% with proper profiles either. :rolleyes:
That's why it's important to pay attention to driver version used in reviews. Some of the new fermi reviews will doubtlessly use old data for the rest of the cards. That doesn't really give you a picture of the reality of the marketplace now.
Last time HD 4870 and HD 4870 X2 were reviewed using Cat. 9.5, the difference was 46% on average. That should be around a year after the initial release, so plenty of time.
I don't doubt, that HD 5970 is doing better now, but across many games in different resolutions and settings, the average won't be much better. Definitely not good enough to justify going dual GPU.
http://i39.tinypic.com/bednrr.jpg
http://i43.tinypic.com/fylvsg.jpg
http://i41.tinypic.com/2m7djpg.jpg
http://i40.tinypic.com/209qav5.jpg
http://i44.tinypic.com/2irn0x2.jpg
http://i44.tinypic.com/zunoyd.jpg
http://i42.tinypic.com/2yz0axd.jpg
http://i40.tinypic.com/5vbp14.jpg
http://i43.tinypic.com/j9odu8.jpg
Are there any real measurements to back up the announced 2.7 TFLOPS of "computational power" of the HD 5870?
I believe these "computational power" figures have nothing to do with reality.
Did you use the formula [Computational Power = Shader clock * Shader units * 2] to calculate the single precision floating point operations per second (FLOPS)?
Nvidia announced double precision floating point peak performance for its Fermi-based Tesla C2050 / C2070 GPUs to be between 520 and 630 GFLOPS.
Nothing else.
How about we wait for some reviews.
I don't really agree with your method to average a range of resolutions and settings. I don't buy an expensive graphics card to run a range of resolutions - I buy it to run high resolution and high settings. When I'm looking at averages I'm looking at a certain resolution and quality or higher because I'm not going to be running lower. For the same reason I look at which games scale with which architecture (ati or nvidia) and compare to the games I play. The resolutions I use and games I play may not be what someone else uses. And it's ok, we can all make a rational choice for our individual needs.
http://forum.beyond3d.com/showthread.php?t=55830
5770--1Tflops
5850--1.4Tflops
5870@1G--2.2Tflops
The GT200 never achieved its announced 1 TFLOPS either. Those announced numbers are just peak performance.
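Just to make the arithmetic explicit, here's a minimal sketch (Python, purely illustrative) of the peak-FLOPS formula quoted a few posts up; the clocks and shader counts below are the commonly quoted reference specs, and the point is only that the advertised figures are theoretical peaks, while the numbers in that B3D thread are measured throughput:

Code:
# Peak single-precision throughput = shader clock * ALU count * 2 ops/clock (multiply-add).
# Reference specs assumed below, not measured values.
def peak_sp_gflops(shader_clock_mhz, shader_units, ops_per_clock=2):
    return shader_clock_mhz * 1e6 * shader_units * ops_per_clock / 1e9

cards = {
    "HD 5770 (850 MHz, 800 SP)":  (850, 800),
    "HD 5850 (725 MHz, 1440 SP)": (725, 1440),
    "HD 5870 (850 MHz, 1600 SP)": (850, 1600),
}

for name, (clk, sp) in cards.items():
    print(f"{name}: {peak_sp_gflops(clk, sp):.0f} GFLOPS peak")

# HD 5870 -> 2720 GFLOPS, i.e. the advertised 2.72 TFLOPS; the ~2.2 TFLOPS quoted above
# is measured throughput, which (like GT200's advertised 1 TFLOPS) falls short of the peak.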
The averaged comparison of GPUs is not so bad.
There are many ways to express this percentage.
For example, we could take the 10 most demanding games with all eye candy enabled,
then compare the percentage of games that are fluidly playable at certain resolutions on each card.
Cheers mindfury for those terrific pics. The heat sink looks quality (as does the card in general)!
That's probably the biggest heatpipe I have ever seen on a non-custom model. The card must run really hot at 700MHz.
It's still about 40% faster on average at 2560x1600 with some AA (2x-4x) for a stock 5970 compared to a 5870, from what I've seen with newer drivers (I haven't seen a good article on the 10.3a's yet). Regardless, 40% on the 5970's slower GPUs (core clocks, etc.) isn't 40% over a 5870 anyway: the 5870 has higher stock speeds and can clock higher when OCing as well. Crossfire tends to spike your max FPS, making an awesome-looking average, while not really helping the minimums a whole lot for the most part. I don't need 20 min, 75 avg, 140 max; I need 50 min, 80 avg, 110 max! :)
Nice finds on those pics Mindfury, thanks!
Excellent? :rofl: :rofl:
Can anyone remember a time in Nvidia's history when a big release has been this secretive about final info and benches? 5 days left now, and it had better not be a paper launch or some :banana::banana::banana::banana: of 100 cards available. They would embarrass themselves linking it to PAX with all those gamers there and word of mouth.
Next, how close are we going to get to launch day before we see 100% specs with lots of benches? I'm guessing they have the NDA fear embedded, but people in countries like China etc., why would they care? It's not like Nvidia is going to sue them across the world.
Thanks to everyone contributing to the info in this thread. To whoever commented about Quadfire 5970s: the performance is POOR, as is Quad GTX 295. I'm thinking Tri-GTX 480s will be the highest end shortly. Not for me, though; GTX 480 SLI is my limit, only because of 2560x1600. Hopefully I will be as happy with them as I have been with my former GTX 280s, which lasted from Nov '08 till now.
@GoldenTiger: I don't understand why you are so biased against ATI. Have you even tried the new drivers to see how they stack up? From reading your past posts it seems you are an Nvidia fanboy.
On topic: I can sense the tension Nvidia has built up with Fermi, although as I am not a fanboy of either the green team or the red team, I just hope both the GTX 470 and 480 are very good. Especially the 480, because in the pics that mindfury posted, those heatpipes and that serious cooling indicate it might actually be a serious overclocker. I don't know; in the end we'll see on Friday and be thoroughly amazed or simply disappointed.
Pixel Fillrate is only 700*32=22.4 GPix/s.
There are 48 ROPs but the chip can only rasterize 32 pixels per clock. Can't output more than you input. The extra ROPs help with AA and/or in cases where you don't get nice compression on the writes to the framebuffer.
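For anyone following along, here is a quick sketch of that fillrate arithmetic (Python, purely illustrative; the 700MHz core clock is the rumored figure, not a confirmed spec):

Code:
# Fillrate math from the post above; 700 MHz is the rumored core clock, assumed here.
core_clock_hz = 700e6
raster_pixels_per_clock = 32   # rasterizer limit per clock
rops = 48                      # physical ROPs on the chip

raster_limited_fillrate = core_clock_hz * raster_pixels_per_clock / 1e9   # 22.4 GPix/s
rop_throughput_if_fed   = core_clock_hz * rops / 1e9                      # 33.6 GPix/s

print(f"Rasterizer-limited fillrate: {raster_limited_fillrate:.1f} GPix/s")
print(f"ROP throughput if fully fed: {rop_throughput_if_fed:.1f} GPix/s")
# The extra ROP capacity only helps with AA or when framebuffer writes compress poorly.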
Or maybe this cooler was designed for something hotter and they just used it for this one, who knows :cool:
You kids have been behaving while I've been busy OCing my new CPU?
That cooler looks really good. There is no such thing as a hot component, only bad cooling solutions.
Because they designed it for a card with higher TDP and changed it at the last minute, so they didn't have a replacement?
Unlikely, but still possible.
Err they changed the GTX 480's cooling 2 months before the launch. Nearly painfully obvious that they wanted to push for more ambitious/less embarrassing performance figures and this had to be the way to go.
250W TDP on a single board with a single big hotspot isn't easy to deal with, in fact as heat and power go up more it seems to get harder to deal with them. And really, nVidia of all companies wouldn't hesitate a minute to cut down GTX480 cooling if it was "cool-running" (look at how they even butchered the GTX470 power segments)
The 5870 Eyefinity6 has the same core just more RAM, and in a sense is much easier to cool, so you don't see super buffed cooling despite the extra TDP.
LOL, you knew who I was talking to. :ROTF:
Because all high-end GPUs should come with a high-end cooler. Those old crappy coolers have been limiting OC headroom. These cards are for enthusiasts and they want to OC, so a good cooler makes sense.
Maybe it has other reasons too, but everybody should be happy with a good cooler, no matter why it is there.
except 480 isn't high end. BTW I'm also sick of the "whatever happens it must be because Nvidia sucks" comments.
"480 prices not $700 as previously mentioned??? Then the cards must suck!!" Really? So you think it would be better news if the card was $700?
"The cooler looks good! Which means the cards will be very hot and they won't OC at all!!" So you could OC better if the cooler was worse?
http://www.itechnews.net/wp-content/...Processors.jpg
The GeForce cooler looks quite similar to the Tesla one.
I think reviewers will recognize that people want 10.3 drivers in their latest reviews. If they read any tech forum they would know that.
I can't believe Nvidia GTX480 reference design has heatpipes that increase the height of the card out of specs.
It's ugly and doesn't fit in my case. So no Fermi for me.
Isn't that the old design? I suspect they've made the gaps in the heatsink wider so more air can pass through (and a bit of heat can get out through the AL plate now that it isn't shrouded).
You do wonder how long it needs to cool down after a gaming session to take it out of the PC though.
$500USD isn't high end now? Is that mid-range? Wow.
Just because I post that I had bad experiences with ATI products doesn't make me a fanboy of Nvidia's stuff :rolleyes:. I tried the 10.2 drivers prior to selling my 5870 as well, with mostly the same issues, as I said in my posts, if you had actually read them instead of just going "OMG must be an nV fanboy!!!!" at first glance ;). Just because someone doesn't like one product doesn't automatically make them a zealot for the competitor; unless you're a fanboy of the product they don't like, you wouldn't construe comments like mine that way.
The heatsink on the 480 looks very heavy duty indeed :eek: .
Benchmark extreme territory?
Well, with Nvidia fanboys, it goes one of two ways: either Nvidia will crush Intel and AMD, or they will go bankrupt.
Take AMD, for example. One bad quarter and every self-proclaimed economics professor prophesies bankruptcy.
The Empire Strikes Back. The Rebel snowspeeder makes one last daring attempt on General Veers' AT-AT. But it is struck mid-air by laser blasts, loses flight control, and spins helplessly through the air until it crashes. Remember that scene? Metaphorically speaking, that is the result Nvidia fanboys expect when ATi faces the overwhelming power of FERMI.
Failing that, they will slide into a shame spiral.
Just want to remind you that Nvidia makes most of their money off workstation Quadro cards, Tegra, and laptop GPUs. Oh, and stuff for Apple. Fermi can only really help the company: no matter how hot/big/expensive it is, it will be #1 in performance for a single-GPU card, and that is a HUGE marketing tool for Nvidia's low-end stuff. They don't make their money off sales of Fermi...
I get called a fanboy of every company under the sun hehe from time to time, because I don't stick with one company, I go with whatever product looks good at the time: ATI, nvidia, AMD, Intel, OCZ, Corsair, Crucial, Blizzard, Mythic Entertainment, Sony, Nintendo, Sega, Microsoft, etc. etc. :rofl: . If I have a bad experience with one, such as happened this time (I used to be called a major ATI fanboy back in the 9700pro and 4870 pre-launch days haha), I will call out about it. Just because I had a bad experience with the 5870 doesn't mean I never ever will look at an ATI product again, just not for this generation at least though, likely.
Well said, it's called a flagship :D.
Have you downloaded the newest profiles? It's said to contain a crossfire profile for AvP too, so it might fix whatever problem you are currently facing. :up:
http://www.xtremesystems.org/forums/...d.php?t=247780
[edit]
ahh nm, the profile has already been mentioned!
[edit 2]
If all else fails, you could always ask Brent and Kyle for help over at [H], as they never seemed to have any problems. It seems so even without the new profile. I assume you've investigated the issue prior to printing 'fail' in said column! :D
When crossfire has a good profile for a particular game, the scaling is usually more than 40%. Minimum framerates go up as well, usually as much as or more than the average framerate does. I wouldn't use dual-GPU cards if they didn't improve the average AND the minimum.
Since you guys trust SKYMTL I used his 5970 review for the basis of the numbers: http://www.hardwarecanucks.com/forum...review-10.html
I included the % faster 5970 is versus 5870 in average and minimum framerates:
It was just a quick review of the numbers so I may have messed a few up. And that review was with old drivers, so some games have been fixed or improved. The point is that you will get different results out of CF/SLI depending on the game and resolution you use. But on average, if the game isn't CPU bound and the drivers aren't lacking a profile, the scaling of a 5970 is usually significantly better than 40%.

Code:
COD:WAW
1920x1200 4xAA/16xAF: +54% AVG, +41% MIN
2560x1600 4xAA/16xAF: +45% AVG, +38% MIN
COJ
No crossfire profile
Crysis: Warhead (DX10)
1920x1200 2xAA: +57% AVG, +62% MIN
2560x1600 2xAA: +48% AVG, +0% MIN
Dawn of War 2
No profile
Fallout 3
CPU Bound
Far Cry 2 (DX10)
1920x1200 4xAA: +45% AVG, +60% MIN
2560x1600 4xAA: +46% AVG, +55% MIN
2560x1600 8xAA: +44% AVG, +108% MIN
L4D
1920x1200 4xAA/16xAF: CPU bound
2560x1600 4xAA/16xAF: +49% AVG, +54% MIN
2560x1600 8xAA/16xAF: +50% AVG, +51% MIN
Hawx
1920x1200 4xAA: +64% AVG, +79% MIN
2560x1600 4xAA: +62% AVG, +0% MIN
2560x1600 8xAA: +43% AVG, +29% MIN
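For a quick sanity check, here's a tiny sketch (Python, purely illustrative) averaging the per-game gains listed above, with the CPU-bound and no-profile entries left out since they say nothing about crossfire scaling:

Code:
# Percentages copied straight from the list above (5970 vs 5870, HWC review data).
avg_gains = [54, 45, 57, 48, 45, 46, 44, 49, 50, 64, 62, 43]   # % AVG FPS gain
min_gains = [41, 38, 62, 0, 60, 55, 108, 54, 51, 79, 0, 29]    # % MIN FPS gain

print(f"Mean AVG-FPS gain: {sum(avg_gains) / len(avg_gains):.0f}%")   # ~51%
print(f"Mean MIN-FPS gain: {sum(min_gains) / len(min_gains):.0f}%")   # ~48%
# Both means land well above the 40% figure being debated.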
Also, we should take into consideration that ATI isn't just going to sit around waiting. They are encouraging board makers to make higher-clocked versions, and there is the looming prospect of a revision. Even if a 5970 were only 40% faster than a 5870, and even if Fermi were enough to match it, would it be enough to match an extra 15% or more in the newer cards? Ultimately we won't know until some reviews, which aren't that far off anyway.
But this is quite off topic. We aren't here to discuss 5970 scaling. We are here to discuss Fermi and only ATI products in so far as they relate to Fermi. If you want to talk multi-GPU scaling further I'm sure we could start a thread about it.
I wouldn't call ATI's drivers excellent. But I haven't been having any worse trouble than I have with Nvidia drivers lately. After swapping my GTX 285 and a friend's 5970 a number of times, I can say I am not really impressed with either company's drivers and am quite disappointed in the downturn in quality on the NV side (I used to hold them in high regard).

Quote:
Excellent? :rofl: :rofl:
I agree, personal attacks are unwarranted. We should all remember that not everyone has the same experiences, needs, and desires. My anecdotal evidence doesn't necessarily apply to you any more than yours applies to me.
Unfortunately it's one or the other at this point. 10.3a makes AvP all but unplayable while many people (myself included) have reported major issues with the Profile + BF BC2 on Crossfire / Dual GPU cards. I am going to try and reinstall the whole shebang later today so I am crossing my fingers.
Good post. I must have misremembered the reviews I read initially at release... so it is definitely better than 40%; it looks more like 65-70% scaling for 5870 crossfire, assuming a profile is present. I remembered having read that the minimums don't improve that much; my bad :(.
On your last paragraph, thank you for being one of the few to show respect on forums lately... I've been careful (most of the time) to restrict comments with a phrase like "for me" or "in my system" so as not to make blanket statements. I'm sure it works 100% awesomely for many people; it just doesn't for me. If people could keep their egos in check like you do and actually respond with facts and analysis instead of personal attacks, forums would be a lot more useful and better in general (some, such as hardforum, have gone VERY much downhill lately; I appreciate that the people here, even when being hostile, are not going to those levels).
No, sorry, I wasn't kidding at all; I was asking if it would be a fair comment to make. I guess you don't feel it is.
A small flashback.
I can see with hindsight I should have said "final specifications" and not just information.
As for your statements.
"Nvidia has been anything but quiet about fermi with one sneak peek," Just as well then they had the good sence to keep it down to just one sneak peek then, hay.
" intentional leak," Well as there is no info or link about this I am not sure but I guess they could have.
"demonstration," Humm I do love a good demo, especially if it's about an up and comming new card.
"and architecture review after another." As to have lots of reviews on the net about nvidias new architecture,
I think that has a lot to do with the fact theres alots of review sites on the net that are interested and want to make money.
"They were so eager to start showing off fermi that they couldn't wait for a real card for their CEO to hold up in front of everyone." Well I guess they didn't have a real one to show everyone, in say that,
it's one thing to have a mock-up card and to let people know it's just a mock card.
If in fact the CEO was trying to give the impression that it was in fact a real working card, then thats just plain Dumb, I have no idea about this, only read here say.
Anyhow, please correct me if I am wrong, but none of the above statements you have made came under Nvidia's NDA.
The whole point of this was that, from what I know (and that's not much), in the past a lot more information about the final specifications and benchmarks would have been known by now.
As ATI was able to keep Eyefinity a secret right up to its release date, this whole lockdown of information about the final specifications and benchmarks just feels like a knee-jerk reaction by Nvidia.
There was no question mark, so I took that as a rhetorical statement rather than a question. Sorry if that's not the case.
OK, I agree that we have very poor data on the final specifications and game benchmarks. There are too many rumors but nothing solid. But we do have lots of other information about Fermi, much more than we did about the 4000 and 5000 series. We even already have significant information about the architecture.

Quote:
I can see with hindsight I should have said "final specifications" and not just information.
Well, maybe I didn't use the right choice of words. But I'm referring to the pics Nvidia put up on their Facebook page. There is surely, IMO, also more information being leaked and/or intentionally distorted by both ATI and Nvidia.

Quote:
As for your statements.
"Nvidia has been anything but quiet about fermi with one sneak peek," Just as well then that they had the good sense to keep it down to just one sneak peek, eh.
" intentional leak," Well as there is no info or link about this I am not sure but I guess they could have.
They have demoed Fermi cards several times in the past months: SC09, PDXLAN15, CES2010.

Quote:
"demonstration," Hmm, I do love a good demo, especially if it's about an up-and-coming new card.
Lots of review sites would love to start talking about the HD 6000 series too, but ATI hasn't given us much info yet; that's my point. Nvidia had two (IIRC) architecture previews, with one as far back as September '09. The review sites published the information because they want visitors to their site, but the information was provided by Nvidia; it wasn't a leak.

Quote:
"and architecture review after another." As for there being lots of reviews on the net about Nvidia's new architecture,
I think that has a lot to do with the fact that there are lots of review sites on the net that are interested and want to make money.
I don't know what they were thinking, only what they did. In either scenario you present, they still wanted to show the card off rather than keep it under wraps.

Quote:
"They were so eager to start showing off fermi that they couldn't wait for a real card for their CEO to hold up in front of everyone." Well, I guess they didn't have a real one to show everyone. In saying that,
it's one thing to have a mock-up card and to let people know it's just a mock card.
If in fact the CEO was trying to give the impression that it was a real working card, then that's just plain dumb. I have no idea about this, I've only read hearsay.
I'm not sure what you mean. Do you mean that architecture information, board layout and cooling, and box design aren't under NDA but performance is? Well, then they aren't keeping a very tight lid on that information either. We have some numbers for Far Cry 2 and Unigine from Nvidia themselves and leaked numbers for Dirt 2.

Quote:
Anyhow, please correct me if I am wrong, but none of the above statements you have made came under Nvidia's NDA.
I'm willing to bet that someone already knows and has stated or hinted at those specifics. But we won't know until after release who was right and who was merely spreading disinformation.

Quote:
The whole point of this was that, from what I know (and that's not much), in the past a lot more information about the final specifications and benchmarks would have been known by now.
Given that the card has two DVI outputs and one mini-HDMI output, will it be able to drive two monitors (via the DVIs) and a TV (via the HDMI) at the same time?
High-end means the most expensive and best product, which is the 5970. A GTX 480 is an enthusiast product, but it's not high end.
I believe some people will say "BUT GTX480 ARE NVIDIAS BEST GPU SO IT HAVE TO BEING HIGHEND LOL???". Lol indeed. Fermi hasn't been released yet, so is the GTX 295, a product on par with a $400 ATI card, a high-end card just because it's the best Nvidia has to offer right now?
Common sense maybe?
Larger die = more expensive
Larger memory interface = higher complexity PCB = more expensive
Higher TDP = higher complexity PCB, better/more power components, larger heatsink = more expensive
More memory = more expensive
I said it before... GF100 will cost Nvidia roughly 2x as much to make as Cypress, and that is being conservative.
Speaking bluntly, if you keep describing your own experiences as purely anecdotal, then why do you debate with people about driver issues? It's like you're trying to assert that what you've experienced is general, and trying to influence other people with that 'fact.'
BOM is set by Nvidia...
Nvidia has contracts with TSMC, they pay TSMC money.
Nvidia sets the reference design for the cards, they purchase the components from other manufacturers and sell them in kits to AIBs.
All AIBs have to worry about is the final assembly, packaging, warranty/service and distribution of the cards.
They are definitely more expensive, but don't forget that they are salvage parts, so the yields are better. More memory, but slower AFAIK, which can also make a difference. And lastly, they don't have the same contract with TSMC as ATI does. They are producing more chips, so they can get a much better deal.
Die in between 280 & 285 dies = similar cost to last gen assuming die size alone is the sole factor
PCB less complex than their previous 285 with smaller memory bus
Lower binned ddr5 offsets costs of larger bus
I say we can all armchair-build a full product and assume manufacturing costs; it's not so hard...
Depends on the salvage part...
Correct with the TSMC contract, we don't know anything about either contracts, though I suspect AMD/ATi has been getting a decent discount as well over the last few years.
I also highly doubt Nvidia is seeing 60% yields right now...
40nm wafers are more expensive than 65/55nm wafers...
Lower binned GDDR5? It is clocked slower but I don't think it is a lower bin.
So many semiconductor pros here...
Lol, agreed. You wonder, if the peeps in this forum were in charge, whether we would have had any problems at all with the yield on Fermi silicon :rofl:
That comment is purist logic in its finest form, and truly smacks of GPU genius... I'm only joking, Lord EC :up:
Anyway, a blast from the past, remember this pic... seems like years ago now lol,
http://www.hardwaresphere.com/wp-con...engine-gpu.jpg
Here's the link to the so-called 'digital citizens' that were supposedly rendered from scratch, but were in fact actors in still shots animated to look like live motion, on Fermi of course,
http://www.onesize.nl/projects/playgrounds-titles-2009