I guess it's more 4890 CF outperforming 4870 CF.
We don't know what the X2 will be at this point. With the HD3870 X2 you got a higher clock speed than on a single HD3870.
With the HD4870 X2 we had the sideport, which was never used because it didn't boost performance enough and drew extra power.
There is a chance that the HD5870 X2 will have something that makes it more interesting than a HD5870 CF setup. But we won't know for sure until it's out.
The fastest way to get the best performance is to go for a HD5870 CF setup.
The X2 comes after the single HD5870, maybe one or two months later. And AMD doesn't need to rush it if the leaked results are right: the HD5870 will stand its ground vs the GTX295, the best nVidia has for now....
And I'm not going to draw any conclusions about the cooler. Yes, a smaller duct isn't optimal, but it could still cool well enough. I hope AMD isn't stupid and it won't make more noise than a HD4890.
5870x2 will be bottlenecked because each GPU will only be getting 8x PCIe. 5870 CF will be faster unless ATi can pull some magic with the sideport + PCIe switch chip.
No doubt, Stalker looks like a very average game in comparison to Crysis, yet it runs at only about 80% of Crysis's speed when there's sun lighting. The rest of the game runs much smoother than Crysis, although that's not saying much considering the graphical quality it presents.
Regarding microstuttering, whether it exists or not, I feel X2 solutions are inefficient because they don't scale linearly all the time. When combat gets really congested (which is where you need performance the most), the work isn't load-balanced, much like dual-core scaling, and a single overclocked card might surpass the dual card in that situation. You see this particularly in per-frame and minimum-framerate graphs.
Not only are the cards not the same (the 4890 is faster than the 4870), the 955 is a 3.2GHz CPU running against a 2.66GHz CPU. The 920 also takes zero effort to overclock, as you can hit as high as 3.8GHz without even tweaking the voltages. But sure, if you are a casual gamer, the AMD will be slightly faster. Although casual gamers probably won't have the best hardware out there anyway, so the framerate differences are moot... they aren't gonna be able to max settings like the enthusiast.
We don't know that for sure. We don't know what kind of bandwidth a single HD5870 will need. And we don't know what AMD does to minimize the overhead.
What about CF / SLI setups that work on 8x configs, like the P55 boards? They should have the same problems, if not bigger ones.
We can't say anything about it until the cards are out. Then we can do tests by overclocking the PCI-E bus to see if you get big performance gains. I think PCI-E bandwidth won't be a problem at 2560x1600, especially with AA and AF. Then you can use all the horsepower you can get.
Yes, we already do know this for sure. Read the P55 reviews: they're already suffering a 5-10% hit when using CF/SLI setups vs the X58. The 5870 will only use MORE PCIe bandwidth, not less, so the hit while using 8x PCIe will only be higher.
X58 + 5870 CF is the best setup because that is the only way to provide 16x PCIe to both cards while having the fastest CPU to fully utilize both cards.
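Rough numbers for context. This is just a sketch of the raw arithmetic behind the x16 vs x8 argument, using the PCIe 2.0 spec rate of 500 MB/s per lane per direction; whether a 5870 actually saturates either link is exactly what we don't know yet:

```python
# Peak one-way PCIe 2.0 bandwidth: 5 GT/s per lane with 8b/10b
# encoding works out to 500 MB/s per lane per direction.
MB_PER_LANE = 500

def link_bandwidth_gbs(lanes):
    """Peak one-way bandwidth of a PCIe 2.0 link in GB/s."""
    return lanes * MB_PER_LANE / 1000

print(f"x16 (X58 full slot)   : {link_bandwidth_gbs(16):.1f} GB/s")  # 8.0
print(f"x8  (P55 CF/SLI split): {link_bandwidth_gbs(8):.1f} GB/s")   # 4.0
```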
Not sure exactly what you are talking about HK...
4870x2 had a PEX8647 which has 3 ports and 48 pci-e 2.0 lanes...
16 lanes per GPU to the switch and 16 lanes from the switch to the pci-e connector.
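To illustrate what that switch layout buys you, here's a hypothetical sketch. The lane counts come from the PEX8647 description above; the worst-case assumption that both GPUs pull host traffic simultaneously is mine:

```python
# Hypothetical model of the 4870 X2's PEX8647 layout: 48 lanes over 3
# ports -- x16 to each GPU, x16 up to the slot. Each GPU sees a full x16
# to the switch, but host traffic has to share the single x16 uplink.
GB_PER_LANE = 0.5   # PCIe 2.0, per direction
GPU_PORT_LANES = 16
UPLINK_LANES = 16
NUM_GPUS = 2

peer_to_peer = GPU_PORT_LANES * GB_PER_LANE           # GPU<->GPU stays on the switch
host_per_gpu = UPLINK_LANES * GB_PER_LANE / NUM_GPUS  # both GPUs hitting host at once

print(f"GPU<->GPU via switch     : {peer_to_peer:.1f} GB/s")   # 8.0
print(f"Host per GPU (worst case): {host_per_gpu:.1f} GB/s")   # 4.0
```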
Seeing as only one card will actually need to output the frames, it will still perform roughly the same.
I have no idea why people are badmouthing a card when it delivers about the same performance as a 4870 X2. I expect the GT300 to do better than the GTX 295, a whole lot better.
Let's take an example: most likely the GTX 380 will go against the 5870 X2 and the GTX 360 against the 5870, while the GTX 390 (2x GT300) and the 5850 will be in a league of their own.... This is what I think may happen. The GT300 is rumored to be a beast with 512 SUs; that's more than the GTX 295, and its double-precision performance will most likely surpass the 5870, but not the 5870 X2.
http://www.fudzilla.com/content/view/15499/1/
Quote:
Hot trouble
We've heard that ATI is working on an X2 version of Radeon HD 5870 card but the biggest obstacle is the power and how to cool such a card.
We’ve learned that the current TDP for the X2 is 376W and that the company is working on this issue, as apparently they will have to slow the GPUs down by quite a lot to get rid of the heat.
Even if they use downclocked Radeon 5850 cores that run at 725MHz, the power goes down by only 36W (2x170W) to 340W. The hottest card from ATI so far was HD 4870 X2 that had TDP of 286W. To release a Radeon HD 5870 X2 card ATI should go down at least to 300W, especially due to thermal issues, but who knows, maybe ATI will launch 300W+ card.
We might be looking at the dawn of graphics cards with three or even four power connectors, as two might not be enough this time.
Mmm... I read it 5 minutes ago, but you know... that's from Fud!! I'd like to see some experts weigh in on this, because I can't explain to myself how two 5870 chips together need so much power while the single chip uses less power than the current generation. If someone has the answer... (don't tell me that it's Fud, I know that!) :shrug:
I'm a little worried that nvidia's offering won't be as strong as once thought after reading this http://www.gurufocus.com/news.php?id=69204 .
Selling off 781k shares of stock since 6/18/09, with over 260k of those being sold the day after the prerelease of the 5XXX, can't be a good sign... :shrug:
I'll still wait to see what the nvidia part brings to the table before I buy, though.
Start worrying about things you understand and less about things of which you have no idea how they work.
The fact of the matter is, when a CEO of a publicly owned and traded company wants to sell any of his/her stock of that company, he/she has to file an intent to exercise his/her option to sell that stock with the SEC months before the sale is to be completed. The sales being hysterically reported by all the "news" sites were put into motion probably two quarters ago.....like around January '09. And the timing of the sale is NOT up to the CEO, either. That's carried out by a third party with no ties to the stock's owner.
The sky is not falling. Sorry.
I don't believe it. 188W (single 5870 TDP) x 2 = 376W; that's waaaay too convenient. I mean really, he's saying they didn't save a single watt making a combined card? Anyway, IIRC, the TDP will have to be under 300W if they want it to be PCIe 1.1 compatible.
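Laying out the arithmetic everyone keeps quoting, for reference. All the wattages below are the rumoured figures from the posts above, not confirmed specs:

```python
# Rumoured figures only -- nothing here is a confirmed spec.
tdp_5870 = 188   # W, claimed single-5870 board power
tdp_5850 = 170   # W, claimed downclocked 725MHz 5850 core

print(2 * tdp_5870)   # 376 W -- exactly double, i.e. zero shared-board savings
print(2 * tdp_5850)   # 340 W -- Fudzilla's "only 36W less" 5850-core scenario
# PCIe power budget: 75 W slot + 75 W 6-pin + 150 W 8-pin
print(75 + 75 + 150)  # 300 W -- the ceiling an X2 would have to duck under
```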
Except that months ago he still knew where his company was heading ;). Simple fact of the matter is that a company CEO selling off that kind of volume of stock usually isn't a good sign.
I don't understand the NVIDIA stock news report that is floating around. All that is mentioned is that they sold stock; they don't mention that they also bought stock at the time...
http://www.reuters.com/finance/stock...?symbol=NVDA.O
LOL at all the drama coming from those who know nothing about the market and have taken the sale of some stock as a sign of NVIDIA's demise. I know f' all about the market but have the common sense to at least do some research. :welcome:
edit: The report indicates he sold 520,947....out of 20,234,002 that he owns. Yep, he is clearly bailing out...
Today is the 15th, are we REALLY not going to have any reviews before release?
So early consumers will be blind buying?
Please...
Are you saying he didn't know in July that ATi had some really nice hardware up their sleeve? He knew the release date as much as we did. Just because it can take 60 days, doesn't mean it always does.
We know how this crap works; it's supposed to look innocent, but it's all too obvious.
If nVidia doesn't have a card under $299 they are doomed!
well, you are right... i have little knowledge of stock trading, but when someone sees that the CEO sells off nearly $10 million in stock (without knowing that he has nearly $300 million in stock total), one may view it as a bad sign :shrug:
maybe it's not, but the timing is bad for it to have made the news. :shrug:
I understood, but my question was... if the 5870 is more efficient than the current generation, how could it be that the dual-chip card uses more power instead?? Either Fudzilla is joking as always (that could really be it), or they made crappy power regulation for the card... I mean... I don't know :D
I am thinking of buying a 5870 or 5850 for gaming instead of upgrading to a 1156 platform. Right now I have an E8400 E0 and a HD4870 512 MB. How much should I OC my E8400 so it won't bottleneck a 5870 or 5850? I use my PC for gaming, Internet browsing and watching movies. No programs that require the power of a quad. :)
*I use 1920x1080 resolution.
Simple. 188 W board power for HD 5870. Slap another GPU with 110 W TDP and you got 188 + 110 = 298 W board with two GPU's. :yepp:
Hehe, I am quite sure Nvidia architects have much, much more info and knowledge about what to expect from AMD's new chips than anyone on this board. This includes Huang. They know what AMD can, and most probably will, do with the chips. You can't surprise them.
yeah i really don't get it... there aren't even any pictures from the press events and cards? wth? so what, the press was flown around to special vip events but they weren't allowed to take pics, or they aren't allowed to show them yet? that's really retarded... :rolleyes:
why do you invite press to such an event and nobody else? because they spread info and reviews to thousands of people... that's what they do, for money, but mostly for fun... right... so you pick those people that CAN spread info and cause some pr, spend quite some money on press events... but then don't allow them to show anything? :stick:
what genius is behind this, please? :lol:
Buy a good air cooler and OC to around 3.8-4.0GHz; at this level you are around the performance of a 3.6-3.8GHz i3.
We got benchmarks, so early consumers are not blind buyers; the 5870 seems more or less like a 4870 X2 in performance. :up: There is still about a week until release; maybe we will get a review sooner than one may think.
lol, ya, they took Nvidia's solution for the GTX 295 (2x GTX 275 instead of 2x GTX 285, aka the Asus Mars :ROTF:) and applied it to ATI's card: a 5870 X2 with 5850 cores.
Well, just look around all the tech forums/news... mega hype with no solid numbers?... a marketing staff's dreamworld! :ROTF:
It's talked about everywhere, but hardly anything is really known (besides some features and specifications), which leads to even more fuss... till we finally have solid numbers.
Do u guys know if this is gonna be a hard launch or just a paper launch?
No more AMD vs Intel.
Found 5870 benchmarks here. Don't know how reliable they are.
http://www.techpowerup.com/103786/Fi...s_Surface.html
All indications point to a semi-paper launch. It seems there may be some parts here and there, but expect there to be real stock sometime in mid November. Think HD 4770 launch with fewer launch-date cards available. That should make for some interesting threads on Slickdeals with people hunting down the cards. However, since they will probably be as popular as Megan Fox performing in a strip show, if stock is found, people shouldn't debate whether they want it or not. ;)
Mid November? That sucks
Hey, I was wondering if the 5870 would be enough of an upgrade vs dual 4850s @ 685MHz?
Or should I wait it out for the 5870 X2? Bear in mind I have an Antec 300 case.
that's not the marketing staff's dreamworld... not if they are half-decent marketing people, at least...
the marketing staff's dreamworld is people not knowing the specs but thinking they know the product is awesome...
right now everybody has a good idea of the specs but nobody knows how good the product really is... it's exactly the opposite of what you'd want as a marketing employee...
the current situation is perfect for people who love to feel special for knowing something others don't, and having something others don't, you know, those 007 wannabe losers :D
seriously... either stfu, or do a hard launch, or a paper launch, but don't do a press-only paper launch followed by a public semi-paper launch followed by a proper actual hard launch... i don't see anybody really excited about the 5xxx; all this holding back of specs and proper info is just annoying people, not getting them excited... and by stretching the whole launch this much it really loses momentum... it should be one big bang followed by some more good pr... that's good marketing... not info slowly trickling out, one new bit a week for months, and then a few cards showing up, and then some more, and then some more... that's the most boring product launch i can think of...
the current situation is this:
there might be something good... in 2 weeks... or maybe 6 weeks... possibly for a good price... or maybe not...
yeah, that's def a reason to be excited :P
Digitimes had reported that TSMC will be increasing their output of 12 inch wafers for 40 nm to 40,000 per month from 30,000 per month.
It would seem they are already producing quite a bit. Unfortunately the article is no longer public, but I'll link the section.
http://www.digitimes.com/topic/bits+chips/
It was posted September 11th.
Comparison animation with G80's 16xAF HQ pattern:
http://forum.beyond3d.com/showpost.p...postcount=3179
Have you guys seen the movie Duplicity? haha, I'm sure Nvidia and ATI hire spies to work for the other company to steal info :ROTF:
What needs explanation? Nvidia being surprised by RV770's 800 SPs? No way. People working in the field for over 15 years surely know about pads and the limitations they impose on small chips. So no, they were aware that a 480 SP core would be pad limited if using a 256-bit bus; they had to expect more.
What could Nvidia have done better? They made as great a perf/mm² GPU as they could. They would have stripped the GPU down to the bare minimum in size if they could have without sacrificing performance. Actually GT200 would have been even better in terms of perf/mm² if they had used GDDR5, but for one reason or another they did not. Maybe costs/availability, or the high power consumption.
They did overprice their GPUs to milk the market. The only thing which probably surprised them was AMD's aggressive pricing. Not the RV770 chip itself. I remind you, they have people working there who have been designing GPU architectures for some 15 years, if not more. I refuse to believe that random chaps on forums could come up with better guesses/explanations than those people with their expertise.
Or a semi-hard launch coupled with months of searching high and low for stock replenishment. ;)
Yes, but remember that there are several more steps involved rather than just announcing additional wafer production. And even 30,000 per month is a paltry number in the grand scheme of things. ;)
The info isn't extrapolated. The HD 4770 situation of stock in the channel at launch followed by very few units available after that was used as an example.
Except the community is doing it to itself; and that's not AMD's fault. Just because (improperly) leaked information gives us a heads-up before a planned date doesn't mean AMD should fold and say "well alright guys, we'll move everything up a month, just for you." The September 10th event seemed to mainly showcase Eyefinity and the new technologies available with the 5xxx series, not so much the cards themselves. This is normal marketing procedure and the timeline makes sense - showcase some really cool technology, let the buzz build and many news outlets report on it, let it reach all streams (especially mainstream consumers), then have the products available in (less than) two weeks. Just because enthusiasts like ourselves obsess over it and knew all the information practically as soon as it left AMD's mouth doesn't mean there's anything wrong with AMD. The leaks and all of that are NOT what's supposed to happen - that's NOT part of the marketing plan and is an example of people jumping the gun before AMD is ready. These last two weeks are vital for dotting i's and crossing t's to make sure the launch goes smoothly. Now of course, if AMD doesn't release by the end of September then I'll agree - they really missed the peak of the buzz and a good launch.
Honestly though, some of you remind me of children waiting for Christmas to open up their presents "Can't I open my presents now? Please? Come on, please!? Mom, what if I open up one right now, just one, I'll leave the rest, please!? Pretty please?" :D
Quote:
Yes, but remember that there are several more steps involved rather than just announcing additional wafer production. And even 30,000 per month is a paltry number in the grand scheme of things. ;)
I'm certain it is, compared to when things are running at full capacity. Also, the number reserved for 40nm wafers will include all 40nm chips, not just the 58XX series. I'm pretty certain that availability will be greater than the 4770's, though. I can recall an article saying the 4770's run was extremely limited and short in duration.
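For a sense of scale, a very rough sketch of what those wafer numbers could mean. Only the wafer count comes from the Digitimes claim; the die size, yield and 58xx share are pure guesses on my part:

```python
import math

# Extremely rough -- die size, yield and the 58xx share of 40nm
# capacity are all assumptions, not reported figures.
wafers_per_month = 40_000
die_area_mm2 = 334        # assumed Cypress die size (rumoured figure)
yield_rate = 0.5          # guess
share_58xx = 0.25         # 40nm capacity also covers RV740, GT21x, etc.

wafer_area = math.pi * (300 / 2) ** 2    # 300mm wafer
candidates = wafer_area // die_area_mm2  # crude: ignores edge loss, scribe lines
chips = wafers_per_month * share_58xx * candidates * yield_rate

print(f"~{candidates:.0f} candidates/wafer, ~{chips:,.0f} good 58xx dies/month")
```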
/me wants :up:
HD4870 (mine):
http://i25.tinypic.com/23lxr9f.png
HD5870:
http://img43.imageshack.us/img43/7008/afresult.jpg
Where has this been stated? Jen-Hsun himself is an EE and has worked with chip architectures; I find it hard to believe that they would have been surprised. Maybe they did not expect the 800 SP core, but clearly they knew it was a possibility. And they knew it was the reality way sooner than anyone ever speculated about it on these forums.
Facts trump assumptions.
http://www.techpowerup.com/?68634
As I said, the thing that surprised them was the aggressive pricing of the cards, not the chip (RV770) itself really. Of course it is possible that they did not expect 800 SPs at first, but it was not like they would (or could, for that matter) have changed their own design because of it.
Quote:
We underestimated the price performance of our competitor’s most recent GPU, which led us to mis-position our fall lineup. The first step of our response was to reset our price to reflect competitive realities. Our action put us again in a strong competitive position but we took hard hits with respect to our overall GPU ASPs and ultimately to our gross margins. The price action was particularly difficult since we are just ramping 55-nanometer and the weak market resulted in taking longer than expected to work through our 65-nanometer inventory.
I am fairly sure that the engineers at Nvidia were able to predict the performance of an 800 SP chip well enough that they could price their cards accordingly. They decided to milk the market. Of course they did not want to admit it, though.
Seeing as how RV870 has had final silicon in production since about June/July, they are obviously going to have MORE units available than RV740. Not to mention the fact that RV740 also had a mobile bin. Let's not forget that TSMC actually shut down 40nm production for a week or two to try and fix the problems.
So... no, there will not be fewer than 40k cards at launch.
Nope, their underestimation of RV770's "price/performance" translates into a gross underestimation of the whole thing, as there's not much left of it to "estimate correctly". A price/performance estimate requires you to know 1. the performance, 2. the price.
Nobody had the faintest clue about it having 800 SPs until the day it was launched and its official specs were put up. "NVIDIA are smart, powerful, brilliant, telepathic and knew everything at ATI" is a frail argument, not supported by anything that went on between Q2~Q3 2008.
In all reality, we will see what happens in the months after launch. Maybe things will change and maybe they won't. Until that time, nothing is certain.
What we do know is that the performance will be there and it will undoubtedly be extremely popular. Whether production will keep up with demand has yet to be seen.
And for the record, the HD 4770 was never meant to be a limited-production card. It was just hampered by some serious yield issues on the part of TSMC coupled with high demand.
Why wouldn't anybody have a clue? Engineers at Nvidia surely know how many SPs you can fit on a chip while keeping it under the expected/assumed/rumoured die size. AMD planned a 480 SP chip, but such a design would be pad limited with the 256-bit MC they had to use, so they were able to put more SPs in. Nvidia might not have expected this to happen at first, but surely knew about it way before anyone was speculating about it on forums.
Regardless of whether they knew about RV770's specs or not, they would not have changed their own design anyway; the only thing they could do was adjust the pricing, and they decided to milk the market.
Great way to exaggerate. I was talking about Nvidia's chip architects and engineers having a clue about what RV770 might be, what rumours might be true, what to expect from it etc., not about the companies themselves. You really seem to want to turn this into NVIDIA vs ATI arguing, and in that case nothing can come of this. Thanks, I'm out.
Quote:
"NVIDIA are smart, powerful, brilliant, telepathic and knew everything at ATI" is a frail argument, not supported by anything that went on between Q2~Q3 2008.
Uh, first of all, maybe instead of throwing out terms such as 'pad limited', you should realize that 480 SPs would not have been pad limited on RV770. Seeing as how it was on the same 55nm as RV670, which also had a 256-bit memory bus but *gasp* 320 SPs.... how in the world would 480 SPs be pad limited?
Quote:
They did overprice their GPUs to milk the market. The only thing which probably surprised them was AMD's aggressive pricing. Not the RV770 chip itself. I remind you, they have people working there who have been designing GPU architectures for some 15 years, if not more. I refuse to believe that random chaps on forums could come up with better guesses/explanations than those people with their expertise.
The 'experts' built the Titanic as well. Just because they are experts at GPU design doesn't mean that they
a) Know exactly what their opposition is going to do
or
b) Know how that opposition will perform
If that were true, then R300 vs. NV30 never would have been a surprise
I can't remember the exact reason behind the pad limit then, if it wasn't the too-small die. There was a review or an article about it: RV770 was supposed to have 480 SPs but was pad limited, so they had "free space" to fill up and slapped 800 SPs on it. GDDR5 uses more pins, which caused the bigger die to be pad limited, as far as I can recall.
Quote:
The 'experts' built the Titanic as well. Just because they are experts at GPU design doesn't mean that they
a) Know exactly what their opposition is going to do
or
b) Know how that opposition will perform
If that were true, then R300 vs. NV30 never would have been a surprise
+1
That is true. Just because you are an expert, even for 15 or 30 years, does not give you the power to predict or know everything.
I mean, look at what happened to the unsinkable ship.
And you have to always remember there is always someone who knows more and better ways to make something than you, even if you have been in the business for more than 20+ years. Granted, he could also not know what you know.
And I could be wrong, but since Nvidia has been top dog, and they are all human, as we all are, we tend to underestimate or overestimate our knowledge and skills.
Aside from all of this, man, they need to give us some real benchies and such.
: (
Play nice folks...
Looks like they spent a little of all that extra power on IQ. Not a bad choice.
http://www.semiaccurate.com/2009/09/...-vs-xbox-360s/
"E OF THE FEATURES of the upcoming ATI Evergreen family, also known as the 5-series, is a tessellator. While this might be old news to graphics card enthusiasts, this time it really is different, mainly because Microsoft is finally backing the technology.
ATI has been putting tessellators"
Correct, GDDR3/GDDR5(GDDR4?) memory controllers, more pins for the GDDR5 due to power/ground, sideport taking up extra perimeter, UVD2 is slightly larger than the original. Let's not forget that they slightly increased the transistor density over RV670.
I also recall hearing that RV770 was pad limited in early stages.
Just from common sense I'd have to go with Calmatory on this one. You don't need to be ignorant of the other one's chip to be surprised by it, or to state that you were surprised by details of their launch plan ("surprise" has a lot of meanings -- good yields, high clocks and an early launch can be a surprise even if you know the arch very well).
Surely, AMD was surprised by Intel's Conroe, but it doesn't mean they haven't seen the engineering samples long before we did. The design cycles are so long, it really doesn't matter if you get to see the engineering samples, because you can't do much about your own product.
The anticipation!
GTX380 vs 5870(x2)
MUST KNOW!
GTX380 will prevent me from buying a card until I know how it performs... so it is doing its job (in a sense)
Or you can get one of the 7 they taped out! $10,000 please!! LOL :D
http://www.semiaccurate.com/2009/09/...eilds-under-2/
Quote:
THE SAGA of Nvidia's GT300 chip is a sad one that just took a turn for the painful when we heard about first silicon yields. Nvidia's execution has gone from bad to absent with low single digit yields.
A few weeks ago, we said that Nvidia was expecting first silicon back at the end of the week, the exact date was supposed to be Friday the 4th plus or minus a bit. The first bit of external evidence we saw that it happened was on the Northwood blog (translated here) and it was a day early, so props to NV for that. That lined up exactly with what we are told, but the number of good parts was off.
The translation, as we read it, says there were nine good samples that came back from TSMC from the first hot lot. That is below what several experts told us to expect, but in the ballpark. When we dug further, we got similar numbers, but they were so abysmal that we didn't believe it. Further digging confirmed the numbers again and again.
Before we go there though, lets talk about what a good die is in this case. When you get first silicon back, it almost always has bugs and problems. First silicon is meant to find those bugs and problems, so they can be fixed in succeeding steppings.
By 'good', we mean chips that have no process induced errors, and function as the engineers hoped they would. In other words not bug free, but no more errors than there were in the design. 'Good' in this sense might never power on, just that the things that came out of the oven were what was expected, no more, no less.
Several experts in semiconductor engineering, some who have overseen similar chips, were asked a couple of loaded questions: What is good yield for first silicon? What is good yield for a complex chip on a relatively new process? The answers ranged from a high of 50% to a low of 20% with a bunch of others clustered in the 30% range. Let's just call it one-third, plus or minus some.
The first hot lot of GT300s have 104 die candidates per wafer, with four wafers in the pod Nvidia got back a week and a half ago. There is another pod of four due back any day now, and that's it for the hot lots.
How many worked out of the (4 x 104) 416 candidates? Try 7. Yes, Northwood was hopelessly optimistic - Nvidia got only 7 chips back. Let me repeat that, out of 416 tries, it got 7 'good' chips back from the fab. Oh how it must yearn for the low estimate of 20%, talk about botched execution. To save you from having to find a calculator, that is (7 / 416 = .01682), rounded up, 1.7% yield.
Nvidia couldn't even hit 2%, an order of magnitude worse than the most pessimistic estimate. Ouch. No, just sad. So sad that Nvidia doesn't deserve mocking, things have gone from funny to pathetic.
At this point, unless there's a massive gain in yields on the second hot lot, there might not be enough chips to do a proper bring up and debug. This stunningly bad yield could delay the introduction of the chip, adding to the current pain and bleak roadmap. If there aren't enough 'good' parts from the second hot lot, that might require running another set, adding weeks to the total. Q1? Maybe not.
It is going to be very interesting to see what Nvidia shows off at 'Not Nvision' in a couple of weeks. Will it give the parts to the engineers to work on, or show them off as a PR stunt? We will know soon enough. In any case, the yields as they stand are sub-2%, and the status of the GT300 is far worse than we had ever imagined.
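The arithmetic in the article is easy to check. These are Charlie's claimed numbers, nothing more, and they're hotly disputed further down the thread:

```python
# Reproducing the article's claimed yield arithmetic verbatim.
candidates_per_wafer = 104
wafers = 4
good = 7

total = candidates_per_wafer * wafers
print(f"{good}/{total} = {good / total:.2%}")                         # 7/416 = 1.68%
print(f"at the experts' 20% low end: {int(total * 0.20)} good dies")  # ~83
```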
As largon wrote, GDDR5 has fewer pins
That said, that reason was in hindsight. If you recall, at the time before the actual release of the RV770, people were wondering whether ATI could even fit 2.5x more shaders in ~30% more space - pad limitations weren't an issue. For RV740 though... pad limiting was definitely an issue brought up
It was pad limited with the planned 8 clusters (so not 480 SPs; 640 SPs were designed at first). This also explains the 32 texture filtering units alongside 40 TUs: the original plan for RV770 was 32/32, not 32/40. All this because some brilliant engineers managed to redesign/optimize the SPUs so they took a lot less space than the original RV670 ones while still providing more capabilities.
If you dig through Beyond3D you will find Dave himself said this, but I can't be bothered to look for quotes now (too late :p:)
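For anyone lost on what "pad limited" means, here's an illustrative sketch. Every number in it is made up; it just shows why the pad count sets a floor on die size, since pads sit on the die edge and the edge scales with the square root of the area while logic scales with the area itself:

```python
def min_pad_limited_side_mm(pads, pitch_um):
    """Smallest square die whose perimeter can host the given pad count."""
    perimeter_mm = pads * pitch_um / 1000
    return perimeter_mm / 4

# Made-up figures purely for illustration: a wide memory bus plus
# power/ground can demand on the order of a thousand edge pads.
pads_needed = 1000
pad_pitch_um = 60

side = min_pad_limited_side_mm(pads_needed, pad_pitch_um)
print(f"side >= {side:.1f} mm  ->  area >= {side**2:.0f} mm^2")
# Logic shrinks with the process, the pad ring doesn't -- so if the logic
# needs less area than this floor, the leftover silicon is effectively
# free, which is the claim about RV770 picking up extra SIMDs above.
```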
Oh please :ROTF:
What are the die sizes, and the sources for them? AMD already has plenty of experience with TSMC's 40nm process, thanks to RV740.
More than likely the 2% yield number is FAR off, but it indicates that yields are LOW. Most probably the chip design by Nvidia has flaws, which they need to sort out and fix, then try again and see how it turns out. Or fix the issues and take a risk: mass produce without testing.
It's probably not true, but I have to correct one point: You can't assume that the yields are going to be about the same just based on being on the same process, especially when you don't have any idea what GT300's die size is, much less the design and procedure that Nvidia gives to TSMC to produce things (yes, it's on Nvidia to give them the layers etc.) nor the testing procedures behind it
Remember folks... increased die size doesn't increase defects linearly... it increases it exponentially
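That's the standard Poisson yield model, Y = exp(-D * A): for a fixed defect density D, yield falls off exponentially with die area A. A quick sketch with an arbitrary defect density (not a real TSMC 40nm figure) shows how steep the curve is:

```python
import math

# Poisson yield model. D is an arbitrary illustrative value.
D = 0.005  # defects per mm^2 (assumed)

def poisson_yield(area_mm2):
    return math.exp(-D * area_mm2)

for area in (140, 334, 500):   # small GPU / rumoured Cypress / big GPU
    print(f"{area:3d} mm^2 -> {poisson_yield(area):5.1%}")
# prints roughly 49.7%, 18.8%, 8.2% -- same process, very different yields
```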
The same GT218 that has poorer performance than the RV710 while consuming more power on a smaller process? :rofl:
You ought to look at this post by neliz on B3D, who has been 'in the know' for this past generation as he's called just about everything on the RV870:
http://forum.beyond3d.com/showpost.p...postcount=2151
If what a lot of B3D posters say are true, and a lot of them have been right time after time, then yes yields are very bad for Nvidia for a plethora of reasons from bad practices to design issues - probably not the 2% that Charlie is spouting but the issues *are* there
Quote: Originally Posted by LordEC911
#1- Charlie is NOT the source. A different source leaked some info and a very good informant even posted up the numbers Charlie was going to quote yesterday. This informant quoted similar troubles but slightly higher yields, still under 10%.
#2- G300 never was getting 20-25% yields; that was a completely made-up rumor based on RV740 yields.
G300 either had just taped out or had not even taped out 3 months ago...
#3- G300 is supposedly around G200 size, even though it is on 40nm vs 65nm.