No, it doesn't make sense. You draw no patterns or anything of substance other than stating when certain companies had failures.
And I wouldn't call HD2900XT a train wreck...
The 2900 XT was a good 3DMark card in its day.
I agree it wasn't that bad!
Sorry guys - this is all a little ridiculous to me.
First and foremost: yields are always kept on the down low (even in foundry situations). Information like this doesn't just float out of nowhere.
Secondly: this isn't so much NVIDIA's problem as it is TSMC's problem. If Nvidia's chips follow TSMC's provided design rules, then it is up to TSMC to deliver whatever minimum yields they guarantee. Obviously things are a little more complex than that (since chip designs do influence yields in a measurable way), but you have to remember that NVIDIA outsources 100% of their chip manufacturing. It isn't like Intel or (formerly) AMD/ATI, who are directly tied to their own yields.
Yes, this may hurt NVIDIA if they can't push out their flagship product in a timely manner - but the effect is going to be less harsh than you may expect. And I can assure you TSMC will do whatever it takes to appease NVIDIA as NVIDIA is their largest client: if they have to meet NVIDIA's supply quotas due to low yield they'll be happy to take a loss as necessary (especially given that the top end chips will be low volume products. Smaller chips invariably have higher yield given that yield is related to defects per unit area).
In short.. move along folks.
Just one thing...
Who revealed the most "crucial" part of G200's architecture first (well, actually it was the only correct pic until the launch :p: ) and several other pieces of 100% accurate info?... hmm
Anyway, I don't do c0cks...
Nonetheless, why should I even expose my sources or take the risk of exposing NDA'd or non-NDA'd stuff?
Until Nvidia shows their chips or gives some proof, they can't really do anything to make us distrust these rumours.
I believe the 2% figure is not realistic (hey, it's Charlie after all), but the yield is still supposedly very low. Otherwise, since the competition has revealed their cards already, why not give your own fans something for their confidence?
Yeah, but you don't go to TSMC, hand them the design and say "I wanna buy 100 fully functional chips with this design please"... you can buy wafers and TSMC will try their best at getting high yields... but there's no guarantee: you get the wafers, not working chips... so it's not TSMC's problem... if GT300 can't be made at commercially viable yields, Nvidia will suffer, not TSMC... TSMC doesn't need GT300, Nvidia does...
Below a certain chip size, having even smaller chips doesn't result in notably higher yield... there's a break point with chip sizes where a wafer goes from trash to usable... getting the chip size right is a tricky game, because you have to design the chip and set its size long before the process is available for test runs... at least reliable test runs. The smart way is to implement plenty of fuses and some redundancy so you don't have to hit the sweet chip-size spot exactly; then you can try to just barely hit an acceptable yield rate and wait for yields to improve to get your ship out of bad waters... that's what Nvidia traditionally did... knowing that TSMC's 40nm was supposed to be fixed but still has yield issues, I wouldn't be surprised if GT300 is above the acceptable-yield chip size, at least for now...
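To put some rough numbers on the die-size/yield relationship being discussed here, below is a minimal sketch using the classic Poisson yield model, Y = exp(-D*A). The defect density is an assumed, purely illustrative value, not a known figure for TSMC's 40nm process.
Code:
# Illustrative sketch only: classic Poisson yield model, Y = exp(-D * A).
# The defect density below is an assumed example value, NOT a real TSMC 40nm figure.
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dice expected to be defect-free under a Poisson defect model."""
    die_area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * die_area_cm2)

assumed_defect_density = 0.5  # defects/cm^2, hypothetical early-process value

for die_area in (100, 250, 400, 530):  # mm^2
    y = poisson_yield(die_area, assumed_defect_density)
    print(f"{die_area} mm^2 -> ~{y:.0%} defect-free dice")

With the same assumed defect density, a ~530 mm² die yields only a small fraction of what a ~100 mm² die does, which is the break-point effect described above; fuses and redundancy soften this in practice.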
I'm not convinced yields are that bad... but if they are, it's a problem for Nvidia... but yes, not a major one... Nvidia could survive with GT200 for another year without losing too much money, I think... Q3-Q4 2010 is where it gets critical... if they don't have a really nice product out by then it could break their neck... but that's plenty of time, and I'm sure they have more planned than just GT300... by that time TSMC and GF should have 28nm done, so even if Nvidia never gets proper 40nm parts out, they still have a second chance with 28nm...
Does that mean GT200 wouldn't be too big? :) Name me 5 chips bigger than GT200. I will name you 500 chips smaller than GT200 in return.
Source?
Quote:
G300 is smaller than GT200 but larger than GT200b.
Intel's fabs are somewhat ahead of TSMC's, hence they can produce bigger chips somewhat more easily, so a direct comparison is somewhat flawed. Though it holds some value. :)
Quote:
Larrabee is supposed to be bigger than GT200.
IF this is causing problems for Nvidia, I am sure they have a backup plan. Get GT300 working, slash 200-series prices, do some PR stuff, rename, driver tricks, rename some more, more PR stuff. It is amazing how much Nvidia can do (and has done, as history suggests) without having new chips to show.
:exclaim:
The following is the most sensible text about this matter (posted earlier in this thread, from Anandtech):
Quote:
Can it be that bad? Sure, it can always be zero.
Let's just assume ALL of Charlie's numbers and sources are 100% correct... it's four wafers.
Getting low yields on four wafers is not exactly uncommon. And it especially comes as no surprise for a hot lot as typically hot lots have nearly all the inline inspection metrology steps skipped in order to reduce the cycle-time all the more.
Those inline inspections are present in the flow for standard priority wip for a reason, related to both yield (reworks and cleanups) as well as cost reduction (eliminate known dead wip earlier in the flow).
I really pity anyone who is wasting their time attempting to extrapolate the future health of an entire product lineup based on tentative results from four hot-lotted wafers. That's not a put-down of anyone who is actually doing just that, including Charlie; it's an honest, empathetic response I have for them, because they really are wasting their time chasing after something with error bars so wide they can't see the ends of the whiskers from where they stand at the moment.
Now if we were talking about results averaged from say 6-8 lots and a minimum of 100-200 wafers run through the fab at standard priority (i.e. with all the standard yield enhancement options at play), then I'd be more inclined to start divining something from the remnants of the tea leaves here.
But just four wafers? Much ado about nothing at the moment, even IF all of the claimed details themselves are true.
This would have been far more interesting had the yields on those four wafers come back at 60% or 80%; again, not that such yield numbers could be used to say anything about the average or the stdev of the yield distribution, but it would speak to process capability, and where there is proven capability there is an established pathway to moving the mean of the distribution into that yield territory.
But getting zero, or near-zero, yield is the so-called trivial result, it says almost nothing about process yield to get four wafers at zero yield. All it takes is one poorly performing machine during one process step and you get four wafers with yield killing particles spewed on them.
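A rough way to see the "error bars" point in the quote above: simulate wafer-to-wafer yield variation and compare what a 4-wafer sample vs. a ~150-wafer sample tells you about the true mean. The distribution parameters below are invented purely for illustration and have nothing to do with real GT300 data.
Code:
# Illustration only: why 4 wafers say very little about mean yield.
# The yield distribution below is made up for the example, not real data.
import random
import statistics

random.seed(42)

def sample_mean_yield(n_wafers, true_mean=0.30, spread=0.25):
    """Average yield over n wafers, each drawn from a wide, clipped distribution."""
    yields = [min(1.0, max(0.0, random.gauss(true_mean, spread))) for _ in range(n_wafers)]
    return statistics.mean(yields)

four_wafer_estimates = [sample_mean_yield(4) for _ in range(10)]
big_sample_estimates = [sample_mean_yield(150) for _ in range(10)]

print("4-wafer estimates:  ", [f"{y:.0%}" for y in four_wafer_estimates])
print("150-wafer estimates:", [f"{y:.0%}" for y in big_sample_estimates])

The 4-wafer estimates swing all over the place while the 150-wafer estimates cluster near the assumed true mean, which is the whole point about the whiskers.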
Sounds the most plausible, leaving any conjecture moot, as they set up the litho for the die, refine their angles, etc.
But will Nvidia have a $299 DX11 part...? That is the question. If not, they lose!
Extreme high-end cards are awesome, but only a few people ever buy those.
Xmas is coming. If people are going to upgrade to a new system, they will do it for Xmas and Windows 7, and Microsoft is going to make sure the whole world knows about it come October 22nd.
So, after millions of Xmas shoppers buy new computers with ATi DX11 cards... who's left waiting for Nvidia re-branders and $499 GT300s?
Those 100k people who waited for nVidia DX11 cards to go on sale in January aren't enough to bring in profits for Nvidia!
Sigh, more of Charlie's faerie tales...
Can someone close Fudzilla?
http://www.fudzilla.com/content/view/15535/1/
NV is already about three months behind; if it's really that bad they had better scrap GT300, lower the 2xx model prices and develop something new from the ground up.
If you believe Fudzilla, GT300 is already built "from the ground up" ;)
http://www.fudzilla.com/content/view/15535/1/
Quote:
We can only confirm that GT300 is not a GT200 in 40nm with DirectX 11 support. It’s a brand new chip that was designed almost entirely from the ground up. Industry sources believe that this is the biggest change since G80 was launched and that you can expect such level of innovation and change.
Makes me wonder if it's really a completely new chip. Or: what does it take for a chip to be "completely new"? If you think about the steps from G80 -> GT200 -> GT300, all have been completely new chips (if the rumor @ Fud holds water), whereas ATI/AMD focused on improving their older chips and adding new features etc. (or does this count as a "completely new chip" as well? :rolleyes:).
The reason I'm thinking about that is because I'm wondering why Nvidia stopped improving existing chips (like it was from e.g. 6800 to 7800) and aims for completely new ones instead. That has to be way more expensive... or maybe Nvidia wasn't able to get more out of their current chips? I don't know...
You mean you're not sure? It's pretty obvious, isn't it?
On Nvidia's recent roadmaps GT300 is a Q1 part, so there is no way they will have any DX11 part for Christmas, i.e. in late November, which is when shops and distributors stock up for Christmas.
Even if it were out by then, it certainly wouldn't be $299...
But that doesn't mean Nvidia loses... they lose a lot of potential sales... but is it really that much? What games do you need a card faster than a 285 for these days? Unless you're on a 30" monitor, that number can be counted on one hand, especially when you consider which of those demanding games would actually interest any one person; nobody is going to want to play all of them...
Nobody needs or can make any use of DX11 for now and for the next couple of months, and even then it'll be more like a patched-on tech demo than a really notable difference...
What Nvidia really needs is a cheaper GT200; they don't NEED DX11, and they don't need a monster-performance GT300 chip... I hope Nvidia realizes this as well and doesn't put all their efforts into GT300 :D
GT300 will most likely do better than a 5870; hell, the GT300 part isn't supposed to be priced around the 5870's cost anyway and will be higher, so it would most likely perform better too.
Holy crap, that's bad news for all of us.
What do you base that thought on? It is pretty much impossible to get absolute data on how densely the transistors can be manufactured unless the exact same chip is manufactured in every manufacturer's fab.
As was said, the transistor density of a core depends very much on what kind of parts the core has. Usually simple stuff can be packed more densely, e.g. SRAM vs. core logic. Also, different parts of a core won't shrink equally well. As far as I know, memory controllers, for example, don't shrink as well as the huge SRAM arrays of cache. So yeah, complex stuff has low density and simple stuff has high density.
I'm sure Hans could be a big help here.
NVDA stock is down 3% in 2 hours of trading.
glad I shorted! :P
Wow, nice source. Wasn't this Charlie guy punched by an Nvidia rep (physically punched)? No wonder he's talking out of his ***.
Why are people putting so much faith in an article written by Charlie on a site called Semi Accurate?
It's a fact that Charlie isn't an Nvidia fan, tells rumors as if they were facts, and manages to make everything sound dramatic.
Just stop clicking on links to articles written by him, and if we're lucky the guy will stop getting work.
Besides, if GT300 flops Nvidia won't be happy, but they won't go under either.
They should have enough cash in the bank to handle another bad year or two.
@Unoid.
Unless you shorted a :banana::banana::banana::banana:load of Nvidia stock, I doubt you made a lot on it.
What was Nvidia stock at before the drop? 16 US dollars?
AMD/ATI ftw! Back on the top of the hill!
Haha, no, not as far as I know :D
1. Charlie didn't like Nvidia before that event at last year's Computex.
2. Derek Perez threw some muffin or sandwich at Charlie, who spilled a cup of hot coffee over him; after that Perez wanted to attack Charlie but was held back, or didn't do it... don't know the full details...
I would, if XFX gave one of those sweet PSUs free with the XFX 295 :yepp:
What I heard was that Charlie disclosed something Nvidia did not want him to and did not take it off the site, and after that Nvidia cut him off from the family. It was after this, when Charlie demanded justice from Derek Perez, that things went the way you said with the muffin, etc... (don't know the full story either)
I wouldn't be surprised if the yields were low; however, this is Charlie and his SEMI-accurate, make-believe, fairytale anti-nVidia nonsense.
In reality, if it's an ATi story, knock off 10%, and if it's an nVidia story, add 10%.
Yields are probably awful... in fact woeful, but higher than 2%.
The sooner people stop quoting Le Fud, Teh Inq and this new Semi-accurate nonsense, the better.
John
Hilarious. I had to google for this; the true story is even better :rofl::ROTF:
While we're at it let me quote from the big man himself:
Here you have the full scoop right here, at XS :D Derek Perez Muscle Boy
From that thread :) :
Quote:
Disclaimer: This is a joke. It did not really happen. Though Derek did invite Charlie for coffee.
Nvidia got bad samples because the architecture was too complicated? I think Nvidia and ATI share the same manufacturer, right?
I honestly hope that GT300 will be close to 3x the GTX 285's level of performance and possibly 40-50% ahead of the HD 5870. Anything less would just be plain disappointing; how else can you justify barely any decrease in die size while switching from 65nm to 40nm?
There is no way GT300 will bring 3x GTX 285 performance. IMO it's impossible to achieve with this generation. Moreover, it will be at best 10-20% faster than RV870.
GT300 fails if its price/performance ratio is worse at all than the 5870 X2's.
Basically, as long as it sits between the 5870 and 5870 X2 and its price matches... it's fine... but as soon as GT300 = 5870 X2 in price but only 1.5x a 5870 in performance... it's game over again...
GT300 needs to be epic for it to be a success... while that may happen, I'm becoming more skeptical every day we have ABSOLUTELY NO INFORMATION about it... we already have an inkling about the 5870... but the GT300... not even a real die size.
Yes we do. Don't ask for the source, because it was a while ago and I don't remember, but the die size is supposed to be in between GT200 and GT200b. I believe there was actually an exact number, but again I don't remember.
And it's supposed to have 512 MIMD SPs.
I think we need to start seeing much better multi-GPU scaling, with things like shared memory. The performance king is not the best-selling card, and may not be the most profitable card, but somehow it's the only thing most people consider when trying to determine a winner.
If we could get GPUs down to $100 a piece (with the bad-yield parts selling for $50) and you just buy the 1x, 2x, or 4x variant and fit up to 4 of them into your PC, you could use the same core and sell it for $50-400. That's the direction I hope they are aiming for. Trying to make a GPU bigger than 300mm² is, I think, an utter waste when good design and drivers can probably save millions in development cost and silicon waste. (But I don't have a degree in chip engineering and everything I said has the potential to be 100% impossible.)
Last time that happened from nVidia's side, do you know what happened? G80.
And only a week or so before the effective launch, the enthusiast community knew what to expect, and even then we were surprised by it.
I am not saying that G300 will be revolutionary (performance wise) as G80 was, but it might be.
Patience is what you have to have.
G92 9800GTX ---> G200 GTX 280 is like a 3x speed increase. :rofl:
Just run Crysis and set everything to Enthusiast at 1920x1200; G92 tri-SLI will crumble in the face of a GTX 280. :ROTF: Oh wait, let's make it a bit more fair and use an overclocked 280 version, maybe the Evga FTW or BFG OCXE? Since the 9800 GTX was nothing more than an overclocked 8800 GTS 512 anyway :ROTF:
Honestly, it's really hard to compare the performance of cards from different generations. Also, how do you compare SLI/CF setups vs. next-gen single cards? There is always a performance loss due to the software overhead of SLI/CF, so you will rarely experience the setup's true potential. Do you use games or benchmarks, or just go by pure theoretical performance numbers (flops etc.)? Just by specs alone I can confidently say that G300 will pull ahead of a 285 SLI setup easily because of the 2x shaders and the boost in bandwidth and core clocks.
Also, RV870 (5870) is more like 2.26~2.3x faster than RV770 (4870), mostly thanks to clock bumps. If anything, the 5870 is like perfect HD 4890 crossfire.
G200 B2 or B3? Because if it's bigger than the 65nm B2... mother of god.
I wonder if Nvidia looked at the RV770 vs G200 debacle the same way most of us did. I think what Nvidia really took home was not that they made G200 way too big, but that they DID NOT make it BIG ENOUGH, thus failing to create a significant performance lead over RV770. I guess the way Nvidia thinks is that if they can make a GPU so powerful that the competition can barely compare, the consumer will care less about the technicalities.
I just think NVIDIA should, sooner rather than later, drop this silly tactic of focusing mainly on the high end. It's not a healthy business in the long run to rely on releasing a big but fast card, as there are many disadvantages that come with this: mainly development time & cost, greater risk of yield issues, cooling & power limitation issues, and, not to mention, far fewer customers in this price range than in the one ATI is focusing on.
NVIDIA should try to offer a great top-to-bottom product range based on a new arch. If they manage to pull off a successful series, that would be disastrous for ATI's current tactics: ATI would have to lower prices greatly and possibly still not get any sales, and it would start chewing on their market share, which atm is what ATI is most interested in.
Perhaps the biggest problem right now is that even if their big fat chip is fast, ATI will still get sales, as NVIDIA has nothing that competes in the same price range. I just don't see the logic behind NVIDIA atm; hopefully the HD 5xxx series will teach 'em.
This is what I'd personally do for the next arch after GT300:
Development: focus on finding a more efficient alternative to stream processors; I think it's about time to move on from them now. Or at least find ways to make them more efficient if there's no alternative. Also spend some time looking into the possibilities of multithreading and how to make multicore CPUs cooperate better with GPUs.
I'd also make the goal clear to the engineers, something like a strict 190W TDP and a 50% performance increase over last gen. I'd put slightly more focus on power consumption than performance in the generation after GT300; the wave after that can focus more on performance.
Schedule: release a low-end segment of the new arch first (think "GT460" or whatever), like 3-4 months before the mid/high end (GT400), as a teaser. Good for getting recognition, for starting the marketing of the new features earlier, and for simply testing the waters with the new arch before going to the bigger and more complex cards.
The speculation about under 2% is just getting ridiculous.
G200A2, the 65nm core.
G200B2/3 are both 55nm cores; B2 was used for early GTX 285 samples but mainly for Quadro cards.
I must be, because I am really confused... :confused:
He was listing silicon bigger than G200, which I thought you were doing as well. :shrug:
The only other thing would be if you were listing a few of the 500 that are smaller... ;)
Let's just assume G300 is more than 2x as fast as G200... then what? What are you gonna do with all that GPU power? There are no new demanding games coming out for a while; the only one I know of is Crysis 2...
Unless you have a 30" display, what would you want that much GPU power for?
wha? :eh:
Yeah, that's what I don't get either... was their strategy really to have G200 and G92 only? One high-end chip, and use the last gen's high end as mainstream? That's a terrible strategy leaving wide gaps... I really hope they already learned their lesson with G200 and that the rumors about several cut-down G300 parts coming out soon after G300 are true...
It's getting hard to justify buying a super high-end VGA these days unless you're benching... same for CPUs... there is not that much you gain from going mainstream to high end these days...
I think it's true, but it's for only 4 hot-lot wafers... so it doesn't really mean too much... what it means is that G300 is not ready for mass production, and even a press launch at the end of this year seems unlikely. How much time and work it'll take to get yields up is really unclear... it's like putting together a PC in 3 minutes in a big rush, and then it hangs on bootup... it doesn't work, but you can't tell how long it'll take to fix... could be a very simple thing you overlooked since you rushed it... or it could mean one part is damaged or you need lots of debugging to get it running... kinda similar here, I think...
Folding@home :wtf: But I agree with you. Consoles are very far behind and it would be overkill to buy these things. Consoles are like Wiis now. :ROTF:
Yep, the only thing to beat GT200a is ULSI :rofl: which would take 100 years to successfully make a microprocessor. I was hinting that no one makes bigger dies than nV. If you think all the way back to the Pentium, it was almost 300mm² on a 200mm wafer, so that is pretty close.
Maybe that's why Nvidia partners are still churning out and promoting new models of the GTX 275/285 and 295 with custom PCBs etc.?
Assuming there are 104 dies on a wafer as Charlie said, then according to my die size calculator the die area will be about 534 mm^2 if the die happened to be almost perfectly square (~23.1mm x ~23.1mm). That would fit what you read in the article with the size being between GT200 and GT200b.
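For anyone who wants to sanity-check that number, here is a rough sketch using a common dies-per-wafer approximation (gross die candidates only, ignoring scribe lines, edge exclusion and defects); it's a standard textbook estimate, not the poster's exact calculator.
Code:
# Sketch: common dies-per-wafer approximation for a 300 mm wafer.
# DPW ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A); ignores scribe lines, edge exclusion, defects.
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 23.1 * 23.1  # ~534 mm^2, the near-square die guessed above
print(f"{die_area:.0f} mm^2 -> ~{dies_per_wafer(die_area):.0f} candidate dice per 300 mm wafer")

With a ~534 mm² die this comes out to roughly 104 candidate dice per 300 mm wafer, consistent with the figure in Charlie's article.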
That's funny, I was just telling someone that they must be having trouble with the 300 series, because there has been no news about it that I have seen lately.
Could have sworn they demoed a Tesla card based on the 300 last December though...
Yes, and they are all by Intel, which is apparently the world leader in semiconductor manufacturing, making the chips at their own fabs. Besides, those chips have much more cache (less prone to defects) than GT200, which makes things look even worse for GT200.
No matter how the situation is twisted or folded, GT200 is HUGE. Smaller dies are always better, and such huge dies are just bad, bad and bad.
Must have been GT200-based, definitely... they usually release Tesla and professional graphics a couple of months after the end-user cards...
Comparing the GTX 285 vs the GTS 250 in Crysis and Crysis Warhead doesn't make sense, because both are in the unplayable or barely playable range IMO; even at 1280x1024 a GTX 285 only pulls around 20/30 fps (min/avg), and the GTX 285 is only 50-100% faster here...
Add 10 fps and you've got the STALKER Clear Sky results, and like above, the GTX 285 is 50-100% faster than the GTS 250.
And now for GTA4, lol... a 9800 GTX+ does about the same as a GTX 280 here... ouch... :D
http://www.pcgameshardware.com/aid,6...eviews/?page=2
G200 3x as fast as G92, my 4ss :P
http://img171.imageshack.us/img171/9...a4gpus1680.png
And that's with a quad-core @ 3.33GHz, which is where CPUs stop scaling in GTA4... so don't call it CPU-limited :P
http://www.pcgameshardware.de/aid,66...l/Test/?page=2
http://img171.imageshack.us/img171/6...enches1280.png
I think that the focus they put on high-end chips is well placed. High-end products, particularly ones that outdo the competition, garner attention. Attention is visibility, and visibility is vital to marketing. The performance crown is an important aspect of the business.
Where nVidia fails is that they continue with this strategy throughout the life cycle of a product. Once a performance part is released, they should then focus on refining, limiting, and segmenting the arch to fit into different markets. Go for the high end first to prove they've got a decent part and they're still relevant, then scale it back to keep sales up in all the market segments, which becomes easier once yields improve. Lower-end parts need a great deal of supply capacity, because they obviously apply to a greater market segment than the high-end chips and thus have more demand. When you're dealing with a new architecture, yields of working chips may not be too good... as it looks like is happening with GT300. If nVidia focuses on high-end parts first, where price premiums impact decision making to a lesser extent, then lower quantities and thus higher prices are much more acceptable. It's the low-mid range that would suffer horribly from inflated prices and non-existent availability. It's cool for a person to have a high-end, kickass part that's only available in limited quantities. It's just frustrating to have a mid-range card that you were charged a premium for because the company can't get their :banana::banana::banana::banana: together.
nVidia TRIES to do this, but it's a too little, too late situation. By the time they get around to it, the competition *cough*ATI*cough* has already extended its arm into the other market segments. nVidia is left trying to rally its troops and get products out when the rest of the market is already into a full-on invasion. Look what happened with the 8-series cards. Amazing performance for the time, compared to ATI's offerings, but they focused on the high-end segment for too long. The 8800GTS and 8800GTX were the only worthwhile cards for a long time. Their lower-end parts were overpriced for the performance. Hell, a 7600GT was still a competitive part compared to the similarly priced 8xxx-series cards in those days. ATI's 2xxx series was decent enough, the 3xxx series was whatever it was, a DX10.1 update and die shrink IIRC, but they didn't really get off their asses until the 4xxx series. By then nVidia had some decent midrange products out, but the G200 cards were still in the high-end stage, and the only midrange offerings they had to speak of were 8-series cards, either branded as such or under the 9-series rename, to keep up appearances that they were actually doing something with their time. That gave ATI time to catch up, grab some market share, and get products out that could compete with or beat the G200 cards, all while maintaining and building a market share in the mid-low end.
nVidia has the right theory going of releasing high-end parts first, then refining them to fit other market segments while simultaneously ramping up clocks and performance on the high end as yields improve; their problem is implementation. ATI has a similar but different strategy, where they've got their high-end offerings, but from experience they seem to be putting more thought into their mid-low-end products, where nVidia's presence is lacking. The difference is that they can actually put their plan into action. It seems like nVidia just kinda releases the high-end cards and keeps hyping them to death, while lackadaisically developing their lower-end offerings, which end up failing spectacularly, all while they're blind to the fact that ATI already has competitive solutions available in all segments.
Those GTA4 benchmarks look like the result of bad optimization more than hardware capability. The scaling is really bad.
Yeah, GTA4 is one of the worst pieces of game code for sure...
It's not like previous GTA games delivered on the hardware they demanded to run, though... GTA has always been very inefficient... only Battlefield has been worse in my experience...
Doesn't GTA4 still scale with more than 6GB of memory?
And it scales above 3GHz of Intel CPU power... :stick:
Most games scale up to what, 2.6GHz of CPU power and 3GB of memory?
The best eye candy per hardware requirement game I've played so far is Far Cry 2, closely followed by Mass Effect and Gears of War...
Very good point. Although Nvidia and ATI are always in competition on performance, it's as if Nvidia is set on holding the top spot for performance. And while Nvidia does that, ATI is flooding the midrange with well-priced GPUs. Nvidia is always late to the punch there. And everyone knows most of the GPU market is midrange buyers.
Just my .02
I don't see why this is so surprising. It's the same kind of big chip as G80 and GT200 (or even bigger), and it's based on a process nowhere near as solid and proven as 90nm or 65/55nm were. I'd say it was only to be expected.
But I'd take it with a grain of salt, because we don't know what they really meant by only 7 chips counting as a success. Does that mean only seven came out uncorrupted, or only seven cleared a bar that was raised too high by some CEO? I'd rather expect something like 20% yield, but at much lower clocks than expected. Did they announce ~750MHz, while they only reach 650MHz on the much smaller GT200b on an 18-month-old process? I'd say the bar has been set too optimistically high for GT300.
Yep, I know they only have 128 TMUs and 64 RBEs nowadays, so with RV870 that advantage over ATi has totally melted away, especially compared to the glorious days of G80's 64:24 @ 680MHz vs. the late R600's pitiful 16:16 @ 740MHz. But still, even at only 600MHz (I think they could reach at least 20% yield at that clock :D), 128 TMUs give 76.8 GTex/s vs. RV870's 80 TMUs (@ 850MHz), which reach only 68 GTex/s. But on the other hand, do they really think they could make a single chip that will make ATi crawl again without some radically new architecture?!
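The texel fill-rate numbers in the post above are just units x clock; here's a quick sketch of that arithmetic. Note the 600MHz GT300 clock is the poster's hypothetical figure, not a confirmed spec.
Code:
# Sketch of the texel fill-rate arithmetic above: fill rate = TMUs * core clock.
# The 600 MHz GT300 clock is the poster's hypothetical figure, not a confirmed spec.
def gtexels_per_s(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0  # GTexels/s

print(f"GT300 (hypothetical 128 TMUs @ 600 MHz): {gtexels_per_s(128, 600):.1f} GTex/s")  # 76.8
print(f"RV870 / HD 5870 (80 TMUs @ 850 MHz):     {gtexels_per_s(80, 850):.1f} GTex/s")   # 68.0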
:(:(
OTOH... they could make it cheaper, because in some parts of the world there are no rebates and GPUs stay overpriced even while prices steadily decline elsewhere, even when they're phased out.
An HD 5850 at $249 and an HD 5870 (basic 1GB) at $299 would be fair even for predatory manufacturers. As it is, with even Juniper overpriced at $199, they simply rip us off.
What a helluva "new architecture", if any of this is true?! It's yet another redesigned chip with G70 heritage, which we've been watching for too long now.
We can see now why they're all mumbling about GPGPU: maybe all they redesigned and upgraded was better and more advanced CUDA support, as if we really care about that, and for that reason we have to have a huge, overburdened chip that in the end has fewer FLOPS than a much smaller solution from ATi. It's more than disappointing. Why do they fight the CUDA front with gamers'/CG money just so somebody can crunch numbers on the same chip?! Well, that question wouldn't be so pressing if they really had redesigned their GPGPU approach as they claim.
What... a GTX 280 OC / GTX 285 pulls 25fps minimum easily at 1680x1050 with everything at Enthusiast / Very High, no AA. I owned a really good oc'ing 280 a while back, and at 738/1512/1200 the card could push 1920x1200 all Enthusiast while still maintaining an average fps above 25 and a minimum of 21-22.
:confused:
That's exactly what I'm talking about. With the latest patch, a Q9650 @ 4.3GHz+ and an 8800 Ultra (702/1728/1150), I'd be lucky to hit 23-24 fps in some of the bad spots of the city @ 1680x1050 with stuff maxed to High / Very High and shadows at 8.
With a 4870 X2 I can play at 1920x1200 at 30fps+ easily... I'd really love to know how this site benchmarked the game, because from personal experience actually playing it... the graphics card still matters even in this game.
Yeah, but you said G200 is 3x as fast as G92...
So that would mean you expect an overclocked GTS 250 in your system with the same settings to only hit 7fps minimum and an average of only 9fps? That's definitely not the case :P
And those fps are really too low to enjoy Crysis IMO... it's... OK... but really not nice...
No idea... they don't mention too many details on settings, unfortunately... even a 30fps minimum sucks though... really crappy coding/porting of the game :shakes:
But again, a G200 isn't 3x as fast as a G92... not even when you only look at min fps...
Regarding Nvidia's strategy...
Targeting the high end was fine especially since ATI was doing the same. In fact, the mid-range mainstream cards were largely neglected throughout the course of history, for those of us who remember that far back.
In fact, it was usually the exception, not the norm, to have a great mid-range card. In particular, the 6600s and the X1950 Pro stood out to me as great bang/buck cards - but they were part of a whole flood of bad cards. Remember how craptacular the 8400s/8600s were, as well as the 2400s/2600s, just 2 1/2 years ago?
Of course, we've been spoiled lately, but Nvidia's strategy has definitely started to hurt in the $150-$300 range. The GT200 release was helped by having a G92 lineup underneath it, but the RV770s easily slotted into the $200-300 range, especially since the GT200s priced themselves too high while G92 performance couldn't match RV770 as easily.
I will say though that I like ATI's new approach of releasing a whole slew of cards at once - meaning we can get a 58xx, 56xx, and 54xx all within a short period of time, rather than the old approach they had (and Nvidia still has) of releasing the flagships, then waiting 6 months to a year+ for the mid-range to show up.
Definitely agree. I always bought my cards around the $200 price range for as long as I've been buying cards (since Matrox days). It seems no coincidence that when one brand falls short in the mid-high range that I don't buy at that time. It's fun to see who has the top dog but for me the real competition has always been the 6600GT, the 9800PRO, etc.
Man, all in all I just want to know when the GT300 series will be out. I just wish they could be out before the end of this year. I'm sure it's not going to happen, and if that's the case I'll probably just go ATI for this round. I'm not biased towards either company, I just go by supply and demand. ATI is ahead of the game right now. Likewise, I'm sure the GT300 will be faster clock for clock, but I don't want to wait another 6+ months.
Charity, charity, everybody puts up that shield :clap: (Does anybody even know what charity means these days?) It's not charity, just common sense about ATi's MSRP. Stores could easily raise those prices if demand overcame supply, so it's just common sense for ATi. But to hell with reason when there are enough unreasonable buyers :D
Is it really that much...? YES!
It's significant.
Do you understand that people don't buy computers every month, like you seem to think...? At best, people make this plunge about every 4 years!
Don't underestimate how much people spend on Santa Claus. The increasing importance our computers have in our lives is nothing short of a cultural phenomenon. Rampant use of PCs as "family centers" and for entertainment and socializing, etc... means this year's holiday season is going to be huge for the computer industry!!
And leading the charge is going to be Microsoft. With the multi-media experience of Zune and Windows 7 ...!
It's going to be HUGE!
OVERDONE AND MASSIVE. Don't underestimate marketing and how much people value seamlessness and ease. YouTube, Facebook, Twitter, eHarmony, stocks, etc... unite the world.
Microsoft's Windows 7 is going to be in the center of all of that. Record sales!
That means that, indirectly, DirectX 11 will be important as a standard as well as a marketing thing; even if people don't understand DX11, the logo will be everywhere, well marketed by both ATi and Microsoft.
It's going to be EVERYWHERE.
Mac people are fascinated by QuickTime X... Microsoft can market Windows 7 and DirectX as an even deeper & richer environment than the Mac's.
Microsoft has a bunch of technologies it is selling you with Windows 7: Natal, Zune, DX11, Aero, 64-bit, easy, powerful, etc.
Simple is good... standards are good! I think this time around, Microsoft is going to sell the idea of Windows 7 being more compatible and easier to use.
Apple might need to be concerned, if people start shopping
So, if nVidia doesn't have a product for these hungry buyers, ATi will make massive profits and will be able to drop prices once nVidia enters the market (around the Super Bowl)... squeezing nVidia's sales.
All the while releasing the budget 5600 series as a final blow. Nvidia will have to walk a fine line between dumping chips for market share and making profits. ATi could thwart nVidia's attempt to recoup its fab & development funding for GT300 if the only market is $499 high-end GPUs, as that high-end market is minuscule compared to the profits to be made in mainstream sales.
You do understand that the SLI and 4870 X2 market is extremely small... not everyone is Xtreme, we are enthusiasts; you seem to forget that often.
Investors don't care if Nvidia's low-yield, high-end ($499) GPU sells some 40k units to a bunch of computer freaks.
Or, I could just be really high...?
Originally there was only one current-gen card from each maker; then they made OEM/cut-down cheaper retail versions, so there were 2 versions... then they stopped using the last gen as the mainstream/entry-level product and created a card based on the latest gen but physically cut down, not just castrated by blowing fuses. From there on, more and more SKUs developed...
Well, that's a different thing then... was a misunderstanding, I guess :D
But I wouldn't say a GTX 280 beats 3 G92 cards in SLI either... actually Xbit Labs made an article showing that 2 G92 cards in SLI beat a GTX 285 and are cheaper... :)
If you can accept it being a multi-GPU setup, that's definitely a good alternative...
I have a set of GTS 250 and GTX 260 cards at home, and the 260s are barely faster in most games but consume notably more power, are way bigger and heavier, have a bigger heatsink and still run hotter... I definitely prefer the GTS 250 cards: cheap, small, very fast for their price and size...
The only times I had to throw in the second card in SLI to play games at 1920x1080 with all the eye candy and AA were in Far Cry 2 and Crysis... besides that I could play everything fine with 1 GTS 250 so far...
AMD has terrible market share. A few years ago Intel slashed their own CPU prices when the Core architecture came out to flood the market with their chips and maintain brand loyalty. This worked, and Core now dominates the market. AMD is in trouble against nVidia's branding, and they might want to recruit an entirely new generation of customers right now - they could provide a superior product at a lower price, make a :banana::banana::banana::banana:-ton of sales, converting millions of users and setting themselves up for a coup in the next generation. If they do slim their own profit margins, they have months to sell to users before GT300 is even a threat, and when it does come out nVidia would be forced to take a loss, coming to market late with lower yields.
It's a trade-off - give up some profits right now, in effect "buying" loyalty and market share, and then hope to exploit this in the future for profit then.
ATI can't realistically lower the price on these parts enough to put them into the price range where sales volumes will increase dramatically and still make a profit. The kinds of people that know anything about these cards or even care enough to keep up with the technology (people like us) are usually willing to pay a price premium for the latest and greatest piece of kit, but we're definitely in the minority.
ATI will do what it (and NVIDIA) has always done: try to win the high end and then let the reputation trickle down to the lower parts to increase sales regardless of performance.
The 9800 Pro was the top dog for a while, so that's not targeting the "mid range".
For me, it was really the push to LCD's that started me gunning for the high end. :yepp:
Prior to that, why bother? Fast FPS games (my favorite) are always run at 800x600 with :banana::banana::banana::banana: details just to make things easier to see. No real need for anything beefy.
But now, with no true Quake/Unreal-style game out (mostly slower single-player games), why not see the detail? And having a native res forces you to run higher-end cards.
I hope the next-gen cards bring a lot to the table. I've been running an OC'd 8800 GTS 640 since they hit and have yet to come across anything I can't play.
Yes, games used as benchmarks. Look at some of the older articles on anandtech.com.
This was why the GTX 280 was so underwhelming with its $650 price tag. You could get better performance from a $499 9800 GX2 - and you weren't missing out on anything (i.e. neither had DX10.1, both supported CUDA/PhysX, etc.).
So what happens then? AMD lowers their prices, BAM, GT300 arrives and AMD has to lower prices once again, resulting in lower margins. One price cut is more than enough. Market share will come slowly, it won't happen overnight, and AMD can't sacrifice tons of potential income just to gain a few points of market share.
I believe they should stick to the separate-cards approach. Make a really powerful PhysX card and then make a powerful GPU. That way you can buy exactly what you want. Everyone seems to have 2+ PCIe slots these days waiting to be used. They seem to be pushing the dedicated PhysX card anyway, so why not just make one then? The regular GPU can have some PhysX power, just enough for minor use. I'd rather run a separate dedicated PhysX card anyway, so why would I need to buy more GPUs with PhysX I'm not gonna use? That way crunchers can buy more PhysX-only cards, and gamers can buy more GPUs. That way it isn't forced down your throat if all you want is the PhysX or just a GPU.
And you're basing this on what?
On the fact that unemployment rates continue to climb? Up to 9.7% nationwide as of August with no end in sight....numbers continue to climb, but do look like they're at least slowing down slightly. (August's rise was "only" 0.3% higher than July, and the previous six months' pace of unemployment increase--Dec '08-May '09-- was a 0.4-0.5% increase each month.)
And we'll have a national unemployment rate well over 10% by Christmas....count on it.
Or maybe you're basing your optimism on the fact that 16 states have unemployment rates over 10%? And another 7 states are in the 9% range. Maybe it's the three (3) states with unemployment rates that are below 6%...ND, SD, and Nebraska. Yeah, those will do it.
Here you go....a picture of the unemployment across the U.S. by state/region, as of June '09.
http://www.ritholtz.com/blog/wp-cont...8/mstrtcr1.gif
So what region is going to support a big Christmas? Not the west coast, not anywhere from Florida to Michigan. Maybe the Northeast....which is mired with 8% or higher unemployment across the region. Guess it's all down to the Dakotas and Nebraska.
I'd wager that the Asus CEO is more correct, that Windows 7 will NOT usher in huge computer sales, not to business, not to home consumers. With the huge economic uncertainties that still abound nationwide, I predict this Christmas will be as dismal as last season, if not worse.
True, there are signs the economy is picking up slowly. But instead of employment picking up, which is always the last thing to happen when coming out of a recession, the still-employed workers are simply being asked to do more, either with longer hours or higher productivity expectations.
If we're lucky, we'll see a "big" Christmas in 2010, and I use the term "big" in relative terms....big only in respect to the '08 and '09 seasons, not to really big holiday seasons like in the mid-'90's.