haha yeah, well a lot of people participated in it a lot more than I did! :D like I said, 3 possibilities, I just went for the first one! heh
That was the funniest thing I had come across in the news section.
Lol. Lol. Lol. Lol for the next little while.
yes, consoles are full of cheap parts. they are priced very aggressively too. it would cost way too much to have a G80-based console, and it's really Sony's fault that their console didn't have a good GPU, because they came to Nvidia way too late to get a good chip designed for a next-gen console. back during the G70 days Nvidia had the perf per mm2 crown, which made them attractive for console hardware. that is very important when you sell 30 million units.
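just to illustrate the volume point with a rough back-of-the-envelope (every number below is made up purely for illustration, nothing from Sony or Nvidia):
Code:
# why perf-per-mm^2 matters at console volumes -- all numbers are assumptions
units = 30_000_000      # lifetime console sales, the figure from the post above
cost_per_mm2 = 0.10     # assumed USD of silicon cost per mm^2 of GPU die
extra_area = 100        # assumed extra mm^2 for a less area-efficient design

extra_bill = units * cost_per_mm2 * extra_area
print(f"extra silicon cost over the run: ${extra_bill / 1e6:.0f} million")   # ~$300 million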
What the heck was that all about? You know, that dude (for once) sounded pretty serious. :shocked:
Was he serious? Mentally ill? Just an elaborate prank? What was that?
:shrug: This thread is seriously non-informative :shrug: and this is the part 2 thread, OMFG.
almost as good as the GTX295 WTF Edition!
Despite his lack of factual proof, his posts are certainly convincing. The inclusion of a picture with a 295, however, was certainly a way to bomb any credibility he may have had... I'm guessing he is no more than a talented / bored wordsmith :up: Funny nonetheless. However, I have a feeling that he may actually have some things right... we shall see when the crap storm that is Fermi finally dissipates.
The slides all came from the same slide deck, they just weren't posted publicly, leaked, all at the same time.
Also, there were at least two different Hemlock designs with a third being a clockspeed difference.
Back to semi-ontopic... I'm slightly disappointed that so many fell for that guy. He mentioned a hyper-transport bridge in one of his posts...
i thought it kept climbing over the years... thats what they showed just a few weeks ago :confused:
and it makes sense, im sure R&D has more than tripled since 2000
well they didnt say it would solve mfg problems, it would help to get a new cut-down and/or reworked fermi out asap... i dont know about that, but it makes sense... there are a lot of things you can do to cut ttm (time to market) that have nothing to do with mfg, but they cost money...
http://www.semiaccurate.com/2010/02/...and-unfixable/
the good: A3 came back from the fab at the end of january
the bad: yields suck, top bin is only 448 cores and 600MHz
the ugly: shader clocks are only 1200MHz
Quote:
fab wafer yields are still in single digit percentages.
the problems that caused these low yields are likely unfixable without a complete re-layout. Lets look at these problems one at a time...
My goodness, that is actually way worse than I imagined it would be. I hope nVidia has some serious reserves, because they are going to need them if this article is anywhere near true.
$500 per chip? :eek:
Quote:
At $5,000 per wafer, 10 good dies per wafer, with good being a very relative term, that puts cost at around $500 per chip, over ten times ATI's cost. The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks. At these prices, even the workstation and compute cards start to have their margins squeezed.
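Out of curiosity, here is a quick back-of-the-envelope on those numbers. The $5,000 wafer cost and the "10 good dies" are from the article, the ~550mm2 die size is the rumour floating around this thread, and the defect density is purely a guess to make the point:
Code:
import math

# sanity check of the quoted numbers -- wafer cost from the article,
# die size from the thread rumours, defect density is a pure assumption
wafer_cost = 5000.0       # USD per 40nm wafer (from the quote)
die_area_mm2 = 550.0      # rumoured GF100 die size
defect_density = 0.45     # defects per cm^2 -- assumed, not a real TSMC number

# gross dies on a 300mm wafer (crude area estimate, ~10% edge/scribe loss assumed)
wafer_area_mm2 = math.pi * 150.0 ** 2
gross_dies = int(wafer_area_mm2 / die_area_mm2 * 0.9)

# simple Poisson yield model: yield falls off exponentially with die area
die_yield = math.exp(-(die_area_mm2 / 100.0) * defect_density)
good_dies = gross_dies * die_yield

print(f"gross dies per wafer: {gross_dies}")                   # ~115
print(f"estimated die yield:  {die_yield:.1%}")                # single digits, as the article claims
print(f"good dies per wafer:  {good_dies:.1f}")                # ~10
print(f"cost per good die:    ${wafer_cost / good_dies:.0f}")  # ~$500
With those made-up inputs you land right around the article's figures, which is the whole problem with putting a ~550mm2 die on a process with yield trouble.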
A very reliable source told me the top GF100's clocks were about the same as a GTX 285's. So the 600MHz figure from Charlie is, I believe, wrong.
die size is 550mm2? thats almost as big as the 65nm G200 then, ouch...
i thought it was 500mm2, or about the size of the 55nm G200b...
the ES cards suck 280W... single gpu... wow... :o
mhhhh 480 is rumored to sell for 400-500$, so the bom cost is probably 400$? so 5890 will sell for 400$ and 5870 drops to 300$? :D
Quote:
The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks
maybe thats what they were hoping for... maybe thats what one or two cards can run... or maybe thats what all cards COULD run if they cool them really well...
either way, the clocks dont really matter, 10% higher or lower clocks... thats not gonna make a huge difference... yields are a serious problem... if they are really still that low, then thats bad...
he said GF104 still didnt tape out... that sucks :/
i hope nvidia isnt waiting for 28nm to get GF104 out!
The interesting point was that no GF100 derivatives have taped out yet. So another year before we see Fermi mainstream parts?
yes, that was the most interesting part for me too...
if they follow their old strategy of shrinking and cutting down (G80->G92) then yes, a year almost... if they follow their recent strategy of just shrinking (gt200->gt200b) then it will also be about a year... cause shrinking means 28nm, and thats not going to happen before Q4... if not even Q1 2011...
the only way they can get GF104 out soon is if its still on 40nm... but since they havent taped it out yet... even that wont be too soon :(
they were very optimistic with GF100: it taped out in july and they wanted to sell it in november... thats 4 months... and that was optimistic... if they tape out tomorrow that would mean GF104 arrives in july... maybe a little sooner... bleh :/
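rough date math, just to put numbers on it (the GF100 tape-out and launch target are the rumoured dates mentioned here, and the "tomorrow" tape-out date is obviously made up):
Code:
from datetime import date

# GF100 reference points (rumoured, not official)
gf100_tapeout = date(2009, 7, 1)     # taped out in july
gf100_target = date(2009, 11, 1)     # originally aimed at a november launch
lag = gf100_target - gf100_tapeout   # ~4 months, and that was the optimistic plan

# hypothetical: GF104 tapes out "tomorrow" (this thread is from mid-feb 2010)
gf104_tapeout = date(2010, 2, 15)    # assumed date
print(gf104_tapeout + lag)           # lands around mid-june 2010 at the very earliest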
since there are ALREADY signs of delays and yield issues at 28nm at tsmc, and we are just in Q1 of 2010 while 28nm is supposed to kick off at the end of the year, it would be really stupid of nvidia to wait for 28nm... so im pretty sure they will do GF104 in 40nm and will try to have it out in the middle of this year...
Nvidia's Fermi GTX480 is broken and unfixable
Hot, slow, late and unmanufacturable
http://www.semiaccurate.com/2010/02/...and-unfixable/
Reply from nvidia.
http://twitter.com/RS_Borsti
Oh Charlie... that just another hilarious post
http://twitter.com/Igor_Stanek
Oh Charlie... that just another hilarious post
me: I think with this post Charlie totally destroyed his credibility :)
I want to see how he is going to explain his article in March.... looks like biggest lie and mistake of his life :)
It really seems to me that Fermi is just too big; it has so much extra architecture added in there for CUDA-based work that it is bigger than it needs to be to function as a gaming card.
I remember when the GTX 280 came out, my first reaction was: it's so big, where can they go from here? If it gets any bigger it's just not going to work. That reaction was based on really nothing, save that my first GTX 280 ran much hotter than I expected and required a second loop to keep my CPU at the temps I wanted.
If in fact Fermi is just too big to make then where does Nvidia go from here?
Do they
A, Rework a smaller version for late 2010, maybe throw out some of the CUDA stuff that was not needed for the gaming market, and then double up like the HD 5970 for the top card?
Problem: Nvidia will be fighting ATI in its own backyard; dual-GPU cards are kinda ATI's thing, and from my own experience with the 5970, ATI has it down. The 5970 is also sitting at the 300W PCIe wall, and though you can break it with over-clocking, OEMs don't want to break it for legal reasons. Unless Fermi has better performance per watt, a dual-Fermi card will be slower when not over-clocked and thus slower at the OEM level (see the rough power-budget sketch after this list).
B, Start shrinking down Fermi to 28nm and not release anything till 2011?
Problem: ATI will have something new out by then, maybe the 6XXX cards or by that point 7XXX cards, leaving Nvidia one to two generations behind.
C, Find which chips work, throw them on boards with insane cooling just to beat the 5870 by around 10 to 20%, and use the performance crown to sell re-branded G92-based cards to the masses?
Problem: People may catch on and not buy re-branded stuff, and with Win 7 selling so well and OEMs wanting to give everyone DirectX 11, re-branded G92 cards won't cut it in the OEM market. Nvidia's market share could crash, and the money and time that would have been set aside to help shrink Fermi down to 28nm will have been used to make a broken card almost work.
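On the 300W point under option A, here is the rough power-budget sketch I mentioned. Every number is an assumption: the 280W is the engineering-sample rumour from earlier in the thread, and the shared board overhead is a guess:
Code:
# rough look at the 300W wall for a hypothetical dual-Fermi card
PCIE_LIMIT = 300.0        # W, ceiling for a single card per the PCIe spec
single_card_es = 280.0    # W, rumoured GF100 ES board power (from earlier in the thread)
shared_overhead = 30.0    # W, assumed once-per-board cost (fan, VRMs, bridge chip)

per_gpu_now = single_card_es - shared_overhead      # ~250 W for one GPU today
per_gpu_dual = (PCIE_LIMIT - shared_overhead) / 2   # ~135 W per GPU on a dual card
print(f"each GPU would have to drop from ~{per_gpu_now:.0f} W to ~{per_gpu_dual:.0f} W")
print(f"that is roughly a {1 - per_gpu_dual / per_gpu_now:.0%} power cut per GPU")
Unless the clocks and voltages scale very gracefully, a cut that size is hard to hide, which is why the performance-per-watt caveat matters so much.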
I have been holding off getting a 5870 until the MSI Lightning version is out, but a part of me was also holding off to see how Fermi would do, and I feel I am not alone in that. Now, however, I feel no reason to wait; the 4870X2 I use most days is no longer new and shiny, and there are a few games where it chokes up a bit, mostly due to CF issues. There is simply no longer a reason to wait: Fermi is not going to be better than a 5870 from the looks of it, and if it is, it will be too hot and take way too much power to make the small increase worth it.
I hate saying it, but Nvidia has failed with Fermi; even if it comes out and it works, sorta, it is just too late to be called a success, no matter what. It sucks, I know, but it's about time we and Nvidia admit that Fermi was a bit too much to try to build on 40nm, and Nvidia's inability to admit and realize this soon enough may have hurt them more than any of us could have expected or predicted.
I think Charlie, as much as I love to discredit the bunny boiler, is telling a half truth at least.
Just go by the vibes, there's nothing being shown to strengthen the launch of Fermi with less than a month to go.
By this time, ATI had a boat and over a hundred cards for the visitors to play with.
Nvidia *might* have a booth with one behind a curtain. Probably with a number 7 written on the chip with marker pen
7800GTX 512 memories coming now.
If this is true, this will leave a scar. It would also teach them a lesson in being humble towards opposition. Monolithic GPUs are the way of failure. They are complex, expensive, and inefficient per mm².
A good idea is a good idea. Big corporations cannot survive if they ignore good ideas endorsed by rivals. Look at Microsoft and Google and Apple and every other successful Fortune 500 company. Even ATI! The example from ATI is the ring-bus memory controller. It was originally a great idea: plug-and-play GPUs/memory chips with loads of bandwidth. As time went on, they came to realize that it was expensive in terms of die space, and that it had to be tuned per GPU to get maximum performance, which negated the benefits first assumed and made it unnecessary. How many different kinds of memory chips are you going to use with any GPU lineup?
How much time and money did they spend on it? It didn't matter. They scrapped it and went for the classic approach. The engineers learned many valuable lessons, and it is paying off now in their almost fully modular GPU/mem design.
The big green giant needs a slap in the face. While the little red rabbit isn't as big or powerful, it has won a few battles by outwitting the giant.
With Compute Shader in DX11 most of the changes done for CUDA apply to games as well. What exactly is this stuff not needed for the gaming market that you're referring to? The biggest expense in Fermi is definitely the geometry engine and that's completely gaming related.