You are totally right on this.
If an AMD hobo was asking for my money and an nVidia hobo was asking for my money too, guess which hobo would get it:
Clue: not nVidia.
It's just the underdog vs. goliath thing
And anyways, I don't think the GPU manufacturers actually WANT prices to be low. They want to make as much money as possible - I'm sure, looking back, that ATI would've gladly priced RV770 higher and made 25% more profit, and the price STILL would have been competitive. They were probably shocked more than anyone else that RV770 was as good as it was.
When 3DFx was around I loved them even though they were not underdogs
$299 for the 480? lol, you people are really getting high. I can't even buy a GTX 285 for that much. Oh my god, this thread is epic
Back in Nov (09 that is), I said Fermi launch would take a miracle. There were just too many risks and problems:
1. Precedent. Everyone was thrilled with big G80. It was a big jump. Everybody expected it now. Naturally the successor, G200, had to be big to outperform. For Fermi, making it 200 SP or even 300 SP was not an option. And of course the hype! nVidia fans expecting "the 2nd coming" have been impatiently waiting for over 20 months (since GTX280).
2. Complacency. AMD FUBAR'd on R600. RV670 was just a patch job. Of course nVidia went along with the (logical!) rumours of 480 SP. Hook, line and sinker. Likewise, despite the die shrink, who would've guessed 2x the SPs while still sitting comfortably in the just-over-300mm2 range? All of a sudden, AMD is very (Apple) good at keeping secrets and misdirection.
3. Fear of Failure. The FX5800 Ultra used 130nm early, and had problems. nVidia vowed not to repeat the mistake. Besides, why not let AMD sort out the shrink issues?
FYI, nVidia and AMD switched roles with the G80 launch: X1900 was ~350mm2, while 7800 was 330mm2 (and the 90nm 7900 die shrink was <200mm2, giving amazing yields and nVidia the price/profit advantage).
4. 1101. nVidia and AMD launched GDDR3 together back in '04. But AMD wasn't satisfied and used GDDR4 first on the X1950, then the X2900s and finally on the HD3870s. And AMD didn't stop there. GDDR5 in the HD4870 and HD4890 doubled bandwidth and was cheap (quick bandwidth math in the sketch after this post).
5. Outs with MSFT. nVidia was the #1 choice for XBOX. Naturally, after collaborating to bring DX8 to market. But ATI was first with DX9, and MSFT didn't forget, and chose R500 for the XBOX360. Although first with DX10 hardware, nVidia quarrelled over features. Early Vista and Win7 drivers were buggy and very limited. And nVidia fell further behind, as AMD added DX10.1 and then DX11 to the "first" and "AMD only" lists.
6. The X Factor. AMD - the cool kid? AMD makes graphics chips for both the XBOX360 and the Wii. A developer who already worked with AMD to make a console game will feel at ease making the PC version run well on AMD graphics. Sadly, the PS3 doesn't have that same level of market penetration or console-PC tie-ins. Despite some PhysX games, AMD is getting more popular with developers. And manufacturers too - many jumped ship on "nVidia only" and shipped Radeons.
7. The Full Monty. AMD also makes CPUs and chipsets, which makes IGP development a snap... nVidia meanwhile is between a rock (no license for Intel i3/i5/i7 chipsets) and a hard place (AMD). AMD Crossfire enjoys hassle-free support across a wide range of Intel and AMD chipsets.
And with the $1.25B cheque from Intel, legal-wise AMD is virtually worry-free. nVidia meanwhile is being sued (again?) by RAMBUS.
8. Avatar refresh. Of course none of these points mattered while nVidia was synonymous with "fastest". G80 and G92 raked in big profits - but RV770 put a stop to that. Importantly, while AMD shipped a whole product lineup with HD4xxx and then HD5xxx, nVidia only launched the high end and never improved the midrange or low end. Unless you think the GT240 is "really" an improvement on the 8800GT/9600GT. No word yet on Fermi derivatives - worrisome.
9. Price Wars. G92 is great: a 256-bit bus, a small, "simple" PCB, and it's cheap. Unlike the G200s. Or Fermi's 384-bit bus. Arguably AMD is also ahead in digital power circuitry and clock/idle power management. But that's easy to fix for nVidia - except the R&D budget is rumoured to be slashed. Due to bargain-basement prices on GTX260s in the price war, nVidia didn't make much. 2009 featured three quarterly losses in the $200-300M range. Q1 is historically slow. Tighten the belt and ride the storm out till summer.
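Point 4's "doubled bandwidth" claim is easy to sanity-check. A minimal sketch, treating the launch memory specs (quoted from memory: ~2.0 Gbps effective GDDR3 on the HD4850 vs ~3.6 Gbps effective GDDR5 on the HD4870, both on 256-bit buses) as assumptions:

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * effective data rate per pin (Gbps).
# The per-pin data rates below are assumed launch figures, not confirmed numbers.

def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Bytes moved per second across the whole bus, in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "HD4850 (GDDR3, ~2.0 Gbps, 256-bit)": (256, 2.0),
    "HD4870 (GDDR5, ~3.6 Gbps, 256-bit)": (256, 3.6),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: ~{peak_bandwidth_gbps(bus, rate):.0f} GB/s")
# ~64 GB/s vs ~115 GB/s -- roughly double on the same 256-bit bus,
# which is why GDDR5 let AMD avoid a wider (and more expensive) bus.
```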
I think he's trying to say that nvidia are sliding down a big slope of fail championed by a fat speccy dude who somehow landed a job when he should be the teaboy.
You don't get it. You can't admit it and it is really funny :rofl::ROTF:
You've learned that resistance is futile. :rolleyes:
You guys seem to like to keep this slide, which is from the same slide deck, out of the "proof".
http://lab501.ro/wp-content/uploads/...at-580x381.jpg
Also, why wasn't the 5770 priced at $200?
hmmm 2 weeks until cebit, and still nothing?
nada? I'm starting to think there really won't be any public fermi cards at all...
I wonder if any of nvidia's partners even show up; if there are no new cards to show, then what's the point?
they have been on bread n water for a few quarters already... :/
o hai guys i just wanted to tell u that u r as offtopic as it gets. geddit?
They didn't. silent_guy@B3D called Charlie on his mistake hence the 2/15 update to the article.
That bolded bit was of course pulled out of his nether regions, since dumping money into R&D won't magically make TSMC's process any better, but it's all he can grasp at when he was called on his poor "journalism". It's amazing that this guy actually thinks he knows what engineers and CEOs should do in any given situation, yet he makes his living as a lowly rumour-monger who obviously doesn't know how to read financial statements properly.
Quote:
Updated February 15th, 2010:
Some users on our forums have pointed out that Nvidia took a one-time charge of $90 million in Q1, which reduces the drop in R&D to 10% from the 50% stated above. True. The problem is that if a company has a good product in the pipeline and needs to rush it to market, they would likely increase expenditures to pull in the timeline. This costs money; R&D should therefore rise, not stay flat or fall a little.
it's not an article from charlie...
Quote:
Nvidia's R&D spending examined
Not what you'd expect
by Luis Silva
February 12, 2010
but why do you say they didn't cut their rnd?
they did... not as much as that guy first wrote, but they did... right?
and what does tsmc have to do with it? i thought 40nm yields were fine now?
rocket sledge meet rocket roller-coaster lol
http://www.blogcdn.com/www.engadget....100216-600.jpg
Ah, my mistake. It's from the same hive mind though.
Their quarterly R&D expense has been hovering around $200M for the last 2 years. It's not a sign of anything.
I was referring to the bolded part above where SA claimed Nvidia should throw money at R&D, as if that would magically solve all their manufacturing/yield problems.
Quote:
and what does tsmc have to do with it? i thought 40nm yields were fine now?
Well, here is a REAL eye opener. This guy Noah seems to know a thing or two about what Fermi is and will be. Needless to say, it's not all that good of news if you are waiting for a killer Fermi. If you have an hour or so, read from page 19 on...
http://www.overclock.net/hardware-ne...-480-a-19.html
His posts are certainly very interesting, but I just can't bring myself to the point of believing forum posts anymore. If Fermi isn't all that, it sucks for people waiting this long. On the other hand, I just want to wait for reviews really.
If Fermi is that bad, let's hope it's an eye opener that kicks nVidia in the backside and gets them going again.
And if somebody was missing 003, you can find him posting in that thread :D
I have a doubt about something that Noah guy posted:
Didn't Sony buy only the design of the RSX to reduce costs? Why would they buy Nvidia's GPUs then??
Quote:
What is worse is Sony is pissed at nVidia for selling them a previous generation of chip when the 8000 series and Stream Processors were about to be released, along with Physx support integrated. There is a reason nVidia made the RSX on the G78 platform, and it was to limit the hardware and to get those unsold GPUs out the door.
That was a professional (somewhat sarcastic) read. But Noah stated that he has Asperger's, so assuming everything in that thread is true (LOLZ!) then it's not surprising that he can be goaded into posting stuff he shouldn't be posting. Anyways, I don't really care, but I'm along for the ride just to see people squirm when the card comes out.
His doom-mongering is somewhat epic though. Fermi is a hacked-together, two-chip bastard child of Tesla?
Maybe he means that Nvidia purposely made RSX a worse part than it could have because it wanted its 7000-series PC graphics cards to still sell well, by not allowing the PS3 to be a huge leap forward in the graphics department?
Sounds a bit too far-fetched, obviously...
Isn't the 933 Gflop a GTX200 number?
And I didn't bother reading much of the rest, but wasn't user "003" the same one that was here saying how Fermi was coming black Friday for sure. Then later admitting he said that because he wanted it to happen (sounds like what a little kid would do).
:rolleyes:
Quote:
Yes, I am a bit upset with the way nVidia have been treating us. To be honest, I look forward to Fermi's final fab to prove me wrong so I can come in here and say "YAY I was wrong, Fermi Rules!". Until then, I have to stick with what exists today.
btw, that Noah guy's first post is nearly complete garbage, now that I read it
Say what?? :confused: :shrug: :ROTF:
Quote:
To please the computing gods... Fermi is not being sold right now because it is FREAKING BROKEN! It presently consists of two separate chips working together, and it CANNOT be sold even if they fixed the hyper-transport bridge because the card draws 400 watts. 300 watts is the ATX IEEE spec limit. nVidia either has to severely shrink the die set, compile them into one chip and shrink that, sell the board as a dual board solution, or reduce their performance numbers.
I hope you are happy. I sincerely could get fired for this.
Wow, that's awesome, Fermi Ductape FTW!!! :D
Quote:
When I get my beta board in, I will record it and send links to you all. Right now, our solution is using a dual board solution that uses one PCI express slot, but two 6-pin and two 8-pin connectors on a 1200 watt power supply. My issues with this are that it's duct taped together. That wasn't a joke. It is duct taped together right now. The rubber support grommets broke on the back.
I just do the code, and I can tell you that it is harder to code for this right now than it was when Cell first came out. At least nVidia provided a basic translation protocol script for their APIs.
When they have Fermi working, and they will, it is possible it could be completely different from what we are hearing today. All I know is we have a Fermi compatible unit. They told us to ensure the performance is adequate on this setup, but to leave some headroom, so it leads me to believe the card will not be as fast as our beta board(s).
In 2 weeks, we are "SUPPOSED" to get a Fermi beta board... the official reference board. The leaf blower on the beta board is a joke as well. I swear if it didn't have fan blades it could suck me off.
I have said time and time again that I am a NVIDIA FAN BOY AT HEART!!!!! Don't forget that. I really want Fermi to work. I am also pissed about all this crap and hoopla we have to swim through.
I would be quite surprised if it were really the case. To me, it sounds like a pissed-off ATI fanboy or a disgruntled Nvidia supporter.
Honestly, I'm a fan of ATI myself, but I'm holding off until Nvidia releases their Fermi to buy my next video card. I'll go with whoever offers me the most bang for the buck :up:
Even though I do not have the required technical knowledge to understand whether he is talking bs or not, he does seem a bit odd. Anyway, apparently (according to a poster in another forum) Fermi should be released in March (nothing new, I know). One month left!
He claims that the chip is 2025 mm2 (that would be a 45 mm x 45 mm die). Seriously. Read his "specs" post. He also says it's gonna have a 512-bit memory bus.
I totally don't get what he is talking about with the two cards thing.
He's saying that fermi is tesla with an additional display output solution.
Quote:
Hmm.. This thread is worthless without picherz!
Seriously.. I'm sure that everyone here would really want to see a duct-taped graphics card.
That's called teasing ... :p:
Quote:
Gimme a sec... I need to find a USB cable for this. They don't let us use web cameras, so I will use my cell phone.
Going up stairs now to the tech lab. brb.
Well, if he is telling the truth, the only way he could get fired more easily is if he puts his ID card in those pictures :p:
haha yeah, I guess there are about 3 possibilities here: his info is correct and he is telling the truth, he's spreading complete and utter FUD, or he could be intentionally lowering expectations knowing that the thing rocks, so it will make people's eyes pop out of their heads when they see the numbers. My first impression was that he truly has correct info, but who knows.
All claims on the internet are false till proven otherwise.
It's a bit too surreal to be true, to be honest :shrug: I think that guy wants some attention. I'd be curious to see any kind of picture of what the beast looks like :D
Quote:
Originally Posted by NoahDiamond
I thought he was real in the first few posts.. a lot of info with sources.. but the more he posts the more I fail to believe him.. and after his recent "I got in trouble" post... I really don't think this guy is telling the truth.. sad :(
:ROTF: :rofl:
Quote:
Ok. Now I am in trouble. I am not supposed to take photos of this thing, and I did. I told them I was just keeping a personal record. They said delete them now. I said ok. They said if they saw me in there with any sort of camera again, they would take my phone and my access badge.
I don't know why but I was pretty sure of this kind of conclusion :p:
apparently, he sent the pictures to the guy. I'd be curious to know what kind of pictures he got from him XD I swear, this is going to be an enormous fake for sure :P
That kinda looks real
Fermi fanboys just got told. Out of curiosity, that big card does look kinda similar to recent cards. Also, that hardly looks like a developer environment, but more like somebody's basement. I hesitate to call fake, but I guess I will.
PS I'd be delighted if I was wrong though.
Saved the screenshots, just in case :D
it looks a bit like a GTX295 with a GT240 to me ^^ :shrug:
oh god. the thread got closed. WHY? a guy working in IT takes pictures of an unreleased card and sends them to a guy on a forum via SMS! INCREDIBLY EPIC!
You know nVidia is getting desperate when you see them having to use duct-tape to hold stuff together :lol:
"That is a BFG GTX295 single PCB. The same BFG GTX295 that Noah Diamond has tried to sell here before, and the same if you try this google search.
Myth Busted.
Closing thread and cleaning it up."
Poop.
argh, the card was a GTX 295! We got trolled!
LoL, this beats the wooden board.
fermi is the biggest FIASCO in the history of video cards.
I feel very very sorry for the idiots who are waiting for HALF A YEAR...
Thank god I bought my 5870 like 4 months ago?
Hahaha, I knew that it couldn't be real. Nice try on his part to grab some attention on his GTX295 XD
Well, that was certainly fun while it lasted!
His ad for a 5970 is still up on CraigsList
http://fortmyers.craigslist.org/lee/sys/1586570355.html
if you think about it, why duct tape anyways?
the single slot card supports itself
I guess he is majorly butthurt. Nvidia killed his father or something
And there you have it. Fermi still hopelessly lost in a sea of darkness.
Thanx, flippin', for the "REAL eye opener" link - a really good laugh :ROTF: :clap:
Too bad the supposed Fermi card had outputs, which sorta went against what he claimed. But still, I don't think there has been this much hoopla surrounding a release for a while. Great stuff.
haha yeah, well a lot of people participated in it a lot more than I did! :D like I said, 3 possibilities, I just went for the first one! heh
That was the funniest thing I had come across in the news section.
Lol. Lol. Lol. Lol for the next little while.
yes, consoles are full of cheap parts. they are priced very aggressively too. it would cost way too much to have a g80-based console, and it's really sony's fault that their console didn't have a good gpu, because they came to nvidia way too late to design a good chip for a next-gen console. back during the g70 days nvidia had the perf-per-mm2 crown, which made them attractive for console hardware. this is very important when you sell 30 million units.
What the heck was that all about? You know, for once that dude sounded pretty serious. :shocked:
Was he serious? Mentally ill? Just an elaborate prank? What was that?
:shrug: This thread is seriously non-informative :shrug: and this is part 2 threads OMFG.
almost as good as the GTX295 WTF Edition!
Despite his lack of factual proof, his posts are certainly convincing. The inclusion of a picture with a 295, however, was certainly a way to bomb any credibility he may have had... I'm guessing he is no more than a talented / bored wordsmith :up: Funny nonetheless. However, I have a feeling that he may actually have some things right.. we shall see when the crap storm that is Fermi finally dissipates.
The slides all came from the same slide deck; they just weren't all posted publicly (leaked) at the same time.
Also, there were at least two different Hemlock designs with a third being a clockspeed difference.
Back to semi-ontopic... I'm slightly disappointed that so many fell for that guy. He mentioned a hyper-transport bridge in one of his posts...
I thought it kept climbing over the years... that's what they showed just a few weeks ago :confused:
and it makes sense, I'm sure rnd has more than tripled since 2000
well they didn't say it would solve mfg problems, it would help to get a cut-down and/or reworked fermi out asap... I don't know about that, but it makes sense... there are a lot of things you can do to cut ttm (time to market) that have nothing to do with mfg, but they cost money...
http://www.semiaccurate.com/2010/02/...and-unfixable/
the good: A3 came back from the fab at the end of january
the bad: yields suck, top bin is only 448 cores and 600MHz
the ugly: shader clocks are only 1200MHz
Quote:
fab wafer yields are still in single digit percentages.
the problems that caused these low yields are likely unfixable without a complete re-layout. Let's look at these problems one at a time...
My goodness, that is actually way worse than I imagined it would be. I hope nVidia has got some serious reserves, because they are going to need them if this article is anywhere near true.
$500 per chip? :eek:
Quote:
At $5,000 per wafer, 10 good dies per wafer, with good being a very relative term, that puts cost at around $500 per chip, over ten times ATI's cost. The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks. At these prices, even the workstation and compute cards start to have their margins squeezed.
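For what it's worth, the numbers in that quote at least hang together arithmetically. A minimal sketch, treating the $5,000 wafer price, the ~550mm2 die size rumoured earlier in the thread, and the "10 good dies per wafer" claim purely as assumptions:

```python
import math

# Rumoured figures from the quote above and earlier posts -- treat all of these as assumptions.
WAFER_COST_USD = 5000.0    # claimed 40nm wafer price
WAFER_DIAMETER_MM = 300.0
DIE_AREA_MM2 = 550.0       # rumoured GF100 die size
GOOD_DIES_PER_WAFER = 10   # the "10 good dies per wafer" claim

# Gross die candidates per wafer (very rough: wafer area / die area,
# ignoring edge losses and scribe lines).
wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
gross_dies = int(wafer_area / DIE_AREA_MM2)

cost_per_good_die = WAFER_COST_USD / GOOD_DIES_PER_WAFER
implied_yield = GOOD_DIES_PER_WAFER / gross_dies

print(f"gross die candidates per 300mm wafer: ~{gross_dies}")   # ~128
print(f"cost per good die: ${cost_per_good_die:.0f}")           # $500
print(f"implied yield: {implied_yield:.1%}")                    # single digits, as claimed
```

So the "single-digit yields" and "$500 per chip" claims are internally consistent, whether or not the underlying rumours are right.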
A very reliable source told me the top GF100's clocks were about GTX 285 level. So the 600MHz figure from Charlie is, I believe, wrong.
the die size is 550mm2? that's almost as big as the 65nm g200 then, ouch...
I thought it was 500mm2, or about the size of the 55nm g200b...
the ES cards suck 280W... single gpu... wow... :o
mhhhh, the 480 is rumored to sell for $400-500, so the bom cost is probably $400? so the 5890 will sell for $400 and the 5870 drops to $300? :D
Quote:
The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks
maybe that's what they were hoping for... maybe that's what one or two cards can run... or maybe that's what all cards COULD run if they cool them really well...
either way, the clocks don't really matter, 10% higher or lower clocks... that's not gonna make a huge difference... yields are a serious problem... if they are really still that low, then that's bad...
he said GF104 still didn't tape out... that sucks :/
I hope nvidia isn't waiting for 28nm to get GF104 out!
The interesting point was that no GF100 derivatives have taped out yet. So another year before we see Fermi mainstream parts?
yes, that was the most interesting part for me too...
if they follow their old strategy of shrinking and cutting down (G80->G92) then yes, almost a year... if they follow their recent strategy of just shrinking (gt200->gt200b) then it will also be about a year... 'cause shrinking means 28nm, and that's not going to happen before Q4... if not Q1 2011...
the only way they can get GF104 out soon is if it's still on 40nm... but since they haven't taped it out yet... even that won't be too soon :(
they were very optimistic with GF100: it taped out in july and they wanted to sell it in november... that's 4 months... and that was optimistic... if they tape out tomorrow that would mean GF104 arrives in july... maybe a little sooner... bleh :/
since there are signs of delays and yield issues at 28nm at tsmc ALREADY, and we are just in Q1 of 2010 while it's supposed to kick off at the end of the year, it would be really stupid of nvidia to wait for 28nm... so I'm pretty sure they will do GF104 on 40nm and try to have it out in the middle of this year...
Nvidia's Fermi GTX480 is broken and unfixable
Hot, slow, late and unmanufacturable
http://www.semiaccurate.com/2010/02/...and-unfixable/
Reply from nvidia.
http://twitter.com/RS_Borsti
Oh Charlie... that just another hilarious post
http://twitter.com/Igor_Stanek
Oh Charlie... that just another hilarious post
me: I think with this post Charlie totally destroyed his credibility :)
I want to see how he is going to explain his article in March.... looks like biggest lie and mistake of his life :)
It really seems to me that Fermi is just too big; it has so much extra architecture added in there for CUDA that it is bigger than it needs to be to work as a gaming card.
I remember when the GTX 280 came out my first reaction was: it's so big, where can they go from here? If it gets any bigger it's just not going to work. That reaction was based on really nothing, save that my first GTX 280 ran much hotter than I expected and required a second loop to keep my CPU at the temps I wanted.
If in fact Fermi is just too big to make then where does Nvidia go from here?
Do they
A, Rework a smaller version for late 2010, maybe throw out some of the CUDA stuff that was not needed for the gaming market, and then double up like the HD 5970 for the top card?
Problem: Nvidia will be fighting ATI in its own backyard; dual-GPU cards are kinda ATI's thing, and from my own experience with the 5970, ATI has it down. The 5970 is also sitting at the 300W PCIE wall, and though you can break it with overclocking, OEMs don't want to break it for legal reasons. Unless Fermi has better performance per watt, a dual Fermi card will be slower un-overclocked and thus slower at the OEM level (see the quick power-budget sketch after this post).
B, Start shrinking down Fermi to 28nm and not release anything till 2011?
Problem: ATI will have something new out by then, maybe the 6XXX cards or by that point 7XXX cards, leaving Nvidia one to two generations behind.
C, Find what chips work, throw them on boards with insane cooling just to beat the 5870 by around 10 to 20%, and use the performance crown to sell re-branded G92-based cards to the masses?
Problem: People may catch on and not buy re-branded stuff, and with Win 7 selling so well and OEMs wanting to give everyone DirectX 11, re-branded G92 cards won't cut it in the OEM market. Nvidia's market share could crash, and the money and time that would have been set aside to help shrink Fermi down to 28nm will have been used to make a broken card almost work.
I have been holding off getting a 5870 until the MSI Lightning version is out, but a part of me was holding off to see how Fermi would do, and I feel I am not alone in that. Now, however, I feel no reason to wait; the 4870X2 I use most days is no longer new and shiny, and there are a few games where it chokes up a bit, mostly due to CF issues. There is simply no longer a reason to wait: Fermi is not going to be better than a 5870 from the looks of it, and if it is, it will be too hot and take way too much power to make the small increase worth it.
I hate saying it, but Nvidia has failed with Fermi; even if it comes out and it works, sorta, it is just too late to be called a success, no matter what. It sucks, I know, but it's about time we and Nvidia admit that Fermi was a bit too much to try to build on 40nm, and Nvidia's inability to admit and realize this soon enough may have hurt them more than any of us could have expected or predicted.
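On the dual-card route in option A (referenced above): a minimal power-budget sketch of why the 300W ceiling bites, treating the 280W engineering-sample figure mentioned earlier in the thread and a ~30W board overhead as assumptions:

```python
# Rough power-budget check for a hypothetical dual-Fermi card against the 300W PCIe ceiling.
# All inputs are rumoured/assumed numbers, not confirmed specs.

SPEC_LIMIT_W = 300.0      # PCIe slot (75W) + 6-pin (75W) + 8-pin (150W)
SINGLE_GPU_W = 280.0      # rumoured engineering-sample draw for one GF100
BOARD_OVERHEAD_W = 30.0   # memory, VRM losses, fan, etc. (assumption)

per_gpu_budget = (SPEC_LIMIT_W - BOARD_OVERHEAD_W) / 2
required_power_cut = 1 - per_gpu_budget / SINGLE_GPU_W

print(f"power budget per GPU on a dual card: {per_gpu_budget:.0f} W")   # ~135 W
print(f"required power reduction per GPU: {required_power_cut:.0%}")    # ~52%
# Dynamic power scales roughly with frequency * voltage^2, so a cut that deep means
# much lower clocks -- hence "slower un-overclocked" unless perf/W improves a lot.
```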
I think Charlie, as much as I love to discredit the bunny boiler, is telling a half truth at least.
Just go by the vibes, there's nothing being shown to strengthen the launch of Fermi with less than a month to go.
By this time, ATI had a boat and over a hundred cards for the visitors to play with.
Nvidia *might* have a booth with one behind a curtain. Probably with a number 7 written on the chip with marker pen
7800GTX 512 memories coming now.
If this is true, this will leave a scar. It would also teach them a lesson in being humble towards the opposition. Monolithic GPUs are the way of failure. They are complex, expensive, and inefficient per mm².
A good idea is a good idea. Big corporations cannot survive if they ignore good ideas endorsed by rivals. Look at Microsoft and Google and Apple and every other successful Fortune 500 company. Even ATI! The example from ATI is the ring-bus memory controller. It was originally a great idea: plug-and-play GPUs/memory chips with loads of bandwidth. As time went on, they came to realize that it was expensive in terms of die space, and that it had to be optimized per GPU to get maximum performance, negating the benefits first assumed and making it unnecessary. How many different kinds of memory chips are you going to use with any one GPU lineup?
How much time and money did they spend on it? It didn't matter. They scrapped it and went for the classic approach. The engineers learned many valuable lessons, and it is paying off now in their almost fully modular GPU/memory design.
The big green giant needs a slap in the face. While the little red rabbit isn't as big or powerful, it has won a few battles by outwitting the giant.
With Compute Shader in DX11 most of the changes done for CUDA apply to games as well. What exactly is this stuff not needed for the gaming market that you're referring to? The biggest expense in Fermi is definitely the geometry engine and that's completely gaming related.