o hai guys i just wanted to tell u that u r as offtopic as it gets. geddit?
They didn't. silent_guy@B3D called Charlie on his mistake, hence the 2/15 update to the article.
That bolded bit was of course pulled out of his nether regions, since dumping money into R&D won't magically make TSMC's process any better, but it's all he can grasp at now that he's been called on his poor "journalism". It's amazing that this guy actually thinks he knows what engineers and CEOs should do in any given situation, yet he makes his living as a lowly rumour-monger who obviously doesn't know how to read financial statements properly.
Quote:
Updated February 15th, 2010:
Some users on our forums have pointed out that Nvidia took a one time charge of $90 million in Q1, which reduces the drop in R&D to 10% from the 50% stated above. True. The problem is that if a company has a good product in the pipeline and needs to rush it to market, it would likely increase expenditures to pull in the timeline. This costs money, so R&D should rise, not stay flat or fall a little.
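For what it's worth, here's a rough sketch of that arithmetic. The figures below are made up for illustration (the update doesn't give the actual quarterly numbers), so only the shape of the argument matters:
Code:
# Illustrative only: hypothetical quarterly R&D figures, not Nvidia's actual numbers
q1_reported   = 290_000_000   # Q1 R&D as reported, including a one-time charge
one_time      =  90_000_000   # the one-time charge buried in that figure
later_quarter = 180_000_000   # a later quarter's R&D spend

apparent_drop = 1 - later_quarter / q1_reported              # ~38% vs the reported Q1
adjusted_drop = 1 - later_quarter / (q1_reported - one_time) # ~10% vs the underlying Q1

print(f"apparent drop: {apparent_drop:.0%}, adjusted drop: {adjusted_drop:.0%}")
The point being: a one-time charge inflates the baseline quarter, so the "drop" looks much scarier than the underlying R&D trend actually is.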
it's not an article from Charlie...
Quote:
Nvidia's R&D spending examined
Not what you'd expect
by Luis Silva
February 12, 2010
but why do you say they didn't cut their R&D?
they did... not as much as that guy first wrote, but they did... right?
and what does tsmc have to do with it? i thought 40nm yields were fine now?
rocket sledge meet rocket roller-coaster lol
http://www.blogcdn.com/www.engadget....100216-600.jpg
Ah, my mistake. It's from the same hive mind though.
Their quarterly R&D expense has been hovering around $200M for the last 2 years. It's not a sign of anything.
I was referring to the bolded part above where SA claimed Nvidia should throw money at R&D as if that would magically solve all their manufacturing/yield problems.
Quote:
and what does tsmc have to do with it? i thought 40nm yields were fine now?
Well, here is a REAL eye-opener. This guy Noah seems to know a thing or 2 about what Fermi is and will be. Needless to say, the info isn't all that good if you are waiting for a killer Fermi. If you have an hour or so, read from page 19 on...
http://www.overclock.net/hardware-ne...-480-a-19.html
His posts are certainly very interesting, but I just can't bring myself to the point of believing forum posts anymore. If Fermi isn't all that, it sucks for people who've been waiting this long. On the other hand, I just want to wait for reviews, really.
If Fermi is that bad, let's hope it's an eye opener and kicks nVidia in the backside to get them going again.
And if somebody was missing 003, you can find him posting in that thread :D
I have a doubt about something that Noah guy posted:
Didn't Sony buy only the design of the RSX to reduce costs? Why would they buy Nvidia's GPUs then??
Quote:
What is worse is Sony is pissed at nVidia for selling them a previous generation of chip when the 8000 series and Stream Processors were about to be released, along with Physx support integrated. There is a reason nVidia made the RSX on the G78 platform, and it was to limit the hardware and to get those unsold GPUs out the door.
That was a professional (somewhat sarcastic) read. But Noah stated that he has Asperger's, so assuming everything in that thread is true (LOLZ!), it's not surprising that he can be goaded into posting stuff he shouldn't be posting. Anyways, I don't really care, but I'm along for the ride just to see people squirm when the card comes out.
His doom-mongering is somewhat epic though. Fermi is a hacked-together, two-chip bastard child of Tesla?
Maybe he means that Nvidia purposely made RSX a worse part than it could have, because it wanted its 7000-series PC graphics cards to keep selling well by not letting the PS3 be a huge leap forward in the graphics department?
Sounds a bit too far-fetched, obviously...
Isn't 933 GFLOPS a GTX 200 number?
And I didn't bother reading much of the rest, but wasn't user "003" the same one that was here saying how Fermi was coming Black Friday for sure? Then later admitting he said that because he wanted it to happen (sounds like what a little kid would do).
:rolleyes:
Quote:
Yes, I am a bit upset with the way nVidia have been treating us. To be honest, I look forward to Fermi's final fab to prove me wrong so I can come in here and say "YAY I was wrong, Fermi Rules!". Until then, I have to stick with what exists today.
btw, that Noah guy's first post is nearly complete garbage, now that I read it
Say what?? :confused: :shrug: :ROTF:
Quote:
To please the computing gods... Fermi is not being sold right now because it is FREAKING BROKEN! It presently consists of two separate chips working together, and it CANNOT be sold even if they fixed the hyper-transport bridge because the card draws 400 watts. 300 watts is the ATX IEEE spec limit. nVidia either has to severely shrink the die set, compile them into one chip and shrink that, sell the board as a dual board solution, or reduce their performance numbers.
I hope you are happy. I sincerely could get fired for this.
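For reference, the 300 watt number he's quoting isn't an "ATX IEEE" spec; it matches the usual PCI Express board power budget. A quick sketch of that arithmetic, assuming the standard connector ratings:
Code:
# Standard PCI Express power delivery limits (per the PCI-SIG CEM spec)
slot_power = 75    # watts drawn through the x16 slot itself
six_pin    = 75    # watts per 6-pin PCIe connector
eight_pin  = 150   # watts per 8-pin PCIe connector

# A card with one 6-pin and one 8-pin connector tops out at the oft-quoted 300 W
single_card_budget = slot_power + six_pin + eight_pin
print(single_card_budget)  # 300
So a 400 W draw really would be outside what a single card is allowed to pull through the slot plus a 6-pin and an 8-pin, if those claims are to be believed.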
Wow, that's awesome, Fermi Ductape FTW!!! :D
Quote:
When I get my beta board in, I will record it and send links to you all. Right now, our solution is using a dual board solution that uses one PCI express slot, but two 6-pin and two 8-pin connectors on a 1200 watt power supply. My issues with this are that it's duct taped together. That wasn't a joke. It is duct taped together right now. The rubber support grommets broke on the back.
I just do the code, and I can tell you that it is harder to code for this right now than it was when Cell first came out. At least nVidia provided a basic translation protocol script for their APIs.
When they have Fermi working, and they will, it is possible it could be completely different from what we are hearing today. All I know is we have a Fermi compatible unit. They told us to ensure the performance is adequate on this setup, but to leave some headroom, so it leads me to believe the card will not be as fast as our beta board(s).
In 2 weeks, we are "SUPPOSED" to get a Fermi beta board... the official reference board. The leaf blower on the beta board is a joke as well. I swear if it didn't have fan blades it could suck me off.
I have said time and time again that I am a NVIDIA FAN BOY AT HEART!!!!! Don't forget that. I really want Fermi to work. I am also pissed about all this crap and hoopla we have to swim through.
I would be quite surprised if it were really the case. To me, it sounds like a pissed-off ATI fanboy or a disgruntled Nvidia supporter.
Honestly, I'm a fan of ATI myself, but I'm holding off until Nvidia releases their Fermi to buy my next video card. I'll go with whoever offers me the most bang for the buck :up:
Even though I do not have the required technical knowledge to understand whether he is talking BS or not, he does seem a bit odd. Anyway, apparently (according to a poster in another forum) Fermi should be released in March (nothing new, I know). One month left!
He claims that the chip is 2025 mm². Seriously. Read his "specs" post. He also says it's gonna have a 512-bit memory bus.
I totally don't get what he is talking about with the two cards thing.
He's saying that Fermi is Tesla with an additional display output solution.
Hmm.. This thread is worthless without picherz!
Seriously.. I'm sure that everyone here would really want to see a duct-taped graphics card.
That's called teasing ... :p:
Quote:
Gimme a sec... I need to find a USB cable for this. They don't let us use web cameras, so I will use my cell phone.
Going up stairs now to the tech lab. brb.
Well, if he is telling the truth, the only way he could get fired more easily is if he puts his ID card in those pictures :p:
haha yeah, I guess there are about 3 possibilities here: his info is correct and he is telling the truth, he's spreading complete and utter FUD, or he could be intentionally lowering expectations knowing that the thing rocks, so it will make people's eyes pop out of their heads when they see the numbers. My first impression was that he truly has correct info, but who knows.