Yep thanks hobbes
:rofl:
Joking aside, 20% faster for the Fermi we will see in March sounds good... especially as we know there is a more powerful variant coming in Q3. What are the odds we see the 360 six months before the 380?
As in, March for the 448 piper (20% faster than the HD 5870) and September for the 512 piper, which would trade blows with the HD 5970?
ATi will most likely release an HD 5890 come May, which would significantly narrow the Fermi performance lead to just a few percent.
If ATi can get a 2GB HD 5890 at 950MHz or even 1GHz out of the door, it will be the first time I have purchased an ATi card since the Radeon 9700 Pro. If not, then I will most likely stick it out with the working GTX 285 OCX I received from RMA until something with more VRAM comes along.
John
It seems both the cut-down and the full part will be launching in March, placed on top of the GTX275 in the price list. Dual Fermi a couple of weeks after that. Can't say if the cut-down part is 384 or 448. TDP for the "big" part should at least match the HD5970.
Really?
Hmm nice. So, what IS coming in Q3? (other than the Daddy of all Quadro cards).
Could this be an Ultra Part?
What everyone needs to remember is that the ASUS M.A.R.S. was essentially a rebadged Quadro FX 5800.
Whatever is coming in September is going to be huge (6GB of VRAM for a start). Could this be the dual Fermi?
Perhaps the cut-down part has 1.5GB of VRAM, the full part 3GB, and the dual part "6" GB?
It would make sense, as OpenCL, DirectCompute, CUDA etc. love the VRAM too, you know...
John
5970's TDP is 294W iirc.
I believe a dual Fermi will come soon. One of the reasons NVIDIA has managed to build quite a number of loyal followers over the past years was their "fastest card on the market" claim. I don't think NVIDIA would want to lose that claim, especially after all the heat Fermi has taken.
For a dual Fermi, they could specifically bin for chips that can undervolt, lower the clocks and/or disable some cores (like they did with the GTX295). A single Fermi needs to push the power envelope to reclaim the "fastest GPU" title, but NVIDIA will manage a dual Fermi as well, IMHO.
I'm still running G92s; they've served me well. It's a good product when priced right. I just want to see at least 3 GF100-based GeForce cards.
I agree with you, we can wait; true performance is really a year away with driver maturity....
btw, that ATI marketing chart is probably the worst piece of negative PR I've EVER seen.... :shakes:
Yeah, right...
Quote:
GTX395 is 60% faster than GTX380
GTX395 is 70% faster than HD5970
Release Date : May 2010, Price : 499-549 USD.
I'm talking based on the points neliz made. He said that a single Fermi's TDP would be 300W. Now you could take a stripped Fermi part and downclock it even further, maybe like you said, to make a dual Fermi, but it would still have a 300W TDP and it would be SLI.
So a dual part drawing the same power as the single-GPU part, and relying on SLI, would probably end up actually slower than the single-GPU part. I don't get how a dual Fermi is going to be made if a single Fermi draws 300W.
$499-549? FAIL. Also, 640-bit? Man, if it could pack all those things listed at 40nm, then this card would probably have to be at least 35~40cm long, which I doubt will happen.
At $500, $600 and $700? Two single-GPU parts at launch, the cut-down and (probably) the full product, although they could also launch a 256/320/384 or 448 SP part. Remember, nvidia says "up to" 512 CUDA cores... and they don't want to be caught lying. The dual Fermi part is definitely a number of weeks after that.
I also wouldn't be surprised if the boards shown at CES were 256sp parts. Since most of the demos ran at 720p and even that simple rocket sled demo turned into a slideshow once they turned tessellation on, they must be hiding something.
Now, that rajiff the thief (whatever) guy can either deny or confirm it, but in March we will see one huge *ss Fermi-based chip and one of more decent size. It would also be nice if he confirmed the three different parts available in this and the next quarter. So far he seems to be telling me less than people who actually are under NDA.
Their single card (GTX380) won't regain the performance crown, so I wouldn't be surprised if they release a dual-GPU Fermi shortly after their first single-GPU cards, but let's see how the GTX 380's TDP and temps go...
As annihilat0r pointed out, there's just no space in a 300W budget to squeeze in two GTX380s... hell, even two 360s would be a challenge, and that would require significant mumbo-jumbo on the binning side... but... they can always stitch something together like they did with EVGA... you know, the GTX275+GTS250 freak... but this time with a GTX360 and... err... a GTS340!? They have the know-how, and they'll have a bunch of 360s if the rumored yield is true ;)
Power draw scales roughly with performance on a given node and architecture. What I don't get is this: if neliz is right and the GTX 380 will draw 300W, then there can't be anything faster than that on the current node and architecture. Whatever they do to cut Fermi down to put two of them on one card, the result must still fit in 300W. So I am guessing that Fermi won't be 300W but something like 240 or 250W. That way, a dual version might be at least somewhat faster than the GTX 380.
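A quick back-of-the-envelope version of that argument (toy numbers of my own; it assumes performance scales roughly linearly with board power and ~85% SLI scaling, both big simplifications):
Code:
# Sketch of the 300W argument: can a dual card beat the single card
# if the whole board is capped at the PCIe power limit?
PCIE_LIMIT_W = 300  # 75W slot + 75W + 150W connectors

def dual_vs_single(single_tdp_w, sli_scaling=0.85):
    """Performance of a dual board (two cut-down GPUs sharing 300W)
    relative to one full GPU at single_tdp_w, assuming perf ~ power."""
    per_gpu_w = PCIE_LIMIT_W / 2
    per_gpu_perf = per_gpu_w / single_tdp_w
    return 2 * per_gpu_perf * sli_scaling

print(dual_vs_single(300))  # ~0.85 -> slower than the single card
print(dual_vs_single(250))  # ~1.02 -> barely breaks even
print(dual_vs_single(240))  # ~1.06 -> only now somewhat faster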
It's going to be interesting to see if nvidia can make a part stronger than the 5970 that's still under 300W. If there were no 300W limit, things would get really interesting from both parties.
And hot... I know power consumption doesn't always mean more heat, but chances are a 300W chip will dissipate more heat than a 200W chip... Imagine over that :D Of course, it's no big issue in a forum where people have enormous quads (or hexa for lucky bastards :D) overclocked and enormous multi gpu setups :D
They didn't turn on tessellation, it was always on. They turned on wireframe which results in lots of overdraw and additional geometry (the white lines you see are actual line primitives), hence the slowdown. There's enough bad news out there that you don't need to imagine it as well ;)
I can no longer insult "The Big One":confused:
(I was going to write something about wireframe being cheap back in the day, but then I noticed in the video that the demo defeats part of the purpose of tessellation, since the detail does not decrease farther from the viewpoint. Pause it at 3:48 and the amount of polys on the mountain in the background is just absurd.)
Anyway, not bad for a (my guess) 384SP part.
Lol, why would you want to insult the "Big One"? Wireframe is cheap when it's wireframe only. When it's regular rendering + wireframe it's not cheap (wireframe adds even more work). In terms of whether they're doing dynamic LOD or not, does it really matter for the purposes of the demo?
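To put rough numbers on "wireframe adds even more work": a toy count of my own, assuming the overlay submits one line primitive per triangle edge with no edge sharing (a real renderer would share edges, so the real overhead is lower):
Code:
# Primitives submitted per frame: solid-only vs solid + wireframe overlay.
def primitives(triangles, overlay=False):
    solid = triangles                          # fully shaded triangles
    lines = 3 * triangles if overlay else 0    # one line per edge, no sharing
    return solid + lines

tris = 2_000_000                   # hypothetical tessellated frame
print(primitives(tris))            # 2,000,000
print(primitives(tris, True))      # 8,000,000 -> ~4x the primitive work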
Mmmm... I find it interesting that no one has taken neliz's hints to heart. ;)
Beating out the ATI 5870, the new GF100 Fermi architecture has been thoroughly tested in our private labs and has been found to be the fastest in ALL of the following popular, totally and completely non-influenced games. We promise.
FermiMarks10
Call of Fermi 5
Fermi Cry 2
World of Fermicraft (WoF)
Grand Theft Fermi IV
Fermilands
Fermiout 3
Left 4 Fermi 2
Age of Fermi III
Alien vs Fermi
Assassin's Fermi II
Resident Fermi 5
Fermi Trek Online
Fermisys 2
With the lead in so many popular games the debate is over. GF100 is the shizzzznizzle!!!!!111one :ROTF:
I still doubt it's actually drawing the wireframe that throws it off (if it were, the Heaven benchmark and the AVP demonstrations would show it too); it's the insane amount of extra polys it has to render once they shrink to sub-pixel sizes at those distances... now that's where it's going to hurt.
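For reference, this is the kind of distance-based LOD people are saying the demo doesn't seem to do. A minimal sketch with made-up constants, just to show the idea of keeping distant patches from exploding into sub-pixel triangles:
Code:
def tess_factor(distance, base_factor=64.0, ref_distance=10.0, min_factor=1.0):
    """Tessellation factor for a patch: full detail at ref_distance,
    falling off with 1/distance beyond that, clamped to [min, base]."""
    factor = base_factor * ref_distance / max(distance, ref_distance)
    return max(min_factor, min(base_factor, factor))

for d in (10, 50, 200, 1000):       # distance from the camera
    print(d, tess_factor(d))        # 64.0, 12.8, 3.2, 1.0 (clamped)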
Every invited journalist has the complete Fermi GeForce slides now. The NDA will lift in a few days, but I hope someone breaks it and publishes something on B3D or somewhere in Asia, like always. If you want to know the approximate performance, look at my previous post here in this thread. Don't worry about TDP; at CES there were only first-silicon, unfinished products. Remember the first black press samples of the 4870X2 with a 400W TDP?
i hope they have a bunch of fun apps like this rocket sled thing
Did you guys see this vid? The rig doesn't crash in this one.
http://www.youtube.com/watch?v=6RdIrY6NYrM
Did you see nVidia's statement about Fermi on its website?
Certain statements in this press release including, but not limited to, statements as to: the benefits, features, impact, performance and capabilities of NVIDIA Tesla 20-series GPUs, Fermi architecture and CUDA architecture; are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: development of more efficient or faster technology; design, manufacturing or software defects; the impact of technological development and competition; changes in consumer preferences and demands; customer adoption of different standards or our competitor's products; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission including its Form 10-Q for the fiscal period ended July 26, 2009. Copies of reports filed with the SEC are posted on our website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
So nVidia says it could lie about Fermi and get away with it :) ? (For example: 512 CUDA cores - oops, we couldn't deliver that... you can have 448 tops.) What do you think?
What's this doubt based on? Wireframe off it's fast, wireframe on it chugs. Pretty straightforward.
It looks like any other disclaimer to me. Basically it means that what they say today is based on what they know today and that things could change in the future.
The bolded part looks very out of place, or at least very unfortunately worded. Better wording would be something about not guaranteeing that retail products will have the same clocks as the samples.
But to change the official specs of cards? Has that ever happened before? I mean the rumors of retail Fermi not having 512 CUDA cores, but fewer than that.
They never released official specs.
You have official specs when they release final products like the GTX 380 or along those lines; that is when you have official specs of official products.
Of course, it would suck if they cannot deliver the full 512 cores; it would be a self-inflicted punch in the face, just like the 2900 XT was for AMD/ATI 3-4 years ago.
Again, nvidia's official word is "the Fermi architecture supports up to 512 CUDA Cores".
That means we could see a launch with a 360 (256CC) and a 380 (320CC), all the way up to 448/512 CCs.
Maybe they'll do a 320CC 360, a 448CC 380 and a 2x256=512CC 390. That's a win over AMD in every segment.
They leave the speculation to all of us.
To me it reads as covering "unexpected problems with integrating the design on actual silicon". I hope TSMC will get their process to a good level, and I hope any problems are not caused, as one rumor said, by nVidia not following TSMC's recommendations on silicon design.
Hope we will get some competition back in the game.
I doubt a 2x256 would be faster than a single 448, since it's going to be SLI. But still that's a pretty good idea methinks.
What's this NDA that's going to lift in a few days? And someone on another forum, quite rightly, asked why the date of this NDA is one week after CES. Probably the only explanation is that the NDA'd information points to the fact that Nvidia can't get the performance crown back with Fermi, and they didn't want a buzzkill at CES. Any other ideas?
'bout time for the thread to turn into the usual FUD/troll-alike/flames...
Says who? nvidia... and you believe them?
They've been talking about GPGPU and Tegra as big cash cows for 2 years now, every time they are questioned about the success and profits of their desktop and workstation parts, and they need something to distract investors ;)
But how much have they made with GPGPU so far?
How much have they made with Tegra?
Sure, those markets have potential and their products do too, I guess... but does that pay their employees' salaries? It only does if their employees get paid in stock, 'cause on the stock market claiming to have a big product tomorrow means swimming in cash today, but in the real world things work differently :D
Traditionally, yes... but then what was all the hype at the end of last year?
What was all the hype at GTC? If that wasn't an attempt to get people to camp on their cash, then what was it?
It didn't work very well, and now that they have actual Fermi silicon and are close to launch, supposedly, they have the chance to do some REAL PR damage and get people to camp... so why hold out? To keep ATI in the dark about the exact specs and perf numbers for a couple of weeks? As if that would make any difference to ATI's PR campaigns or future designs... a few weeks are nothing in that regard, and ATI has a good enough idea of where Fermi perf is at already and probably has a PR campaign lined up already...
I'm pretty sure they don't show anything because it's not that great... they need more spin power to make the numbers look great, as the numbers themselves aren't overwhelming...
And they can't cut the prices of those parts that much 'cause they aren't making much with them as is... so what does that mean for prices?
Historically nvidia's strategy has always been to offer slightly-to-notably higher perf than the competition and charge extra for it.
As I see it there are 2 possible scenarios:
1) The 360 beats the 5850 and 5870, in which case the 380 should beat the 5970.
2) The 360 is at least as fast as a 5850 but doesn't beat the 5870, and in that case the 380 probably can't beat the 5970 either, and nvidia needs a dual-GPU card (;))
I'm pretty sure nvidia will aim for the first, but it'll come down to yields... if they have a notable amount of chips with fewer working blocks than what they need to beat a 5870, then they will HAVE to create a part that sits between the 5850 and 5870.
Anyways, it boils down to this: the 360 will probably cost $400-450 and the 380 will cost $650-700, and a 395 would probably cost $900+. Here's a big hint imo: a 395 would be pretty expensive, and despite the thermal and power issues, even a $999 retail price would mean that nvidia makes less money on it than on a 380. Which makes no sense... why would you launch a high-end product that either costs so much nobody buys it, or costs close to your current high-end product but carries a lower margin? The only reason would be PR, to have the performance crown...
So if nvidia prepares a 395 already, it means they fear they won't capture the perf crown with a 380. That they prepare it doesn't mean they will launch it though... they might have it in the pipeline for a while, like they did with the 8800 Ultra...
If nvidia really preps a 395, to me that means a 380 won't be able to beat a 5970, or at least not notably.
Yep, totally agree... G92 was a very nice chip... G200 is terribly inefficient compared to it...
I don't think so...
I used to think the same, but drivers don't improve performance that much... there have been several articles proving this myth wrong... over a year the perf usually only improves around 10%.
There are just a lot of bugs with a game here and there that get fixed, and that then gives a unique performance jump of 30% or 50%...
No. What? There's at least a 40% perf difference between the 5870 and the 5970. Unless the difference between the 360 and 380 is HUGE, the 360 beating the 5870 by no means implies the 380 defeating the 5970.
What I think is that the 360 will be around the 5870 and the 380 will be 20-30% faster than that, which places it well above the 5870 and a few percent behind the 5970.
The 5970 may be 40% "faster" than a 5870 in max FPS; it sure isn't by that much in minimum FPS at high settings such as 2560x1600 with AA, not by a long shot, sometimes barely doing better than the single 5870. I don't care if the max is, hypothetically, 150 when the single card gets 120 max, if it maintains 30 min vs. 35 min and the same average... CrossFire/SLI scaling is still very driver-dependent and has a long way to go to make any real sense to me. The gains just aren't that amazing other than for pure benching. Most of the cases with 40%+ gains are tests where the framerate is 8-10, boosting to 12-14, which still leaves it 100% unplayable ;).
I was talking about average FPS
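For what it's worth, here's how both readings can be true at once: averages up a lot while minimums barely move (toy numbers of my own, not review data):
Code:
single = {"min": 35, "avg": 80, "max": 120}   # hypothetical single card
dual   = {"min": 30, "avg": 112, "max": 150}  # hypothetical dual card

for metric in ("min", "avg", "max"):
    gain = (dual[metric] / single[metric] - 1) * 100
    print(f"{metric}: {gain:+.0f}%")   # min: -14%, avg: +40%, max: +25%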
Well, it all depends on yields... if they CAN make full-blown Fermis with 512 SPs, they will definitely sell them... unless even those won't beat a 5970...
Assuming a 512 SP full-blown Fermi does beat the 5970, then if yields are bad they can still decide to make the 360 a 256 SP part and sell all the "trash" that way, if that's still fast enough to beat the 5850 (it should be... slightly more SPs than a 285, more memory bandwidth, maybe higher clocks...)
In that case you'd have a big difference between a 360 and a 380: a theoretical perf difference of 100%, realistically more around 50% I guess...
that would make the 360 a 384sp part and the 380 a 480sp part, right?
And that would explain why they need a dual-GPU card, to beat the 5970... I think nvidia will price that dual Fermi card WAAAy high though, as otherwise they won't make a lot of profit on it, and they have problems ramping up production, so selling cards with 2 GPUs while they have problems getting good yields is not a good idea... a dual Fermi has very good chances of being a record breaker in more than just performance benchmarks; it has very good chances of being the first consumer graphics card to reach or even surpass the $999 mark :D
But you know what?
People have been spending that on CPUs for years, and a high-end VGA is 10x more worth it than a high-end CPU imo...
Actually I'm surprised high-end VGAs haven't been selling for more these past years...
It sounds like a normal disclaimer written in legalese to CYA. They probably realize they have a penchant for talking big and that could ultimately get them in trouble with the shareholders if things don't work out perfectly.
But more than that, it's also the truth. Sometimes things don't work out as expected. Sometimes something looks great on paper but doesn't measure up when produced. Sometimes external factors conspire to ruin a good product. Et cetera. It's easy for us, being so cynical, to look at those statements and think that they might just be using them as cover for lying. But the reality is that they probably do think/hope it will be awesome while having to admit there might be problems - don't ding them for being honest, even if it's the legal department making them do it.
As for what those bolded statements mean: the first is probably a reference to airflow (i.e. it's hot, so if you put it in a case without enough airflow it will throttle). The second is probably a reference to not reaching target clocks or shaders.
If they're going with 256 CCs, then that by itself shows they're in bad shape. Think about it: that's half the CCs turned off on a die that's very large and costs a lot to make. The only reason I can think of for them doing this is not to make money but to find a way to break even and not be in the red. They would be selling 256 CC cards at a loss, but it's better than trashing the dies. Even a 320 CC part will be hard to break even on with the low yields and the large die size.
The Deep Dive meeting is over, and we already know how powerful GF100 is - and IT IS! Performance is better than expected; most people here will be searching for words and feeling too shy about the crap they posted in this thread. We can only laugh now, because Nvidia did it! Perf in the Unigine DX11 benchmark is spectacular; the Radeon HD 5870 is like a low-end toy for kids against GF100.
PS. Yeah, GF100 has 512 CCs, and the GPUs are in mass production now; plenty of manufacturers have their first GF100s in house.
I feel a few things need pointing out here:
1- First, you're probably overexcited about nvidia's "tweaked for fairness" benchmarks.
2- We're talking about Unigine here, what about other benchmarks? No games?
3- We need some settings+actual numbers otherwise your statement just sounds like more hype.
PS: Do you have info on the lower range card (aka 360)?
Are you basing this on Nvidia's PR slides?
'Cause if so, we all know they're overstated (this is true of ATI too).
Seeing as how you've posted that actual cards won't be sent out 'til February, I'm going to assume you got your numbers from PR slides - so how about game performance, and not benchmark performance? And what you're saying - that it's better than everyone expected - seems to run counter to what everyone else has been hearing, and that includes hints from some Nvidia folks that the design was incredibly hard to achieve.
Sorry if I'm not exactly trusting of a new poster from China who's only here to post in this thread, and from where I've heard more than a few rumors turn out to be false.
It seems like you invented all the things you write. CES was a free event and nobody there talked about Fermi like this. If nvidia does not post any benchmarks of Fermi cards, and does not talk about them in concrete terms, we can speculate all day long... still waiting for the cards promised in September... let's wait without saying uncertain things...
Actually, all he has posted has turned out true so far (the 3DFinity thing or whatever, the deep dive, the NDA date). And seeing as he said the power consumption of the GTX380 was also huge, it's probably in the ~250W range, or else it wouldn't have been all that huge. A 250W card should beat a 188W card.
I haven't read the whole thread but does NVidia itself expect Fermi 380 only in September 2010?
That's basically on par with an ATI refresh of the 5 series. Should be interesting if true.
Nvidia's answer to Eyefinity has been known about for quite a while; I first heard about it over a month ago. The deep dive and NDA expiration date had been known for a couple of weeks.
Not saying he is wrong or pulling our leg, but all that info has been floating around for quite a while.
Exactly why it's nice to use a chart and see what does and doesn't change between multi-GPU setups or when changing brands.
I also think it's up to the user to determine the best settings for their games on their rig. If somehow turning on 8xAA on a 5970 cut the FPS in half, leave it at 4x and turn on the ultra-high textures (you get the idea). Reviews are great, but they never give 100% of the picture. (I still hate how many people dis the 2900, which is a beastly card as long as you don't need AA; it handles high res and lots of effects quite well.)
Just need some damn games worth playing these days...
TSMC recently announced to shareholders and many financial news websites that ALL yield problems with manufacturing complex 40nm graphics chips have been resolved. This is further proven by how ATi is only now able to deliver large quantities of chips to their 3rd-party manufacturers and cards to consumers.
I've seen news articles from tech websites posted here as well.
nVidia did it!? What... 'cause I don't see anything in stores yet.
Also, I'm sure you love playing Unigine for many hours, but it's not the type of game most people play. Performance in benchmarks can be misleading and quite different across a variety of games.
R600 used more power (and had more bandwidth) than G80. Are you suggesting 8800GTX users were stupid, and that X2900XT was faster?
Likewise, the nVidia FX 5800 Ultra used a lot of power too. Performance was good as long as it was DX8. In contrast, the FX's DX9 performance was spectacularly miserable at best. Anybody here remember the poor rendering quality and low performance?
High power usage just means better home heating.
http://forum.donanimhaber.com/m_36999404/tm.htm
It says that "volume production" will start in the 3rd week of February, and that some cards will be available in March, but not a lot.
I think what this article means by volume production is the placement of the GPUs on the PCBs, and what JHH meant by volume production is that the wafers are in the oven at TSMC.
While synthetic benchmarks do occasionally favor one particular hardware design over another, especially with a little influence from said particular company... the facts tend to be quite stubborn when it comes to Unigine.
First off, ATi (now AMD) has been listed on their website as a development partner since I first heard about Unigine a few years ago. And after just recently visiting their website, it seems they finally added nVidia... albeit at the very bottom of their "partners" list.
http://unigine.com/company/partners/
Also, I believe everyone here knows that ATi has been way ahead of nVidia in making DirectX 11 hardware, which Unigine used for the majority of its development cycle, since we all know nVidia only recently had Fermi samples running well enough to show off in a company-controlled demo. Do you honestly think nVidia would have handed out any of these early Fermi cards 6 months ago to a small 3rd-party development team?
The R600 was just a crappy design by a graphics company hurting financially, only saved by AMD buying them up.
Quote:
R600 used more power (and had more bandwidth) than G80. Are you suggesting 8800GTX users were stupid, and that X2900XT was faster?
So we are to automatically assume Fermi is going to be another flop like the FX? Are you forgetting that once we got basic architecture specs for the FX, everyone at B3D and other well-informed sites agreed it was going to suck? Especially since it took about a year and countless "tape-outs" just to get a chip stable enough to show the public a demo.
Quote:
Likewise, the nVidia FX 5800 Ultra used a lot of power too. Performance was good as long as it was DX8. In contrast, the FX's DX9 performance was spectacularly miserable at best. Anybody here remember the poor rendering quality and low performance?
I think people with ample skill at reasoning can see that it was only logical for nVidia to delay Fermi because of the huge problems with 40nm production at TSMC, something that plagued ATi for the past 5 months. So instead of releasing a product that had extremely high production costs and very low yields, and was almost impossible to purchase... nVidia decided not to release specs or performance data so they and their 3rd-party card manufacturers could sell off warehouses full of product.
I don't think nVidia is losing much sleep at night; being #2 in performance for a few months is not as bad as the huge costs of full-scale manufacturing at 40% yields AND having the performance crown with only a small number of those costly cards available for the public to purchase.
After seeing that Unigine and Rocket Sled video on YouTube and the incredible framerates and detail with tessellation running... I have a feeling you won't be disappointed.
Quote:
If true, sounds much better than expected. And 512 CCs, that's good too.
I really hope this will be another G80.
.... the people who spent the past few months beating their chests over the HD5xxx cards' performance will just grab some tissues and sulk for the next year or two until ATi gets the crown again for a short time. I'm sure all the sane ATi and nVidia fans will be glad to have the endless thread-crapping, lies from Charlie, and other nonsense just go away for a while. I sure as heck will.
This just in!! Breaking News!!!
Attendees at PDXLAN will be among the FIRST LAN attendees to see GF100 in action!!! That's right, the NEXT, NOT EVEN RELEASED video card from NVIDIA!!!! Taking it a step further, CLUB SLI members can win one of the first ones AUTOGRAPHED by the CEO of NVIDIA!! ZOMG!!!!! To win one, it's all part of a contest taking place ONLY at PDXLAN 15!!!
All contest details at www.pdxlan.com - Last day to get tickets to PDXLAN is Tuesday night!! Register at pdxlan.com!!
Nice to see some marketing/publicity about the GeForce brand itself again.
Hehehe, well, there's always hope... :D
In the past it was announce and launch on the same day.
Then it was announce first and launch later.
In recent years that became: announce, then wait, then launch, then wait, then sell.
:D
nvidia announced Fermi at GTC; they will launch it in February, 4 months later... I'm really curious when it will actually sell :D
Yeah, they're basically buying good PR by selling cards at a loss then... wouldn't be a first though, afaik :D
Cool, sounds like the event was pretty nice if they got you all excited like this :D
Why did they do this behind closed doors and not make it public, though?
I'm still skeptical of the claims they threw at you and others during that presentation...
Well, TSMC does seem to have fewer yield issues now. Put it another way: at least ONE party seems to be happy with the clocks they're reaching on some products.
nvidia's deep dive meeting:
"It was all technical. But no clock speeds, power consumption or benchmarks were being shown… yet at least," the source confirmed at the time of writing.
source
Thanks blob; sadly it was just an oral walkthrough of the white paper...
Using stupid logic, one can say that if a GF100 shader unit performs at least 5-10% better than a G200 shader unit, then at similar clocks a GF100 with even 448 shaders can beat a GTX295 without any problem, and a 512-shader version can give an additional 15-20% boost on top of that.
This is of course just stupid logic, with no idea about the bandwidth or anything else...
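Putting the same stupid logic into numbers (everything here is an assumption: roughly 80% SLI scaling for the GTX295, the 5-10% per-shader uplift from the post above, and bandwidth/clocks/ROPs ignored):
Code:
def relative_throughput(shaders, per_shader=1.0, scaling=1.0):
    """Crude shader-throughput proxy; ignores bandwidth, ROPs and clocks."""
    return shaders * per_shader * scaling

gtx295    = relative_throughput(2 * 240, scaling=0.80)   # ~384 "effective" units
gf100_448 = relative_throughput(448, per_shader=1.05)    # ~470
gf100_512 = relative_throughput(512, per_shader=1.05)    # ~538

print(gf100_448 / gtx295)     # ~1.22 -> clears the GTX295 under these assumptions
print(gf100_512 / gf100_448)  # ~1.14 -> the extra 64 shaders alone add ~14%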
EDIT: I also got this in the mail:
"Dear Ajai dev,
You’re Invited: Join us for a webinar on Wednesday January 13, 2010, 10am – 11am pst
Do Science with Tesla Today. Be First to do Mad Science with Fermi.
NVIDIA’s next generation CUDA architecture, code named “Fermi” is the most advanced GPU computing architecture ever built. Join us for a live webinar to learn about the new Tesla GPU Compute solutions built on Fermi and the dramatic performance capabilities they offer customers who are tackling the most difficult, compute-intensive problems. In addition you will learn about our limited time offer, the Mad Science Promotion, whereby you may qualify for a promotional upgrade to a new NVIDIA Fermi-based Tesla product when you purchase a NVIDIA® Tesla™ C1060 GPU Computing Processor or a S1070 1U GPU Computing System today.
Who Should Join: Researchers, Scientists and other Professionals looking to accelerate compute-intensive applications with the GPU
Presenter: Sumit Gupta, Sr Product Manager, NVIDIA GPU Compute Solutions
When: Wednesday January 13, 2010, 10am – 11am pst""
Maybe not 6 months ago, they didn't have anything 6 months ago...
Now about 2 months ago, they definitely did.
Hmmm... ATi was hurting financially? They were actually doing very well at the time. Pretty sure everyone agrees AMD bought them at the wrong time.
Actually, the only thing that was limiting AMD/ATi was capacity, not yields.
Ahhh... so you are one of the few who think Nvidia ran hot lots and risk production, and did 2 respins, for "fun." Good one!
Nvidia would be in heaven if they were even close to 40% yields.
Hmmm... you are sure doing a good job here. Thanks for the facts and unbiased comments. As much as some hate Charlie, I'm not sure where you can say he lied. The only recent thing I can remember is that he got some very wrong info on the whole Lucid ordeal.
What are you talking about? Saaya has agreed with me plenty of times. We both agree that nvidia's business model is doomed. You can't COMPETE in the GPU market if you aren't making GPUs, period. Fermi is a trickle-down product that has no business being in a desktop; it is designed only to be a GPGPU cruncher. The only way nvidia can truly compete is if they split their designs and make dedicated PPUs and dedicated GPUs. You can't have one GPU that works both ways efficiently! :up:
It is a phenomenon of its own that it's OK when Intel sells CPUs for $560+ and $1000+, but when a VGA manufacturer offers a killer card, with a processing unit that is way more complex plus a bunch of GDDR5 memory chips, for 300-400 USD, everyone complains about how expensive it is!
So I'm completely with you on this one!