Opening Keynote with Jen-Hsun Huang (30 Sep 2009)
Quote:
1:30:50 And so, I look forward to announcing the GeForce, based on the Fermi architecture, in the near future. Today I'm just gonna have to ask you to wait just a little longer.
That's not the point. To be able to run it at all, the architecture has to be working 100% like it should: all instructions and combinations of instructions need to behave exactly as they should. There are usually some bugs, and yes, you can work around them at the compiler level AFAIK, but it takes time to figure that out... you need to know about a bug first before you can work around it, and do it in a way that doesn't cost you a lot of performance.
They showed GT300 silicon that was supposedly so fresh out of the oven it was still steaming, yet they had it running highly complex math, pounding every transistor of the new pipeline like there's no tomorrow, at very high performance and without any bugs. I'm not saying it's impossible, but it's definitely something that raised my eyebrow, especially because it's not the only thing they showed supposedly running on GT300. According to those demos, GT300 was 100% done: no bugs, no driver issues, nothing... just waiting on lazy old TSMC.
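On the "work around it at the compiler level" point, here is a minimal CUDA sketch of the idea, done in source with intrinsics rather than inside the compiler itself. The erratum is purely hypothetical, and the macro name and kernel are invented for illustration; the only real part is the technique of forcing separate multiplies and adds (via the __fmul_rn/__fadd_rn intrinsics, which the compiler never contracts into fused multiply-adds) so a known-bad instruction is simply never issued. The catch, as the post says, is that you can only do this once you know about the bug.
Code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical erratum toggle (invented for illustration only).
#define WORKAROUND_FMA_ERRATUM 1

__global__ void dot3_kernel(const float3 *a, const float3 *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
#if WORKAROUND_FMA_ERRATUM
    // __fmul_rn/__fadd_rn are never contracted into a fused multiply-add,
    // so the (hypothetically) broken fused instruction is never issued.
    float x = __fmul_rn(a[i].x, b[i].x);
    float y = __fmul_rn(a[i].y, b[i].y);
    float z = __fmul_rn(a[i].z, b[i].z);
    out[i] = __fadd_rn(__fadd_rn(x, y), z);
#else
    // Plain expression: the compiler is free to contract it into fused ops.
    out[i] = a[i].x * b[i].x + a[i].y * b[i].y + a[i].z * b[i].z;
#endif
}

int main()
{
    const int n = 1024;
    float3 *h_a = (float3 *)malloc(n * sizeof(float3));
    float3 *h_b = (float3 *)malloc(n * sizeof(float3));
    float  *h_o = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) {
        h_a[i] = make_float3(1.0f, 2.0f, 3.0f);
        h_b[i] = make_float3(4.0f, 5.0f, 6.0f);
    }

    float3 *d_a, *d_b;
    float *d_o;
    cudaMalloc((void **)&d_a, n * sizeof(float3));
    cudaMalloc((void **)&d_b, n * sizeof(float3));
    cudaMalloc((void **)&d_o, n * sizeof(float));
    cudaMemcpy(d_a, h_a, n * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, n * sizeof(float3), cudaMemcpyHostToDevice);

    dot3_kernel<<<(n + 255) / 256, 256>>>(d_a, d_b, d_o, n);
    cudaMemcpy(h_o, d_o, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %f (expected 32.0)\n", h_o[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_o);
    free(h_a); free(h_b); free(h_o);
    return 0;
}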
Bring it on! I want a 5870 X2 (which will probably release alongside GF100/GT300, whatever it's called) or Fermi. My GTX 285 stumbles in half my games (Fallout 3 is nearly unplayable, Risen lags like crazy).
You don't think it's a little weird that most G300 rumours are negative? Not saying it's specifically fanboys.
Quote:
Anyways, not sure where you think you read about a 16-core Larrabee @ 1 GHz being faster than a GTX 280... Certainly not at gaming, since they will need 32-64 cores on the final product.
It was in a white paper published by none other than Intel. Still, that's the first real performance figure for Larrabee:
http://techresearch.intel.com/UserFi...2009_FINAL.PDF
Well, with mods it is totally unplayable, and FO3 with mods is way better.
Also, benchmarks don't tell the whole story: the average frame rate in FO3 with a GTX 285 is supposedly around 70 at max details with 4x AA, yet I cannot get over 50 fps at max details with no AA in the default game at 1920x1200. And 50 fps in FO3 feels laggy; it's often closer to 30-40, too.
That surprised me too.
How often do you ever get a complex chip like that working flawlessly? Never. Many people agree that Core 2 Duo was a stunning breakthrough in performance for Intel, but just like the P4 and P3 before it, it had a long errata list. Thankfully, many of these bugs can be fixed in microcode and don't require any OS patches.
Although GPUs don't have an ISA as complex as x86 with its 800+ instructions, all those "programmability" features added with DX9, DX10 and DX11 add more logic and thus more chances for bugs. Fermi **MIGHT** have been running a limited set of instructions in whatever demos were real.
Finally, why it's much easier for AMD to get DX11 out the door: they already had 40nm chips for months, they've had GDDR5 for years, they've had DX10.1 since the 3870, and of course they've had a tessellation engine for years. The block diagram for the 5870 says it all: just double the 4870. Even the 5-way SIMD stayed the same. As for nVIDIA, each of the items listed is a hurdle for them to overcome.
I'm seeing a lot of conjecture on your part, but not much else. While I'm willing to give them the benefit of the doubt, you're simply doubting, based on nothing more than what you assume the application demands. GPGPU in general demands very little from a substantial portion of a graphics chip, particularly the texture units and ROPs. To claim all transistors need to be pumping at full throttle at all times is a bit silly. Again, they might also have had to clock it way down, use crazy cooling, high volts, whatever, to get the transistors (the ones related to computation) in working order. Who knows?
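As a rough sketch of that point, here is the kind of arithmetic-only kernel a compute demo could be built on: a crude brute-force N-body acceleration step in CUDA. Everything in it is loads, stores and floating-point math on the shader cores; there are no texture fetches, no raster output and no ROP/blend work anywhere. This is not a claim about what nVIDIA actually demoed; the kernel and its parameters are invented for illustration.
Code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Brute-force O(n^2) gravitational acceleration: pure ALU and memory work,
// no texture sampling and no ROP/blend output.
__global__ void nbody_accel(const float4 *pos, float4 *accel, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];                                   // pj.w holds the mass
        float dx = pj.x - pi.x;
        float dy = pj.y - pi.y;
        float dz = pj.z - pi.z;
        float distSqr = dx * dx + dy * dy + dz * dz + 1e-4f;  // softening term
        float invDist = rsqrtf(distSqr);
        float s = pj.w * invDist * invDist * invDist;         // m_j / r^3
        ai.x += dx * s;
        ai.y += dy * s;
        ai.z += dz * s;
    }
    accel[i] = make_float4(ai.x, ai.y, ai.z, 0.0f);
}

int main()
{
    const int n = 4096;
    const size_t bytes = n * sizeof(float4);
    float4 *h_pos = (float4 *)malloc(bytes);
    float4 *h_acc = (float4 *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        h_pos[i] = make_float4((float)(i % 64), (float)(i % 32), (float)(i % 16), 1.0f);

    float4 *d_pos, *d_acc;
    cudaMalloc((void **)&d_pos, bytes);
    cudaMalloc((void **)&d_acc, bytes);
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);

    nbody_accel<<<(n + 255) / 256, 256>>>(d_pos, d_acc, n);
    cudaMemcpy(h_acc, d_acc, bytes, cudaMemcpyDeviceToHost);
    printf("accel[0] = (%f, %f, %f)\n", h_acc[0].x, h_acc[0].y, h_acc[0].z);

    cudaFree(d_pos); cudaFree(d_acc);
    free(h_pos); free(h_acc);
    return 0;
}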
But neither of us are going to get anywhere with this. Like I said, debating this is a waste of time.
But gaming is still the main reason to purchase a brand new video card.
Provided it (Larrabee) is marketed as a "high-end gaming SKU", then I would agree. However, if Intel, not unlike Nvidia, caters to the whole CUDA/OpenCL type of deal, then obviously that is their target market. Just don't say that these products should be "only" for gaming, as that is ignorant to assume given the direction GPUs are going. At the end of the day, Nvidia and Intel don't give a rat's if the product is bought for gaming or for compute purposes. If they can meet a demand and be competitive, their product sells, they create revenue and shareholders are happy. Simple as that.
So this is gonna "hard launch" in one month, and we still know nothing? Yeah, right.
Too much talk from nVIDIA and no action. I want some new cards.
It does support DX11, right?
I wouldn't say we know nothing. In fact, we probably know more about it than we did about the 5870 at the same point, especially about the low-level stuff. Rumors currently suggest 128 TMUs and 48 ROPs, and that pretty much covers the gaming side of the hardware. We're just missing tidbits at this point, and of course clocks, which are always the last thing to be finalized.
At this rate, would it be unreasonable to say that we might only see real availability of Fermi in early Q2 2010?
GTX 380 vs 5950 will most likely play out the way GTX 285 vs 4850 X2 did. In games that are not CrossFire-optimized, the GTX 380/Fermi should win (a single 5850 can't destroy a GTX 380/Fermi no matter how badly Nvidia made the card; it has to beat the 5850 in games or it's a complete failure), and in games that are CrossFire-optimized I expect ATi's 5950 to win.
But the Nvidia card would offer more than just gaming: you get a partial/pseudo CPU that can, in theory, do most of the work ARM CPUs can do. It's totally up to the end user which card to buy. I'm on the value-for-money bandwagon, so how much the GTX 380/Fermi costs and how much the 5950/5870 cost are important factors for me.
Yeah, of course.
Speculations, speculations...
The 5870 is 2x a GTX 285. Are you saying that the 5970, being a 5870 X2, will be slower than Fermi? Delusional much? A single chip that fast would need such a massive die that the card would cost $1000...
All speculation, folks. From what I do know, I think ATI's idea of a graphics card is better than nVIDIA's. I want a graphics card, not a pseudo CPU. Bottom line, though: we really need new games; none of the current ones are going to push last-gen, current-gen, or next-gen hardware.