Man from Atlantis(B3D, DH, S|A, 3DC, OCN), MfA(G3D, CH), kaktus1907(XS,TPU,AT) and zennino
SIS 6326 > Ti 4200 > 9800XT > 9800GT > GTX 460
Celeron 366 > Celeron 1700 > Athlon XP 2500+ > E6300 > Q9650
Alice Madness Returns | Assassin's Creed: Brotherhood | Assassin's Creed: Revelations | Batman Arkham City | Battlefield 3 | Bulletstorm | Call of Duty: Modern Warfare 3 | Crysis 2 | Darkness II | Darksiders | Dead Island | Dead Space | Dead Space 2 | Deus Ex: Human Revolution | Dragon Age Origins | Dragon Age 2 | F.3.A.R. | F1 2011 | Half Life 2 | Hard Reset | Kane & Lynch 2 | L.A. Noire | LEGO: Pirates of the Caribbean | LEGO: Star Wars III: The Clone Wars | LOTR: War in the North | Mass Effect | Mass Effect 2 | Mass Effect 3 | Mini Ninjas | NFS Hot Pursuit | RAGE | Renegade Ops | Skyrim | The Witcher 2 | Tomb Raider: Underworld | Transformers: WFC | Trine 2
Yes, I mean in the context of graphics, which is mandatory for a gaming card. Physics is an extra that isn't mandatory. Also note that the figure is performance per watt, so it might be that the real performance difference is just 2x while the power draw is a lot lower (not that that's bad, just don't jump the gun yet about 3-4x "real" performance).
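To put rough numbers on that (purely hypothetical figures, just a C sketch of how the perf-per-watt arithmetic can work out):

```c
#include <stdio.h>

/* Hypothetical numbers only: shows how a ~3.4x perf/W gain can
 * come from just 2x raw performance combined with lower power. */
int main(void)
{
    double old_perf = 1000.0, old_power = 240.0; /* GFLOPS, watts */
    double new_perf = 2000.0, new_power = 140.0; /* 2x perf, less power */

    double old_ppw = old_perf / old_power; /* ~4.2 GFLOPS/W */
    double new_ppw = new_perf / new_power; /* ~14.3 GFLOPS/W */

    printf("raw speedup:    %.1fx\n", new_perf / old_perf); /* 2.0x */
    printf("perf/W speedup: %.1fx\n", new_ppw / old_ppw);   /* ~3.4x */
    return 0;
}
```

So a slide can honestly say "3-4x performance per watt" while the chip is only twice as fast in absolute terms.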
About tessellation I don't really know anything, but I'm under the impression that Fermi does it with dedicated units, so it has nothing to do with DP performance; correct me if I'm wrong.
i7 920 D0 / Asus Rampage II Gene / PNY GTX480 / 3x 2GB Mushkin Redline DDR3 1600 / WD RE3 1TB / Corsair HX650 / Windows 7 64-bit
In games, anything that needs more than 32-bit precision is done on the CPU; double precision is basically GPGPU stuff.
This smells like the same trap Nvidia fell into with the FX chips: DX9 only needed 24-bit precision for its shaders at release, but Nvidia built a 16/32-bit part that was fast at 16-bit and slow at 32-bit. That meant every game needed to be optimised with 16-bit shaders to come even close to ATI's 9k series.
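For anyone who hasn't hit the precision wall: a minimal C sketch (arbitrary values, nothing GPU-specific) of where 32-bit floats run out of digits, which is roughly the point where work has to move to the CPU or to DP hardware:

```c
#include <stdio.h>

/* float (32-bit) has a 24-bit significand, so 2^24 = 16777216 is the
 * last point where consecutive integers are exactly representable;
 * double (64-bit) still has plenty of headroom there. */
int main(void)
{
    float  f = 16777216.0f;
    double d = 16777216.0;

    printf("float:  %.1f + 1 = %.1f\n", f, f + 1.0f); /* stays 16777216.0 */
    printf("double: %.1f + 1 = %.1f\n", d, d + 1.0);  /* 16777217.0 */
    return 0;
}
```

Roughly: float gives you about 7 significant decimal digits, double about 16.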
Everything mentioned since the keynote started has been either proprietary (which means it'll only be supported if Nvidia's card succeeds) or aimed at building a supercomputer. And while I loved DEC Alpha technology, I've not tried to find one second-hand to own.
How can Nvidia even begin to call Fermi a 2009 product? The way things are going, NV will take until the end of 2010 to release a full DX11 lineup. Unless Jensen counts the woodscrew wonder as a product release.
As for 28nm, how is this going to save Nvidia? AMD will also move to that node (and might have something from GlobalFoundries as well). Nvidia needs to rethink their strategy, otherwise I fear they are going to get further and further behind.
OMG!! I just noticed, Day 2 of GTC is going to be with U of I at Urbana-Champaign!!! I got accepted there but didn't go. Everyone at my school goes there for parties though!
i7 920 D0 / Asus Rampage II Gene / PNY GTX480 / 3x 2GB Mushkin Redline DDR3 1600 / WD RE3 1TB / Corsair HX650 / Windows 7 64-bit
Let's see if GF110 will be a "mid-life kicker", which Huang said would be a product that launches in between each new NVIDIA chip.
Favourite game: 3DMark
Work: Muropaketti.com - Finnish hardware site
Views and opinions about IT industry: Twitter: sampsa_kurri
The first two pieces of info were obvious; the CUDA core count doesn't make sense IMO...
What else COULD they do?
They can either shut up, which isn't a good idea for a company, or they can try to focus on areas where they are still competitive... every company does that...
I don't like this marketing BS either, but that's really all they CAN do in situations like this, and the sad part is that for 99% of the people out there it actually works... so not doing it would mean they'd lose money...
According to the graph it's 5x, but if you check the actual numbers, you'll notice that Tesla is positioned too high on the graph and Fermi too low... which artificially inflates the jump to Kepler, making it look a lot more impressive than it actually is. According to the graph, Kepler will have 5 GFLOPS/W while Fermi supposedly has around 1 GFLOP/W... in reality, Tesla C2070 cards are close to 2.5 GFLOPS/W, so Kepler is merely double that, maybe slightly more, but definitely not the 5x jump the graph makes it look like.
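Redoing the slide's arithmetic in C with the numbers above (my estimates from the post, not official specs):

```c
#include <stdio.h>

/* Numbers as discussed above: the slide's plotted values vs. what a
 * Tesla C2070 actually delivers (an estimate, not an official spec). */
int main(void)
{
    double fermi_on_slide = 1.0; /* GFLOPS/W as drawn on the graph */
    double fermi_actual   = 2.5; /* ~real Tesla C2070 GFLOPS/W      */
    double kepler_claimed = 5.0; /* GFLOPS/W the graph projects     */

    printf("jump as drawn:   %.1fx\n", kepler_claimed / fermi_on_slide); /* 5.0x */
    printf("jump vs reality: %.1fx\n", kepler_claimed / fermi_actual);   /* 2.0x */
    return 0;
}
```

Same Kepler point, but the "jump" shrinks from 5x to about 2x once Fermi is plotted where it actually sits.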
Me too, but I don't think so... :/ They'd have announced it, and I think the hardware would be so similar it doesn't make sense... All they really need to do is build those massive chips on a node once they AND TSMC have some experience with it, not when the node is brand new... I think their strategy of having one big chip that does it all can actually work out then... but of course it looks like they'll throw a massive chip at TSMC again as soon as they announce their next node...
If Nvidia wants to own the high end, yes...
That's just Jensen marketing speak for "refresh" :P
So what current use is there for you, besides gaming, that you could put a GPU to today if you bought one?
And do you really need to buy an Nvidia GPU to do those tasks, or can the competition do it right now?
2012 release???
Could it be because TSMC canceled a node in between? You know... 32nm... and now they're going with 28nm. Maybe those delays are because of that instead of architecture design flaws...
Only 4 blades? LOL, I doubt that very much...
Why don't we wait to see the actual numbers for the big gun and how they compare... Then you can stop making those claims that it's nothing besides a rebrand... which an article by an AMD exec said it wasn't...
This is just more lip service...
The AMD fanboi force is strong in this thread...
But they are not Jedi yet
Joking aside, I can seriously see nVidia branching out their products purely to suit each niche market.
Raw DP performance will be reserved for Tesla-based products, then we'll see some sort of "halfway house" for the Quadro-type products with GOOD DP performance, and then mediocre-to-moderate DP performance for the consumer products.
What this means is that consumer end users get cheaper, less power-hungry, cooler-running cards.
GF104 was just the beginning....
Stop looking at the walls, look out the window
My GPU is a 470.
Bias or fanboyism can be misconstrued easily.
There's a difference between beating a dead horse and blindly arguing without actually integrating new logical input.
I'll beat down any manufacturer when it's enjoyable, but I'll buy whatever part is the best £/perf at the price I'm willing to pay.
I am not ignorant; bias is for idiots.
Let's face the elephant in the room: Nvidia is massively behind and not in very good shape to catch up.
Their strategy is to impose false GPU restrictions to upsell SLI to customers (fake performance restrictions) and lie through their teeth about upcoming products to stall customers.
Remember when the 5 series came out and Nvidia was bragging at launch about their "soon to be released" product?
That's what this conference is about: they see the 6xxx series being released and are trying to divide end users in the community.
I keep wondering who people think the conference was put on for.
I don't think it was for gamers to watch on YouTube. I think it was to try and find more investors to expand the capabilities of CUDA. So much was about how far they have come; only a hint was about gaming or future products.