It's going to be interesting to see if Nvidia can make a part stronger than a 5970 that's still under 300W. If there were no 300W limit, things would get really interesting from both parties.
And hot... I know more power consumption doesn't always mean more heat, but chances are a 300W chip will dissipate more heat than a 200W chip... imagine going over that! Of course, it's no big issue on a forum where people run enormous overclocked quads (or hexacores, for the lucky bastards) and enormous multi-GPU setups.
i7 2600K @ 4.6GHz/Maximus IV Extreme
2x 4GB Corsair Vengeance 1866
HD5870 1GB PCS+/OCZ Vertex 120GB +
WD Caviar Black 1TB
Corsair HX850/HAF 932/Acer GD235HZ
Auzentech X-Fi Forte/Sennheiser PC-350 + Corsair SP2500
They didn't turn on tessellation; it was always on. They turned on wireframe, which results in lots of overdraw and additional geometry (the white lines you see are actual line primitives), hence the slowdown. There's enough bad news out there that you don't need to imagine it as well.
Last edited by trinibwoy; 01-10-2010 at 05:03 PM.
I can no longer insult "The Big One"!
(I was going to write something about wireframe being cheap back in the day, but then I noticed in the video that the demo defeats the entire purpose of tessellation: the detail does not decrease further from the viewpoint. Pause it at 3:48 and the amount of polys on the mountain in the background is just absurd.)
Anyway, not bad for a (my guess) 384SP part.
Last edited by neliz; 01-10-2010 at 05:32 PM.
Lol, why would you want to insult the "Big One"? Wireframe is cheap when it's wireframe only. When it's regular rendering + wireframe, it's not cheap (the wireframe pass adds even more work). As for whether they're doing dynamic LOD or not, does it really matter for the purposes of the demo?
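For anyone wondering why the wireframe overlay isn't free: the scene geometry gets submitted and rasterized a second time as line primitives, on top of pixels that were already shaded. Here's a minimal sketch in C with legacy OpenGL/GLUT of how such a solid + wireframe overlay is typically done; the teapot is just a stand-in for the demo's terrain, and the polygon-offset trick is a common convention, not anything confirmed about this particular demo:

```c
/* build: cc overlay.c -o overlay -lglut -lGLU -lGL (needs freeglut) */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Pass 1: the normal filled rendering. */
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glColor3f(0.2f, 0.4f, 0.8f);
    glutSolidTeapot(0.5);

    /* Pass 2: the SAME geometry again, rasterized as lines. All the vertex
       and triangle-setup work runs twice, and every line fragment is
       overdraw on top of pixels pass 1 already shaded. */
    glEnable(GL_POLYGON_OFFSET_LINE);
    glPolygonOffset(-1.0f, -1.0f);  /* pull the lines slightly toward the
                                       camera so they don't z-fight with
                                       the filled surface underneath */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glColor3f(1.0f, 1.0f, 1.0f);
    glutSolidTeapot(0.5);
    glDisable(GL_POLYGON_OFFSET_LINE);

    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("solid + wireframe overlay");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

The point being: pass 2 doubles the geometry work no matter how the renderer is structured, which is consistent with "wireframe off it's fast, wireframe on it chugs."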
Mmmm... I find it interesting that no one has taken neliz's hints to heart.
Originally Posted by motown_steve
Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.
Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.
Beating out the ATI 5870, the new GF100 Fermi architecture has been thoroughly tested in our private labs and has been found to be the fastest in ALL of the following popular, totally and completely non-influenced, games. We promise.
FermiMarks10
Call of Fermi 5
Fermi Cry 2
World of Fermicraft (WoF)
Grand Theft Fermi IV
Fermilands
Fermiout 3
Left 4 Fermi 2
Age of Fermi III
Alien vs Fermi
Assassin's Fermi II
Resident Fermi 5
Fermi Trek Online
Fermisys 2
With the lead in so many popular games, the debate is over. GF100 is the shizzzznizzle!!!!!111one
I still doubt it's actually drawing the wireframe that throws it off (that would show up in the Heaven benchmark and the AVP demonstrations too). It's the insane number of extra polygons it has to render, since they shrink to sub-pixel sizes at those distances... now that's where it's going to hurt.
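To make the sub-pixel point concrete, here's a back-of-the-envelope sketch in C of distance-based tessellation LOD, the thing the demo apparently isn't doing. The helper, its crude pinhole-projection model, and the target edge size are all illustrative assumptions, not anything taken from the demo; the 64x cap is the D3D11 tessellator limit:

```c
#include <stdio.h>

/* Illustrative helper: pick a tessellation factor so that patch edges end up
   around target_px pixels on screen, instead of tessellating uniformly and
   letting distant triangles shrink to sub-pixel size. */
static float tess_factor(float distance, float edge_len_world,
                         float screen_h_px, float target_px)
{
    /* crude pinhole projection: on-screen size falls off as 1/distance */
    float projected_px = edge_len_world * screen_h_px / distance;
    float f = projected_px / target_px;  /* subdivide until edges ~ target_px */
    if (f < 1.0f)  f = 1.0f;             /* never coarser than the base mesh  */
    if (f > 64.0f) f = 64.0f;            /* D3D11 hardware tessellator limit  */
    return f;
}

int main(void)
{
    /* The factor collapses quickly with distance: nearby terrain gets the
       detail, while a mountain in the background shouldn't be burning
       millions of sub-pixel triangles nobody can see. */
    for (float d = 16.0f; d <= 16384.0f; d *= 4.0f)
        printf("distance %8.0f -> tess factor %5.1f\n",
               d, tess_factor(d, 4.0f, 1080.0f, 8.0f));
    return 0;
}
```

With something like this in the hull-shader stage, the mountain at 3:48 would get a factor near 1 instead of the absurd poly counts visible in the video.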
Every invited journalist has the complete Fermi GeForce slides now; the NDA will lift in a few days, but I hope someone breaks it and publishes something on B3D or somewhere in Asia, like always. If you want to know the approximate performance, look at my previous post in this thread. Don't worry about the TDP; what's at CES is only first-silicon, unfinished product. Remember the first black press samples of the 4870X2 with a 400W TDP?
I hope they have a bunch of fun apps like this rocket sled thing.
You guys see this vid? The rig doesn't crash in this one.
http://www.youtube.com/watch?v=6RdIrY6NYrM
_________________
Did you see nVidia's statement about Fermi on its website?
Certain statements in this press release including, but not limited to, statements as to: the benefits, features, impact, performance and capabilities of NVIDIA Tesla 20-series GPUs, Fermi architecture and CUDA architecture; are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: development of more efficient or faster technology; design, manufacturing or software defects; the impact of technological development and competition; changes in consumer preferences and demands; customer adoption of different standards or our competitor's products; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission including its Form 10-Q for the fiscal period ended July 26, 2009. Copies of reports filed with the SEC are posted on our website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
So nVidia says it could lie about Fermi and get away with it? (Example: 512 CUDA cores... oops, we couldn't deliver that, you can have 448 tops.) What do you think?
Xeon E5 2697 V2, Asus P9X79 Deluxe, 4x 8 GB LoVo 1600Mhz 9-9-9 RAM, GTX 680
What's this doubt based on? Wireframe off it's fast, wireframe on it chugs. Pretty straightforward.
It looks like any other disclaimer to me. Basically it means that what they say today is based on what they know today and that things could change in the future.
Last edited by trinibwoy; 01-11-2010 at 06:15 AM.
The bolded part looks very out of place, or at least is very unfortunately worded. Better wording would be something about not guaranteeing that retail products will have the same clocks as the samples.
But to change the official specs of cards? Has that ever happened before? I mean the rumors about retail Fermi not having 512 CUDA cores, but fewer than that.
Xeon E5 2697 V2, Asus P9X79 Deluxe, 4x 8 GB LoVo 1600Mhz 9-9-9 RAM, GTX 680
They never released official specs.
You have official specs when they release final products, like a GTX 380 or something along those lines; that is when you have official specs of official products.
Of course, it would suck if they can't deliver the full 512 cores; it would be a self-inflicted punch in the face, just like the 2900 XT was for AMD/ATI 3-4 years ago.
Again, nvidia's official word is "Fermi architecture supports up to 512 CUDA Cores"
That means we could see a launch with a 360 (256 CC) and a 380 (320 CC), all the way up to 448/512 CCs.
Maybe they'll do a 320CC 360, a 448CC 380 and a 2x256=512CC 390. That's a win over AMD in every segment.
They leave the speculation to all of us.
Last edited by neliz; 01-11-2010 at 06:55 AM.
To me it reads as covering unexpected problems with getting the design onto actual silicon. I hope TSMC gets their process to a good level, and that any problems aren't caused, as one rumor claimed, by nVidia not following TSMC's recommendations on silicon design.
Hope we get some competition back in the game.
I doubt a 2x256 would be faster than a single 448, since it's going to be SLI. But still, that's a pretty good idea, methinks.
What's this NDA that's going to lift in a few days? And someone on another forum, quite rightly, asked why the date of this NDA is one week after CES. Probably the only explanation is that the NDA'd information points to the fact that Nvidia can't get the performance crown back with Fermi, and they didn't want a buzzkill at CES. Any other ideas?
INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"
Has anyone really been far even as decided to use even go want to do look more like?
'bout time for the thread to turn into the usual FUD/troll-a-like/flames...
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
Says who? Nvidia... and you believe them?
They've been talking about GPGPU and Tegra as big cash cows for two years now, every time they're questioned about the success and profits of their desktop and workstation parts, and they need something to distract investors.
But how much have they made with GPGPU so far?
How much have they made with Tegra?
Sure, those markets have potential, and their products do too, I guess... but does that pay their employees' salaries? It only does if their employees get paid in stock, because on the stock market claiming to have a big product tomorrow means swimming in cash today, but in the real world things work differently.
Traditionally, yes... but then what was all the hype at the end of last year?
What was all the hype at GTC? If that wasn't an attempt to get people to camp on their cash, then what was it?
It didn't work very well, and now that they have actual Fermi silicon and are, supposedly, close to launch, they have the chance to do some REAL PR damage and get people to camp... so why hold out? To keep ATI in the dark about the exact specs and performance numbers for a couple of weeks? As if that would make any difference to ATI's PR campaigns or future designs... a few weeks are nothing in that regard, and ATI has a good enough idea of where Fermi performance is already, and probably has a PR campaign lined up already...
I'm pretty sure they don't show anything because it's not that great... they need more spin power to make the numbers look great, as the numbers themselves aren't overwhelming...
And they can't cut the prices of those parts much, because they aren't making much with them as is... so what does that mean for prices?
Historically, Nvidia's strategy has always been to offer slightly to notably higher performance than the competition and charge extra for it.
As I see it, there are two possible scenarios:
1) The 360 beats the 5850 and the 5870, in which case the 380 should beat the 5970.
2) The 360 is at least as fast as a 5850 but doesn't beat the 5870, in which case the 380 probably can't beat the 5970 either and Nvidia needs a dual-GPU card.
I'm pretty sure Nvidia will aim for the first, but it'll come down to yields... if they have a notable amount of chips with fewer working blocks than they need to beat a 5870, then they will HAVE to create a part that sits between the 5850 and the 5870.
Anyway, it boils down to this: the 360 will probably cost $400-450, the 380 $650-700, and a 395 would probably cost $900+. Here's a big hint imo: a 395 would be pretty expensive, and quite apart from the thermal and power issues, even a $999 retail price would mean Nvidia makes less money on it than on a 380. Which makes no sense... why would you launch a high-end product that either costs so much nobody buys it, or costs close to your current high-end product but carries a lower margin? The only reason would be PR, to have the performance crown...
So if Nvidia is preparing a 395 already, it means they fear they won't capture the performance crown with a 380. That they're preparing it doesn't mean they'll launch it, though... they might keep it in the pipeline for a while, like they did with the 8800 Ultra...
If Nvidia really is prepping a 395, to me that means a 380 won't be able to beat a 5970, or at least not notably.
Yep, totally agree... the G92 was a very nice chip... the G200 is terribly inefficient compared to it...
I don't think so...
I used to think the same, but drivers don't improve performance that much... there have been several articles proving this myth wrong... over a year, performance usually only improves by around 10%.
There are just bugs with a game here and there that get fixed, and that then gives a one-off performance jump of 30% or 50%...