This thread reminds me of extensive Conspiracy Theory discussions.
99% Assumption 1% Unconfirmed facts...
Not really. We had a pretty nice and healthy discussion about MS in here, and the conclusion is fairly clear.
http://www.xtremesystems.org/forums/...d.php?t=243190
Who wants to sell a GTX275 - for $100?
(it's obvious, but highlighting it for folks waiting for Fermi)
The GTX cards are all great products with very good performance and features. I certainly wouldn't reject one if I won a couple as a prize. GT200/b is better than AMD's HD48xx lineup... but that comes at a price. Even though GTX260/275 sell for a quarter of the price of their high-end cousins, the chips and boards cost about the same to make:
chip | process node | die area (mm²)
G92 | 65nm | 330
G92b | 55nm | 231 / 260?
GT200 | 65nm | 576
GT200b | 55nm | 470 / 412?
Fermi | 40nm | ??? BIG
HD38xx | 55nm | 192
HD48xx | 55nm | 260
HD58xx | 40nm | 334
HD57xx | 40nm | 166
We know Fermi will be big. Like HD5870 it's 40nm, but much bigger. And that means more power. And a more complex 384bit memory PCB vs HD5870's 256bit (just like GT200 vs HD4870). AMD will be able to steal market share with a price war like before. nVidia will either drop prices really low and make no profit (i.e. GTX275 for $150), or lose market share.
Because in the end, even if Fermi is 50% faster, makes coffee and does your taxes, it's still very big and expensive. AMD has already flooded the market with cheap DX11 cards, and with today's economy struggling to recover, very few will be able to justify the price tag over mortgage bills.
http://www.hardforum.com/showthread.php?t=1325165
Fermi is 23.4 x 23.4 mm, 3.2mil
am i lost? i thought this was the gf100 thread.....
hrm, that makes it exactly the same size as the GT200. i hope that power consumption is the same as GT200 or less; hearing about a 250-300 watt single gpu doesn't inspire confidence...
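Quick sanity check on that, taking the 23.4 x 23.4 mm figure at face value: 23.4 x 23.4 ≈ 548 mm², which is in the same ballpark as the 576 mm² listed for GT200 in the table above.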
The "magic" of fixed hardware, focus/optimization on one single platform, closer-to-the-metal APIs than their PC counterparts, small memory footprint OS without bloatware, and last but one of the most important things - enough time, money and other resources.
Just look at what Naughty Dog/GG have managed to put out with Cell and castrated G71.
When is the REAL Farming going to begin!!!! :(:shakes:
Waiting for Fermi is like waiting for AMD Bulldozer.
The chances, the sheer odds, of either of these products being released before the end of 2010 are so remote, so improbable, that I have trouble even saying it.
Everybody here remembers 2004 - the big unveiling of THE 6800 ULTRA.
What a huge change that was. Night and day. Suddenly, you could actually play Far Cry. AA performance was immense. 35GB/s of 256bit memory... seemed like something pulled out of Star Wars: Attack of the Clones. And 16 "pipelines"... wow, it seemed like SO MANY.
Fast forward a bit over 5 years. The 256bit HD5870 delivers over 150GB/s. 1600 SPs. The 384bit Fermi will be around 200-240GB/s. It seems like we're hitting a ceiling, since there's nothing beyond GDDR5 on the roadmap and the only option left is 512bit.
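For the curious, those figures fall straight out of bus width times effective GDDR5 data rate, divided by 8 to get bytes; the Fermi and 512bit lines below assume GDDR5 somewhere in the 4-5 Gbps range, which is my guess rather than a spec:
HD5870: 256/8 x 4.8 Gbps = 153.6 GB/s
384bit Fermi: 384/8 x 4.2-5.0 Gbps ≈ 200-240 GB/s
512bit at the same speeds: 512/8 x 4.8 Gbps ≈ 307 GB/s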
But the surprising thing is, with all those BILLIONS of transistors, it's still just little simple textured triangles. Voxels didn't catch on. Ray tracing is still a pipe dream. Nobody even makes use of geometry morphing. It's approaching 20 years after Terminator 2 and Jurassic Park, yet where are all the photo-realistic virtual worlds?
Hoping the industry won't sour like the Simpsons.. 10 years from now playing as Rodriguez in Call of Duty 17 - same old point and shoot.
Good post. Even with Fermi's new architecture, it makes me wonder how much more they can really squeeze out of a refresh - 20%, 50%?
I think pretty soon PCI-SIG will have to move up to PCIe 3.0 for another 100W. :eek: It seems like there's no other way to avoid this architectural/performance ceiling other than giving the cards more raw power.
well, you have the following options:
512bit, higher GDDR5 speeds (7-8 GHz), more cores, more RAM etc., 28-20nm shrinks and below... i think we have not reached a ceiling.
good post :)
Photo realistic gaming won't happen in our lifetime. "Little simple textured triangles" will be around for a long time to come - drawn in numbers they work fairly well. When all is said and done, gaming graphics are still an absolute galaxy away from where they were in the days of T2 and Jurassic Park.
Before we get carried away with this "nostalgic trend", let's put things in context.
Doom was released in Dec 1993, roughly 16 years ago. Do take a look at that game and compare to where we are today. I think that wasn't bad for 16 years.
also take a look at the Fermi car raytracing demo; now zap 16 years forward; it's not unlikely that instead of one car raytraced we'll see a whole game world ray traced at playable FPS with good resolution
this will be JP/T2 territory :)
and that would be?
i just read the last page and it's still the people claiming it doesn't exist because they personally haven't witnessed it, while others insist that there def is something wrong.
your one of them religious types arent ya? oh lordy lord, make mah miracle happen! :D hehehe jk
hmmm i played far cry on my 9500np@9700p :shrug:
well, the jump from doom to real 3d games was amazing, everything after that was... well less than i expected...
it makes perfect sense: as we approach reality, the amount of extra detail needed to make a notable difference grows exponentially... a smart way around this would be to actually track where we are looking on the screen, and only render that focus point in high detail... we wouldn't really notice if the rest of the screen was blurry, our eyes aren't good enough for that :D
the problem is just that it takes some time to render a frame, and i don't think we can adjust the focal point of the frame that quickly...
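That focus-point trick is basically what's now called foveated rendering. Just to make the idea concrete, here's a rough CUDA-style sketch - all the numbers (gaze coordinates, radii, sample counts) are made up and the shading is stubbed out; it only shows per-pixel work dropping off with distance from where you're looking:

#include <cuda_runtime.h>
#include <math.h>

// Pick a per-pixel sample count that drops with distance from the gaze point.
__device__ int samplesForPixel(int x, int y, int gazeX, int gazeY)
{
    float dx = (float)(x - gazeX);
    float dy = (float)(y - gazeY);
    float dist = sqrtf(dx * dx + dy * dy);   // pixels from the gaze point

    if (dist < 100.0f) return 16;            // fovea: full detail
    if (dist < 300.0f) return 4;             // near periphery: reduced detail
    return 1;                                // far periphery: blurry is fine
}

__global__ void shadePixels(float *image, int width, int height,
                            int gazeX, int gazeY)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int samples = samplesForPixel(x, y, gazeX, gazeY);

    // Stand-in for expensive shading: average 'samples' evaluations.
    // A real renderer would jitter rays/samples here; fewer samples = cheaper pixel.
    float accum = 0.0f;
    for (int s = 0; s < samples; ++s)
        accum += 1.0f;                       // imagine a costly shade() call here
    image[y * width + x] = accum / (float)samples;
}

And as you say, the catch is latency - the gaze point has to be updated faster than the eye can move, or the low-detail periphery ends up right where you're looking.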