
View Full Version : The official GT300/Fermi Thread




WeeMaan
09-30-2009, 03:50 AM
http://www.brightsideofnews.com/news/2009/9/30/nvidia-gt300s-fermi-architecture-unveiled-512-cores2c-up-to-6gb-gddr5.aspx


3.0 billion transistors
40nm TSMC
384-bit memory interface
512 shader cores [renamed into CUDA Cores]
32 CUDA cores per Shader Cluster
1MB L1 cache memory [divided into 16KB Cache - Shared Memory]
768KB L2 unified cache memory
Up to 6GB GDDR5 memory
Half Speed IEEE 754 Double Precision

Bodkin
09-30-2009, 04:12 AM
Repost : http://www.xtremesystems.org/forums/showthread.php?t=235490&page=4

Hornet331
09-30-2009, 04:19 AM
i've always wondered if bsn really stands for bright side of news or more like "bull sh1t network" or something

Definitely the latter one. :ROTF:

WeeMaan
09-30-2009, 04:47 AM
Repost : http://www.xtremesystems.org/forums/showthread.php?t=235490&page=4

I missed that, sorry for repost.

LiquidReactor
09-30-2009, 05:31 AM
http://www.techpowerup.com/104942/NVIDIA_GT300__Fermi__Detailed.html

randomizer
09-30-2009, 05:49 AM
http://www.techpowerup.com/104942/NVIDIA_GT300__Fermi__Detailed.html

Which was pulled from BSN, which was pulled from the table linked in the other thread, which was pulled from wikipedia and changed slightly, which was pulled from random forum rumours. :p:

annihilat0r
09-30-2009, 05:53 AM
Which was pulled from BSN, which was pulled from the table linked in the other thread, which was pulled from wikipedia and changed slightly, which was pulled from random forum rumours. :p:

In the table the memory bus is listed as 512 bit, here it's 384.

trinibwoy
09-30-2009, 05:56 AM
Which was pulled from BSN, which was pulled from the table linked in the other thread, which was pulled from wikipedia and changed slightly, which was pulled from random forum rumours. :p:

I can guarantee you that BSN's data was not pulled from a table on any other site.

demonkevy666
09-30-2009, 05:56 AM
In the table the memory bus is listed as 512 bit, here it's 384.

It's a jumble, then; nobody really knows, and everyone only has bits and pieces without the full picture to puzzle it out.

jmke
09-30-2009, 06:00 AM
I can guarantee you that BSN's data was not pulled from a table on any other site.

doesn't matter much, it's the same;)

Mechromancer
09-30-2009, 06:07 AM
I think the most interesting part of this uarch is the cache structure. If the L1 and L2 are ECC-protected, this could be a huge benefit to GPGPU computing. Nvidia looks like they will keep the lead in GPGPU if they have ECC and cache coherency taken care of. We can likely expect the GT300 to at least match the 5000 series in 3D gaming.

Where I doubt the GT300 can compete with the ATI products is price if yields aren't perfect.

lockee
09-30-2009, 06:19 AM
512 shader cores [renamed into CUDA Cores]

I can't tell if this is marketing or brainwashing.

saaya
09-30-2009, 06:19 AM
I think the most interesting part of this uarch is the cache structure. If the L1 and L2 are ECC-protected, this could be a huge benefit to GPGPU computing. Nvidia looks like they will keep the lead in GPGPU if they have ECC and cache coherency taken care of. We can likely expect the GT300 to at least match the 5000 series in 3D gaming.

Where I doubt the GT300 can compete with the ATI products is price if yields aren't perfect.

Huh? What does ECC cache have to do with gaming performance?

Mechromancer
09-30-2009, 06:25 AM
Huh? What does ECC cache have to do with gaming performance?

READ WHAT I WROTE AGAIN. Your eyes failed you. ECC is useful for GPGPU applications.

saaya
09-30-2009, 07:03 AM
READ WHAT I WROTE AGAIN. Your eyes failed you. ECC is useful for GPGPU applications.

Well, you talk about GPGPU needing ECC cache, and then the next sentence is that gaming perf will be good... it looked to me like you're drawing that as a conclusion from the ECC cache :D

zalbard
09-30-2009, 07:12 AM
CUDA cores? Hahaha. :rofl:
They won't give up CUDA even when it's too late, huh?

trinibwoy
09-30-2009, 07:53 AM
CUDA cores? Hahaha. :rofl:
They won't give up CUDA even when it's too late, huh?

They're slowly trying to turn CUDA into an architecture, like x86. So Intel has x86 cores, Nvidia has CUDA cores. It's all part of the master plan. Bwahahahaha! :worship:

But seriously, Nvidia's plans are far more extensive and longer term than we can cover in short-sighted forum banter. They really want to be a significant provider of high performance computing. So for them, it's a lot bigger than getting more frames in Crysis, it's about breaking into new markets with new products and making more $$$$$$.

jaredpace
09-30-2009, 07:56 AM
They're slowly trying to turn CUDA into an architecture, like x86. So Intel has x86 cores, Nvidia has CUDA cores. It's all part of the master plan. Bwahahahaha! :worship:

But seriously, Nvidia's plans are far more extensive and longer term than we can cover in short-sighted forum banter. They really want to be a significant provider of high performance computing. So for them, it's a lot bigger than getting more frames in Crysis, it's about breaking into new markets with new products and making more $$$$$$.

:D I agree.

Clairvoyant129
09-30-2009, 07:59 AM
Well we can all agree that during Nvidia's GPU Technology Conference today, Jen-Hsun Huang will talk about GT300.

ajaidev
09-30-2009, 08:02 AM
lol at CUDA cores, to use the cores you need cuda installed....

Well from what it seems GT300 will be a monster in scientifically intensive games :) How it performs in real games is the real question.

WeeMaan
09-30-2009, 08:02 AM
Well we can all agree that during Nvidia's GPU Technology Conference today, Jen-Hsun Huang will talk about GT300.

Most likely, yes.

trinibwoy
09-30-2009, 08:07 AM
lol at CUDA cores, to use the cores you need cuda installed....

You mean just like you need x86 binaries and an x86 OS to use x86 cores? ;)

nr4
09-30-2009, 08:12 AM
Seriously now, this new architecture is astonishing.
For me, just the idea that it natively supports C++ is more than enough.

Speaking of games and physics, this GT300 will be the new frontier.

Don't know why, but ATI smells like "voodoo" to me.

Helloworld_98
09-30-2009, 08:17 AM
Anyone else noticed it doesn't say that it has DX11 support in those specs?

Sh1tyMcGee
09-30-2009, 08:18 AM
NVIDIA GT300 ''Fermi'' Detailed
NVIDIA's upcoming flagship graphics processor is going by a lot of codenames. While some call it the GF100, others GT300 (based on the present nomenclature), what is certain is that NVIDIA has given the architecture the internal name "Fermi", after the Italian physicist Enrico Fermi, the inventor of the nuclear reactor. It doesn't come as a surprise, then, that the codename of the board itself is going to be "reactor", according to some sources.

Based on information gathered so far about GT300/Fermi, here's what's packed into it:

* Transistor count of over 3 billion
* Built on the 40 nm TSMC process
* 512 shader processors (which NVIDIA may refer to as "CUDA cores")
* 32 cores per core cluster
* 384-bit GDDR5 memory interface
* 1 MB L1 cache memory, 768 KB L2 unified cache memory
* Up to 6 GB of total memory, 1.5 GB can be expected for the consumer graphics variant
* Half Speed IEEE 754 Double Precision floating point
* Native support for execution of C (CUDA), C++, Fortran, support for DirectCompute 11, DirectX 11, OpenGL 3.1, and OpenCL
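As a rough sanity check on those numbers, here's a back-of-the-envelope peak-throughput estimate. The shader clock is purely an assumption on my part (the oft-rumoured ~1.5 GHz hot clock), and an FMA is counted as two flops, so treat the output as an upper bound, not a spec:

#include <stdio.h>

int main(void)
{
    /* Assumptions, not confirmed specs: shader clock and FMA rate per core. */
    const double cores         = 512.0;  /* from the spec list above          */
    const double shader_ghz    = 1.5;    /* rumoured hot clock, pure guess    */
    const double flops_per_clk = 2.0;    /* one fused multiply-add = 2 flops  */

    double sp_gflops = cores * shader_ghz * flops_per_clk;  /* single precision */
    double dp_gflops = sp_gflops / 2.0;                     /* "half speed" DP  */

    printf("Peak SP: ~%.0f GFLOPS\n", sp_gflops);  /* ~1536 GFLOPS */
    printf("Peak DP: ~%.0f GFLOPS\n", dp_gflops);  /* ~768 GFLOPS  */
    return 0;
}

Real clocks and real efficiency will decide where it actually lands; this only shows what the transistor budget could buy on paper.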

netkas
09-30-2009, 08:39 AM
Read somewhere it's not going to have a hardware tessellation engine (s/w emulation instead), nvidia is behind again..

DilTech
09-30-2009, 08:59 AM
Read somewhere it's not going to have a hardware tessellation engine (s/w emulation instead), nvidia is behind again..

You read wrong, it's required as part of DX11. :ROTF:

trinibwoy
09-30-2009, 08:59 AM
Anyone else noticed it doesn't say that it has DX11 support in those specs?

Yep, not a peep about anything graphics related. Bad sign? Maybe.


Read somewhere it's not going to have a hardware tessellation engine (s/w emulation instead), nvidia is behind again..

I presume you have extensive experience with tessellation and have good reasons for implying that a software implementation will be worse than fixed function hardware? I guess the fact that some of the leaders in tessellation research work for Nvidia doesn't matter.

Piotrsama
09-30-2009, 09:06 AM
"CoWA cores" sounds better. (Can of Whoop-Ass cores)

jaredpace
09-30-2009, 09:11 AM
lol. Can of whoop-ass cores. That's great. "COWA Cores"

eleeter
09-30-2009, 09:11 AM
"CoWA cores" sounds better. (Can of Whoop-Ass cores)

Haha. Could also call them CoWA-bunga cores.

trinibwoy
09-30-2009, 09:15 AM
Sign me up!

netkas
09-30-2009, 09:29 AM
You read wrong, it's required as part of DX11. :ROTF:

DX11 requires a tessellation engine; the DX11 spec doesn't say whether it has to be hardware or software (using "CUDA" cores).

Vit^pr0n
09-30-2009, 09:34 AM
Looks like a beast card if true. Now all that matters is pricing.

DilTech
09-30-2009, 09:38 AM
DX11 requires a tessellation engine; the DX11 spec doesn't say whether it has to be hardware or software (using "CUDA" cores).

If hardware isn't required, then you know as well as I do there's a good chance the developers won't code their games with ATi's tessellator in mind, right?

We'll know soon enough.

Cybercat
09-30-2009, 09:41 AM
This is the final nail in Larrabee's coffin.

Helloworld_98
09-30-2009, 09:51 AM
This is the final nail in Larrabee's coffin.

I doubt it. If the GT300 uses 512 SPs they will be SIMD, unless the die size is even bigger than rumours say, and even then raytraced graphics and GPGPU work will be more optimized for Larrabee than GT300 due to the fact that it's Intel and x86 is far more widespread than nvidia's architecture.

003
09-30-2009, 09:56 AM
Anyone else noticed it doesn't say that it has DX11 support in those specs?

If you honestly believe nvidia will release their next gen GPU without support for DX11... :rofl:


even then raytraced graphics and GPGPU work will be more optimized for Larrabee than GT300 due to the fact that it's Intel and x86 is far more widespread than nvidia's architecture.

Apparently, Larrabee's vector cores are NOT x86.

Helloworld_98
09-30-2009, 10:08 AM
this can't be good for gamers if it's true


Our appetite, however, has been whetted by what an NVIDIA bigwig had to say recently, commenting on the upcoming GT300 graphics processor as "more like a CPU than a GPU."

trinibwoy
09-30-2009, 10:15 AM
I doubt it. If the GT300 uses 512 SPs they will be SIMD, unless the die size is even bigger than rumours say, and even then raytraced graphics and GPGPU work will be more optimized for Larrabee than GT300 due to the fact that it's Intel and x86 is far more widespread than nvidia's architecture.

Aaaargh!! Larrabee's vector units are not x86. Repeat it 100x until it sticks.

nr4
09-30-2009, 10:17 AM
I doubt it. If the GT300 uses 512 SPs they will be SIMD, unless the die size is even bigger than rumours say, and even then raytraced graphics and GPGPU work will be more optimized for Larrabee than GT300 due to the fact that it's Intel and x86 is far more widespread than nvidia's architecture.
GT300 has MIMD cores which work in MPMD mode. :comp10:
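For anyone wondering what the MIMD/MPMD claim would actually buy you: the usual reading is concurrent kernel execution, i.e. different kernels running on the chip at the same time. A minimal sketch of what that looks like from the CUDA side (the kernels and sizes here are made up for illustration; whether and how well they overlap depends entirely on the hardware and driver):

#include <cstdio>

// Two unrelated "programs" (kernels) to submit side by side.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

__global__ void offset(float *y, float b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += b;
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Different kernels in different streams: on hardware that supports
    // concurrent kernel execution they are allowed to run at the same time.
    scale <<<(n + 255) / 256, 256, 0, s0>>>(x, 2.0f, n);
    offset<<<(n + 255) / 256, 256, 0, s1>>>(y, 1.0f, n);

    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(x);
    cudaFree(y);
    return 0;
}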

MadMan007
09-30-2009, 10:45 AM
This could be great for the high-end market, like what NV intended for the GTX 260 and up (even though the GTX 260 ended up plummeting in price to compete in the market), but how is it going to scale to smaller chips for the midrange and mainstream markets that really matter most? Last I knew, NV was releasing mere DX 10.1 chips (GT21x) for those markets. That's honestly where I'd be looking these days, with little need for uber graphics power, and ATi's Juniper seems more appealing. Damn you NV and your not doing full line refreshes. It worked fine with the undying G92, which remained competitive, but I don't see how they'll fill the $150-200 market without a cut-down (not just disabled parts) GT300 chip.

Bail_w
09-30-2009, 10:49 AM
3 billion transistors huh, i wonder how much longer it will be against the 5870...

Splave
09-30-2009, 11:13 AM
^ to see who physically has the larger e-peen?

Luka_Aveiro
09-30-2009, 11:34 AM
^ to see who physically has the larger e-peen?

lol...

RPGWiZaRD
09-30-2009, 11:53 AM
EVGA GTX 380 SuperEnlarged 3P33N edition !

I can already see it coming...

ethomaz
09-30-2009, 12:08 PM
http://www.nvidia.com/object/fermi_architecture.html

512 CUDA cores - Confirmed.

Clairvoyant129
09-30-2009, 12:11 PM
http://www.nvidia.com/object/fermi_architecture.html

512 CUDA cores - Confirmed.

Gt300 is going to be a monster, looks like I'll have to return these HD5870s :p:

jaredpace
09-30-2009, 12:12 PM
http://i38.tinypic.com/4j9oxj.jpg

eric66
09-30-2009, 12:14 PM
looks like killer gpu we can see death of 5870 lol

stangracin3
09-30-2009, 12:18 PM
Even if it is a monster, I sense we will see a lot more of these:

http://store.nvidia.com/DRHM/servlet/ControllerServlet?Action=DisplayProductDetailsPage&SiteID=nvidia&Locale=en_US&Env=BASE&productID=165074700

netkas
09-30-2009, 12:21 PM
nvidia decided to improve gpgpu, but "can it run crysis?"

DilTech
09-30-2009, 12:23 PM
Considering the 58x0 series still struggles with crysis at high rez, I sure hope it can.

OCguy
09-30-2009, 12:29 PM
I'll just leave this here.

http://anandtech.com/video/showdoc.aspx?i=3651

Although it does leave wiggle room for early adopters to get their cards by Xmas. Note the "widespread."

Chumbucket843
09-30-2009, 12:40 PM
this is truly awesome. i cant wait to see the scale of how many gpu's they can put in a single data center.

this made me lol:

I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is ":banana::banana::banana::banana:ing hard".

SamHughe
09-30-2009, 12:40 PM
*edited out by Dil-Tech* looks like killer gpu we can see rape of 5870 lol *edited out by Dil-Tech*

Here, I fixed your post for you. You are welcome!

On topic: I am really surprised that they adopted 384 bit memory interface rather than 512 bit. Is it because of GDDR5?

OCguy
09-30-2009, 12:42 PM
this is truly awesome. i cant wait to see the scale of how many gpu's they can put in a single data center.

this made me lol:

Yea, awesome quote coming from an engineer. :ROTF:

Edit: Nope, it was the marketing guy. My bad.

Tim
09-30-2009, 12:44 PM
Here, I fixed your post for you. You are welcome!

On topic: I am really surprised that they adopted 384 bit memory interface rather than 512 bit. Is it because of GDDR5?

I would love to see GT300 beat the 5870 as well. I'm counting on it. I like nVidia very much, and if they offer me a better card in at least the majority of features and speed I will be a very happy guy around Xmas time. I guess that makes me a fanboy. Yay. :yepp:

I can wait whilst using my G80, no problem at all. Still runs games fine.

GT300, bring it on, let's see what you got nVidia. :)

BenchZowner
09-30-2009, 12:49 PM
On topic: I am really surprised that they adopted 384 bit memory interface rather than 512 bit. Is it because of GDDR5?

Of course.
A 512-bit bus width and GDDR5-4800+ would be overkill even for the "enhanced" child of the GF100 (the new GPU).

They don't need the extra bandwidth, and I'm quite sure the lower trannie (transistor :p:) count and the "easier" to produce PCB say go for 384-bit :)
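To put rough numbers on that (the 4.8 Gbps data rate is just the figure being thrown around here, not a confirmed spec):

#include <stdio.h>

/* Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps. */
static double bw_gbs(int bus_bits, double gbps)
{
    return bus_bits / 8.0 * gbps;
}

int main(void)
{
    printf("384-bit @ 4.8 Gbps: %.1f GB/s\n", bw_gbs(384, 4.8));  /* 230.4 */
    printf("512-bit @ 4.8 Gbps: %.1f GB/s\n", bw_gbs(512, 4.8));  /* 307.2 */
    printf("256-bit @ 4.8 Gbps: %.1f GB/s (roughly 5870-class)\n",
           bw_gbs(256, 4.8));                                     /* 153.6 */
    return 0;
}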

eric66
09-30-2009, 12:52 PM
Here, I fixed your post for you. You are welcome!

On topic: I am really surprised that they adopted 384 bit memory interface rather than 512 bit. Is it because of GDDR5?
Ehm, I don't understand why the truth makes me a fanboy *deleted by Dil-Tech*? lol :ROTF:

want to bet that it won't happen ?

Tim
09-30-2009, 12:56 PM
What is all this fanboy business about, anyway? If you watch Formula One, or football, or any other sport, you cannot possibly say you don't have your favorite team. It's like how I would prefer a Skyline over a Scooby; it's just personal preference. It's a bit sad, this whole 'you fanboy!' thing. I wouldn't mind using an ATI card, but I'd still prefer nVidia; I've always had very good experiences with nVidia, especially driver-wise and with new features.

DilTech
09-30-2009, 01:00 PM
Eric, I'm giving you one chance to edit that post yourself... you have 15 minutes.

Consider this a warning to every one, WATCH THE LANGUAGE! Next time won't be just a warning about it.

:::On Topic:::

I know what you mean, Tim. Everyone used to call me an NVidia fanboy, but I have no problem buying an ATi card either, and in fact am presently using one. Of course, with the issues I've run into with my HDTV and this thing, I'm kind of wishing I hadn't, but the fact remains that for the price I was spending ATi was the better buy and that's who I went with.

When a product treats you right, you remember that. When a product treats you wrong, you remember that too. That's how a "fan" is born, plain and simple. Old-school computer users still haven't gotten over the days when ATi cards couldn't run OpenGL at all during OpenGL's heyday, with ATi claiming it wasn't important. We still saw signs of it until earlier this decade, as they still had poor OpenGL performance, which they finally fixed. The people who had that problem haven't forgotten, though, and most will never touch ATi because of it.

There is, however, a difference between a Preference and being an out-right Fanboy. A fanboy will NEVER buy anything but that brand, and thinks said company never does any wrong.

eric66
09-30-2009, 01:15 PM
Thing is, nvidia came with a brand new arch; even if it was the old one with those specs it would easily kill the 5870. Now I trust my sixth sense, and no, I am not a fanboy; I've used different brands and I like to give everyone a chance until they majorly disappoint me. So I say the GT3xx series top card will walk all over the 5870 at its current performance.

stangracin3
09-30-2009, 01:25 PM
snip.

back off topic for a minute.

are you a new mod or did you change your name?

Nedjo
09-30-2009, 01:28 PM
Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.

http://www.anandtech.com/video/showdoc.aspx?i=3651&p=1

So what's the point of arguing right now? Let's wait for CeBIT, or better yet CES, and then revisit these threads about the GT300 aka GF100.

DilTech
09-30-2009, 01:32 PM
back off topic for a minute.

are you a new mod or did you change your name?

I've been around for a while; I was just gone for months due to work. My original name was supposed to be Dil-Tech on here, but the activation screwed up so I had to make it this.

Manicdan
09-30-2009, 01:36 PM
Thing is, nvidia came with a brand new arch; even if it was the old one with those specs it would easily kill the 5870. Now I trust my sixth sense, and no, I am not a fanboy; I've used different brands and I like to give everyone a chance until they majorly disappoint me. So I say the GT3xx series top card will walk all over the 5870 at its current performance.

I don't think it will kill a 5870; these things were built to be bigger and stronger and more expensive. ATI will still be here unless you see these launch for $100 and come with free pie and hookers.

Chickenfeed
09-30-2009, 01:37 PM
I was 3 GB/s off on my estimated memory bandwidth, damn :p: Pretty good guess though, I'd say.

The 5870X2 will still be faster, but if there is in fact an official 5850X2, this might just trade blows with it. Nvidia won't hold the single-board crown until they manage some kind of dual design, but their single-GPU performance should be top of the pile.

I expect it to be priced around $450-500 or so (with the 360 at 5870 price levels), and it might turn out that the X2s end up launching at more than the $500 AMD quoted because of this new information (edit: that would have been likely with a Q4 release; a Q1 release still gives them no competition), or else all the prices will slide down a fair bit. I'd be so bold as to guess that a GTX 380 might be 20-30% faster at times for a single GPU, so it should be a pretty impressive product. However, it is still a more expensive design to produce than R870 by a fair amount (the raw specs say that much), so I don't expect their prices to fall much in the months after release (that, and the performance advantage they'll offer). The 360 model will likely trade blows with the 5870. That said, I think SLI 360s will be quite popular (assuming they don't red-ring... oh, my bad).

The final question is when we will see them in volume. I'm going to guess mid to late December at best, assuming the rumored November launch (as with the 8800 GTX / 680i and their November *paperish* launches, availability was somewhat weak for about a month). EDIT: Scratch that, Anand is still toting the Q1 release date. That said, a 20-30% increase (a fair guess if you ask me) 4-6 months after the competition isn't that amazing, just required, so I don't feel bad about getting a 5870 with this much of a time gap.

Farinorco
09-30-2009, 01:38 PM
What is all this fanboy business about, anyway? If you watch Formula One, or football, or any other sport, you cannot possibly say you don't have your favorite team. It's like how I would prefer a Skyline over a Scooby; it's just personal preference. It's a bit sad, this whole 'you fanboy!' thing. I wouldn't mind using an ATI card, but I'd still prefer nVidia; I've always had very good experiences with nVidia, especially driver-wise and with new features.

Little OT (don't shoot me please): The difference is that Formula One or Football are shows that you watch for entertainment; they're shows where the participants compete, so it's logical that the audience develops favoritism for a certain participant. That's what it's about, in the end.

But ATI, NVIDIA, AMD, Intel and so on are companies that make products for you to buy. Supporting one as if they were "your team", making yourself blind to their "faults", or being happy about the competition "losing the game", is kind of sad IMHO. It's like being a fan of :banana::banana::banana:or (I don't know if it exists worldwide, a home appliance vendor).

That said, I find that in forums the charge of fanboyism (I'm not talking about this specific case, not my business so...) is now heavily abused to make other people's opinions appear less valid or biased, so as to strengthen one's own position. So I try to keep as much distance from the term as possible.

Of course, having a favourite brand based on personal experiences or other factors is perfectly fine. I happen not to have one in this case; I've not had any bad experience with either ATi or NVIDIA. I have in other areas.

*********

Oh, does anyone else think this is a mess, with so many threads about "GT300 and its specifications"? :D

All these twists and turns, now here, now there, are making my head spin... :p:

Manicdan
09-30-2009, 01:45 PM
Little OT (don't shoot me please): The difference is that Formula One or Football are shows that you watch for entertainment; they're shows where the participants compete, so it's logical that the audience develops favoritism for a certain participant. That's what it's about, in the end.

But ATI, NVIDIA, AMD, Intel and so on are companies that make products for you to buy. Supporting one as if they were "your team", making yourself blind to their "faults", or being happy about the competition "losing the game", is kind of sad IMHO. It's like being a fan of :banana::banana::banana:or (I don't know if it exists worldwide, a home appliance vendor).

That said, I find that in forums the charge of fanboyism (I'm not talking about this specific case, not my business so...) is now heavily abused to make other people's opinions appear less valid or biased, so as to strengthen one's own position. So I try to keep as much distance from the term as possible.

Of course, having a favourite brand based on personal experiences or other factors is perfectly fine. I happen not to have one in this case; I've not had any bad experience with either ATi or NVIDIA. I have in other areas.

*********

Oh, does anyone else think this is a mess, with so many threads about "GT300 and its specifications"? :D

All these twists and turns, now here, now there, are making my head spin... :p:

off topic too...

beating up someone for wearing the wrong sports jersey i think is 1000x worse than any fanboy we have on here

Farinorco
09-30-2009, 01:49 PM
off topic too...

beating up someone for wearing the wrong sports jersey i think is 1000x worse than any fanboy we have on here

Agreed.

eric66
09-30-2009, 01:51 PM
http://alienbabeltech.com/main/wp-content/uploads/2009/09/FermiGT300.jpg

looks small or jen is very big lol

NapalmV5
09-30-2009, 01:55 PM
Looks like no 5800 of any sort for me..

384-bit is disappointing for such ROP/core specs, but I guess 3GB will make up for it

patiently waiting on you gtx380

thank you nvidia :toast:

ubuntu83
09-30-2009, 02:03 PM
http://www.anandtech.com/video/showdoc.aspx?i=3651&p=1

So what's the point of arguing right now? Let's wait for CeBIT, or better yet CES, and then revisit these threads about the GT300 aka GF100.

Hmm, so by the time nVIDIA manages to release "Fermi", AMD will be out with a complete lineup from top to bottom.

eric66
09-30-2009, 02:05 PM
http://img223.imageshack.us/img223/3123/picture48h.png

close one

AbelJemka
09-30-2009, 02:05 PM
With a launch in Q1, that leaves a lot of time for AMD to make a move.

ajaidev
09-30-2009, 02:10 PM
Why are people hyping up ECC so much? ATi's cards also support something similar, if not the same ("EDC"), and even that is great tech for some GPU-CPU action.

EternityZX9
09-30-2009, 02:17 PM
Blah, didn't we go through this last year and the year before? Rumor mill never ceases...

Btw, good to see you around diltech :)

Chumbucket843
09-30-2009, 02:18 PM
With a launch in Q1, that leaves a lot of time for AMD to make a move.

No it isn't. What would they do except lower the price point?

Piotrsama
09-30-2009, 02:21 PM
No it isn't. What would they do except lower the price point?

Sell a lot of DX11 cards while preparing something special for G300 launch?

gamervivek
09-30-2009, 02:21 PM
Why are people hyping up ECC so much? ATi's cards also support something similar, if not the same ("EDC"), and even that is great tech for some GPU-CPU action.

http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6

zanzabar
09-30-2009, 02:26 PM
So what's the clock speed?


Blah, didn't we go through this last year and the year before? Rumor mill never ceases...

Btw, good to see you around diltech :)

It was officially announced, so we have some specs; it's not all rumors.

gamervivek
09-30-2009, 02:29 PM
Here, I fixed your post for you. You are welcome!

On topic: I am really surprised that they adopted 384 bit memory interface rather than 512 bit. Is it because of GDDR5?

probably due to ECC?

AbelJemka
09-30-2009, 02:31 PM
No it isn't. What would they do except lower the price point?
Please make an argument!
If your point is that 4 months is nothing, I laugh in advance :rofl:

Chumbucket843
09-30-2009, 02:38 PM
Please make an argument!
If your point is that 4 months is nothing, I laugh in advance :rofl:
what move will they make then? 5890? please tell me.

Clairvoyant129
09-30-2009, 02:41 PM
With a launch in Q1, that leaves a lot of time for AMD to make a move.

Lower price point and raise clock speeds to make HD5890? Not much you can do... unless you're expecting some kind of a miracle new architecture in 4 months. :ROTF:

Watching the Nvidia GTC, I have a good feeling that GF100 will be special... much faster and better than the competition in every aspect except price.

tajoh111
09-30-2009, 02:43 PM
With a launch in Q1, that leaves a lot of time for AMD to make a move.

I think the performance is a total mystery to everyone at this point, so they don't know whether to jump or stay put. And even if they were to make a move, it couldn't be much more than a clock bump, as I doubt AMD thought r8xx was only going to last 6 months.

Knowing AMD, they might just completely forfeit the high end and fight at $250 and below for the next year if this card turns out to be 50-60% faster than the 5870, like they did against Intel lately, or in the 88xx generation to an extent. If the GTX 380 manages to beat R800, they might just abandon it altogether, as I doubt it would sell well at all; the 3870X2 bombed even though it was beating the 8800 Ultra, which had lots to do with NV simply being a stronger brand due to marketing.

This is all talk at this point until NV at least shows the card.

DilTech
09-30-2009, 02:46 PM
Please make an argument!
If your point is that 4 months is nothing, I laugh in advance :rofl:

There's 2 things they can do, 5890 with maybe another 100-150 mhz, and a 5870x2.

This card should still be faster than a 5890, and a GTX-395 will likely walk all over the 5870x2 with ease.

2x+ the speed of a GTX 285(not sli, literally double or more performance) will put a single GTX-380 far above the 5870(40 to 60%, if not more), and a simple overclock of the 5870(5890) is NOT going to catch it.

There's no way ATi can launch another architecture in 4 months, so those are their only 2 options.

AbelJemka
09-30-2009, 02:48 PM
Complete Nvidia brigade jump on me:p:

Clairvoyant129
09-30-2009, 02:52 PM
Complete Nvidia brigade jump on me:p:

No, you're just making assumptions that don't make sense. Seriously, what do you expect in 4 months? A new architecture? I really want to know your answer.

I have no preference over certain brands as I have two HD5870s in my desktop but you on the other hand. :rolleyes:

zalbard
09-30-2009, 02:53 PM
Bigger die = more expensive card. Let's see who wins price/performance wise.
The way I -feel- about it, AMD's solution will offer a better price/performance ratio for games, while Nvidia will offer additional computing features.

LordEC911
09-30-2009, 03:02 PM
Why are people hyping up ECC so much? ATi's cards also support something similar, if not the same ("EDC"), and even that is great tech for some GPU-CPU action.
Exactly; true ECC won't happen with GDDR5, so expect the Tesla cards to have DDR3 on them.


No it isn't. What would they do except lower the price point?
Umm... have other cards out? Have a die shrink being prepped and released shortly after? Ever thought about why Cypress and Juniper are so large for their bus sizes?


So what's the clock speed?
750 MHz is supposedly the targeted core clock.


Complete Nvidia brigade jump on me:p:
Yep, seems that way now that certain people are back posting...

Edit- Anyone expecting these cards to have a massive performance lead over the 5870 is sadly going to be very disappointed.

ubuntu83
09-30-2009, 03:10 PM
Why are people hyping up ECC so much? ATi's cards also support something similar, if not the same ("EDC"), and even that is great tech for some GPU-CPU action.

Details on ECC ---> http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6

AbelJemka
09-30-2009, 03:11 PM
No, you're just making assumptions that don't make sense. Seriously, what do you expect in 4 months? A new architecture? I really want to know your answer.

I have no preference over certain brands as I have two HD5870s in my desktop but you on the other hand. :rolleyes:
Ok, I'll give you what I'm thinking.

I will quote Anandtech article about Fermi :
http://anandtech.com/video/showdoc.aspx?i=3651&p=7

Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

AMD launched its card before this time; what AMD is doing now: extrapolating GT300 performance and cost.
Performance? GTX285 SLI is like 30% faster than the 5870 on average. The GTX380 may be more like 50% to 60% faster than the 5870 on average. Maybe even less.
Cost? 40% more transistors than RV870 and 384 bits instead of 256 bits. $600? More?
Diltech speaks about a GTX395, but in Nvidia's history multi-GPU cards were launched very late (more than 6 months later on average).
Basically AMD has 3 months to sell DX11 cards with the help of Windows 7.

LordEC911
09-30-2009, 03:12 PM
Details on ECC ---> http://www.anandtech.com/video/showdoc.aspx?i=3651&p=6
Read above...

RAW-Raptor22
09-30-2009, 03:13 PM
Wow, it's like an 8800GTX but with more RAM and 4x the shaders... :p:

The 8800GTX was awesome so this probably will be too. :)

DilTech
09-30-2009, 03:16 PM
Ok, I'll give you what I'm thinking.

I will quote Anandtech article about Fermi :
http://anandtech.com/video/showdoc.aspx?i=3651&p=7


AMD launched its card before this time; what AMD is doing now: extrapolating GT300 performance and cost.
Performance? GTX285 SLI is like 30% faster than the 5870 on average. The GTX380 may be more like 50% to 60% faster than the 5870 on average.
Cost? 40% more transistors than RV870 and 384 bits instead of 256 bits. $600? More?
Diltech speaks about a GTX395, but in Nvidia's history multi-GPU cards were launched very late (more than 6 months later on average).

I doubt it'll be $600, NVidia said they realized they launched the GTX-280 too high and they won't be making that mistake again. I'm thinking probably around $400 at launch, which will put a VERY tight squeeze on ATi.

Of course, that's all theoretical as no one can know for sure how NVidia will price this thing. We'll find out soon enough though. Considering NVidia is aiming to market for tesla, where the profit margins are much higher, they can afford lower prices on the desktop cards and make the difference off tesla kits. Just a little something to keep in mind. ;)

Chumbucket843
09-30-2009, 03:17 PM
Umm... have other cards out? Have a die shrink being prepped and released shortly after? Ever thought about why Cypress and Juniper are so large for their bus sizes?

Those will only compete at a price point. I am surprised to see you not excited about this card. Two kernels, a new ISA and a new memory hierarchy... yay.

Are you referring to a 28nm shrink? :confused: I thought the die was big because they doubled everything.

Wiggy McShades
09-30-2009, 03:21 PM
Why are people hyping up ECC so much? ATi's cards also support something similar, if not the same ("EDC"), and even that is great tech for some GPU-CPU action.



AMD's cards can detect errors on the memory BUS only, and can't do a thing to correct them. If there is an error on the bus connecting the controller to the chip, AMD's card can try to adjust the signal to maintain a stable connection to the memory chip. AMD did this to maintain higher memory clocks without having to change traces on the PCB. ECC on Fermi is for more than just the memory bus.
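Since the EDC-vs-ECC distinction keeps coming up, here's the idea in toy form: a bare parity/CRC scheme can only tell you a transfer went bad (so you retry it), while an ECC code can locate and repair the flipped bit in stored data. The Hamming(7,4) code below is a classroom illustration of the correction side, not what either vendor actually ships:

#include <stdio.h>

/* Toy illustration only: encode 4 data bits into a 7-bit codeword that can
 * locate and fix any single flipped bit. Real GDDR5 EDC and real ECC memory
 * use stronger codes, but the principle is the same. */

static unsigned encode(unsigned d)            /* d = 4 data bits */
{
    unsigned d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;
    unsigned p2 = d1 ^ d3 ^ d4;
    unsigned p3 = d2 ^ d3 ^ d4;
    /* codeword bit positions 1..7: p1 p2 d1 p3 d2 d3 d4 */
    return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

static unsigned correct(unsigned c)           /* returns corrected codeword */
{
    unsigned b[8];
    for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
    unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    unsigned s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    unsigned syndrome = s1 | (s2 << 1) | (s3 << 2);   /* position of the bad bit */
    if (syndrome) c ^= 1u << (syndrome - 1);          /* flip it back            */
    return c;
}

int main(void)
{
    unsigned word = 0xB;                 /* 4-bit payload 1011          */
    unsigned sent = encode(word);
    unsigned recv = sent ^ (1u << 4);    /* single-bit error in flight  */
    printf("sent 0x%02X, received 0x%02X, corrected 0x%02X\n",
           sent, recv, correct(recv));
    return 0;
}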

LordEC911
09-30-2009, 03:23 PM
I doubt it'll be $600, NVidia said they realized they launched the GTX-280 too high and they won't be making that mistake again. I'm thinking probably around $400 at launch, which will put a VERY tight squeeze on ATi.

Of course, that's all theoretical as no one can know for sure how NVidia will price this thing. We'll find out soon enough though. Considering NVidia is aiming to market for tesla, where the profit margins are much higher, they can afford lower prices on the desktop cards and make the difference off tesla kits. Just a little something to keep in mind. ;)

Really? You think they will be pricing it that low when it will easily have a kit cost higher than G200? I am guessing anywhere from $150-$200 each and that's being conservative.


Those will only compete at a price point. I am surprised to see you not excited about this card. Two kernels, a new ISA and a new memory hierarchy... yay.

Are you referring to a 28nm shrink? :confused: I thought the die was big because they doubled everything.
I am excited to see exactly what they have done with the architecture but from what I have already seen they are dismissing their, original, primary mission of delivering uncompromising gaming performance and are now more focused on marketing and very minor niche markets. While there is much money to be made in the GPGPU market it is a little disappointing to see them chasing it so hard and with little to no regard to their gaming performance. Plus Nvidia's arrogance as a whole is very bewildering and insulting as a consumer.

AbelJemka
09-30-2009, 03:26 PM
I doubt it'll be $600, NVidia said they realized they launched the GTX-280 too high and they won't be making that mistake again. I'm thinking probably around $400 at launch, which will put a VERY tight squeeze on ATi.

Of course, that's all theoretical as no one can know for sure how NVidia will price this thing. We'll find out soon enough though. Considering NVidia is aiming to market for tesla, where the profit margins are much higher, they can afford lower prices on the desktop cards and make the difference off tesla kits. Just a little something to keep in mind. ;)

Quote 1 :

Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction. Perhaps Fermi will be different and it'll scale down to $199 and $299 price points with little effort? It seems doubtful, but we'll find out next year.

Quote 2 :

Last quarter the Tesla business unit made $10M. That's not a whole lot of money for a company that, at its peak, grossed $1B in a single quarter. NVIDIA believes that Fermi is when that will all change. To borrow a horrendously overused phrase, Fermi is the inflection point for NVIDIA's Tesla sales.

Quote 3 :

We'll see how that plays out, but if Fermi doesn't significantly increase Tesla revenues then we know that NVIDIA is in serious trouble.

Wiggy McShades
09-30-2009, 03:27 PM
Quote 1 :


Quote 2 :


Quote 3 :


ya read that part about how they were NOT going to price the cards too high like gt200?

gamervivek
09-30-2009, 03:29 PM
I doubt it'll be $600, NVidia said they realized they launched the GTX-280 too high and they won't be making that mistake again. I'm thinking probably around $400 at launch, which will put a VERY tight squeeze on ATi.

Of course, that's all theoretical as no one can know for sure how NVidia will price this thing. We'll find out soon enough though. Considering NVidia is aiming to market for tesla, where the profit margins are much higher, they can afford lower prices on the desktop cards and make the difference off tesla kits. Just a little something to keep in mind. ;)

They priced it too high because they miscalculated the RV770's performance; this time around they might make sure that the performance is where the $650 tag makes sense. Or at least the marketing hype does.

ubuntu83
09-30-2009, 03:31 PM
they can afford lower prices on the desktop cards and make the difference off tesla kits. Just a little something to keep in mind. ;)

They'll have to spend a lot on its marketing. Its current market share is nothing to write home about.


Last quarter the Tesla business unit made $10M. That's not a whole lot of money for a company that, at its peak, grossed $1B in a single quarter.

AbelJemka
09-30-2009, 03:34 PM
They priced it too high because they miscalculated the RV770's performance; this time around they might make sure that the performance is where the $650 tag makes sense. Or at least the marketing hype does.
I was about to post an answer to Wiggy McShades but you beat me to it :p:

I read that part the same way he did.

Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

Ujesh doesn't believe NVIDIA will make the same mistake with Fermi.

tajoh111
09-30-2009, 03:43 PM
Ok, I'll give you what I'm thinking.

I will quote Anandtech article about Fermi :
http://anandtech.com/video/showdoc.aspx?i=3651&p=7


AMD launched its card before this time; what AMD is doing now: extrapolating GT300 performance and cost.
Performance? GTX285 SLI is like 30% faster than the 5870 on average. The GTX380 may be more like 50% to 60% faster than the 5870 on average. Maybe even less.
Cost? 40% more transistors than RV870 and 384 bits instead of 256 bits. $600? More?
Diltech speaks about a GTX395, but in Nvidia's history multi-GPU cards were launched very late (more than 6 months later on average).
Basically AMD has 3 months to sell DX11 cards with the help of Windows 7.

You can only extrapolate so much and do so much with the technology you have at the time.

AMD's R&D budget is tiny compared to Intel and NV (especially Intel); you can overestimate your rival's performance by 1000% and it will do nothing if you don't have the R&D budget to get something going to match that estimate.

With so many losing quarters in the past (except a couple of quarters lately), I can imagine AMD's graphics division was working on a shoestring budget, especially when AMD itself is so in the hole. Thankfully the research ATI put into r600 before the AMD and ATI merger paid off to some extent with r7xx, and possibly to an extent r8xx, as r6xx turned out to be a very scalable architecture. However, I can imagine research for the next big thing being lacking at AMD, and if this thing performs 50-60% faster than rv870, then AMD will need to come out with something new and not just a bigger chip with more shaders, as returns have started to diminish with more shaders.

It will take either a big chip from AMD(which seems to be against their design philosophy) or a new architecture. I think a new architecture is not coming any time soon because of budget issues.

What AMD did with the R8xx is stretch the limits of the design(which NV did with g80->g200) that began with r600, it's all you can do when your company doesn't have the money to design a new architecture.

JohnJohn
09-30-2009, 03:49 PM
2x+ the speed of a GTX 285(not sli, literally double or more performance) will put a single GTX-380 far above the 5870(40 to 60%, if not more), and a simple overclock of the 5870(5890) is NOT going to catch it.



I concur with everything you said except this. If the 5870 has taught us something, it's that doubling everything up doesn't always translate into doubled performance, even without the SLI/CF handicap.

Chumbucket843
09-30-2009, 03:49 PM
I am excited to see exactly what they have done with the architecture but from what I have already seen they are dismissing their, original, primary mission of delivering uncompromising gaming performance and are now more focused on marketing and very minor niche markets. While there is much money to be made in the GPGPU market it is a little disappointing to see them chasing it so hard and with little to no regard to their gaming performance. Plus Nvidia's arrogance as a whole is very bewildering and insulting as a consumer.

I was like wtf when I saw 8x DP performance :ROTF:. I hope in the future they will have a version with less DP support; I don't use it, so it's just wasting power. The HPC market is half the size of the desktop GPU market, so it's clear they have a reason to target it, but I would have to agree this is overkill for a GPU, and the money they spent on designing specifically for this market might not even work out. I might end up buying Larrabee or a 5870 if the TDP and price are not right. The videos on their site don't even mention gaming.

LordEC911
09-30-2009, 03:50 PM
You can only extrapolate so much and do so much with the technology you have at the time.

AMD's R&D budget is tiny compared to Intel and NV (especially Intel); you can overestimate your rival's performance by 1000% and it will do nothing if you don't have the R&D budget to get something going to match that estimate.

With so many losing quarters in the past (except a couple of quarters lately), I can imagine AMD's graphics division was working on a shoestring budget, especially when AMD itself is so in the hole. Thankfully the research ATI put into r600 before the AMD and ATI merger paid off to some extent with r7xx, and possibly to an extent r8xx, as r6xx turned out to be a very scalable architecture. However, I can imagine research for the next big thing being lacking at AMD, and if this thing performs 50-60% faster than rv870, then AMD will need to come out with something new and not just a bigger chip with more shaders, as returns have started to diminish with more shaders.

It will take either a big chip from AMD(which seems to be against their design philosophy) or a new architecture. I think a new architecture is not coming any time soon because of budget issues.

What AMD did with the R8xx is stretch the limits of the design(which NV did with g80->g200) that began with r600, it's all you can do when your company doesn't have the money to design a new architecture.

Expect a shrink in '10 and a new architecture 1H '11, if not before.
You seem to not realize that architectures take 3-4 years of design, not 2-3 months.

v_rr
09-30-2009, 03:51 PM
Just asking if it's possible:

->GT300: Q1 2010


TSMC Will Move to 28nm Process in Early 2010
The company will be able to offer the 28nm process as a full node technology
http://news.softpedia.com/news/TSMC-Will-Move-to-28nm-Process-in-Early-2010-94593.shtml

Global Foundries:

32nm SOI technology will be shipping in 2010 in an AMD design though GF does have a 32nm bulk technology that they will begin accepting orders for in the second half of this year.
http://www.pcper.com/comments.php?nid=7237

32nm Bulk@ H2 2009

-> RV870@32/28nm H1 2010 (??)

RV870 will be alone in its own league for quite some time (3 months++).
So could Fermi face a new RV870 shrink later?

tajoh111
09-30-2009, 04:00 PM
Expect a shrink in '10 and a new architecture 1H '11, if not before.
You seem to not realize that architectures take 3-4 years of design, not 2-3 months.

What the heck, I thought that's what my post implies. AMD won't be coming out with anything spectacular anytime soon because of the shoestring budget they have been working with after so many bad quarters. AMD's graphics division has made a total profit of 50 million or so from its profitable quarters, but has lost hundreds of millions if not billions in the quarters where they posted a loss.

AMD probably thought NV was going to go for a die-shrunk G200 architecture, as they had one in the pipeline until they decided to cancel it for gt300. As a result, I am not sure how much of an offence they can mount against gt300. All they can hope for is for gt300 to be a failure in efficiency, which I don't think NV will allow because of the lessons learned from g200.

NV has been a much more profitable company overall and has probably been working on something pretty complex for the last 4 years, as today's news confirms.

ubuntu83
09-30-2009, 04:06 PM
What the heck, I thought that's what my post implies. AMD won't be coming out with anything spectacular anytime soon because of the shoestring budget they have been working with after so many bad quarters. AMD's graphics division has made a total profit of 50 million or so from its profitable quarters, but has lost hundreds of millions if not billions in the quarters where they posted a loss.

NV has been a much more profitable company overall and has probably been working on something pretty complex for the last 4 years, as today's news confirms.

By the current specs of GT300 we know its HPC performance will be really good, but the same can't be said about gaming. Can you tell me how many of those 3 billion transistors are dedicated to 3D performance? I don't think it will be a groundbreaking product as far as 3D performance is concerned; no changes except for the MIMD units instead of SIMD.

DilTech
09-30-2009, 04:15 PM
I concur with everything you said except this. If the 5870 has taught us something, it's that doubling everything up doesn't always translate into doubled performance, even without the SLI/CF handicap.

The difference here is the GTX-380 is a brand new architecture, while the 5870 is still the same as the 4870 with double everything but the memory bandwidth and clock speeds. ATi are reaching the point of diminishing returns with the 5870, which is the problem we see and what stopped them from seeing double the performance. NVidia's new architectures have generally produced double the performance, if not more.

As for the fact that they aren't speaking about the gaming performance...of course not. They would tick off all their partners if they killed off their current gen sales by telling the consumer about their next gen performance.

tajoh111
09-30-2009, 04:16 PM
By the current specs of GT300 we know its HPC performance will be really good, but the same can't be said about gaming. Can you tell me how many of those 3 billion transistors are dedicated to 3D performance? I don't think it will be a groundbreaking product as far as 3D performance is concerned; no changes except for the MIMD units instead of SIMD.

NV would be shooting itself in the foot if it didn't offer good gaming performance, as either company is likely going to have to start showing its cards if it wants its chip included in the next gaming console. 512 shaders with higher shader clocks should already increase performance significantly.

Even though the chip is massive, NV still seems to want more efficiency from its shaders. In addition to what NV has said, the new shaders would have to be more powerful than the ones in the past if NV wants this chip to scale past one generation. This should be obvious with such a large chip. So more powerful shaders + 512 of them seems to be a decent recipe for a card with good gaming performance.

The problem with AMD right now is that die shrinks are getting harder to come by, as they sometimes take a while to implement successfully. Also, as we have seen with the 5870, they are starting to hit a wall when it comes to adding more shaders = more performance. This can be seen with the 5870 vs 5850 especially, as the 5850 is far more efficient in gaming performance per TFLOP than the 5870.

ubuntu83
09-30-2009, 04:18 PM
The difference here is the GTX-380 is a brand new architecture, while the 5870 is still the same as the 4870 with double everything but the memory bandwidth and clock speeds. ATi are reaching the point of diminishing returns with the 5870, which is the problem we see and what stopped them from seeing double the performance. NVidia's new architectures have generally produced double the performance, if not more.

As for the fact that they aren't speaking about the gaming performance...of course not. They would tick off all their partners if they killed off their current gen sales by telling the consumer about their next gen performance.

What are the groundbreaking changes that you think will bring exceptional performance for 3D apps?

tajoh111
09-30-2009, 04:23 PM
What are the groundbreaking changes that you think will bring exceptional performance for 3D apps?

This is kind of a guess, but I can imagine the new on-chip cache reducing the hit AA has on the chip big time. Heck, it might even be free up to a certain extent.

gamervivek
09-30-2009, 04:31 PM
The difference here is the GTX-380 is a brand new architecture, while the 5870 is still the same as the 4870 with double everything but the memory bandwidth and clock speeds. ATi are reaching the point of diminishing returns with the 5870, which is the problem we see and what stopped them from seeing double the performance. NVidia's new architectures have generally produced double the performance, if not more.

As for the fact that they aren't speaking about the gaming performance...of course not. They would tick off all their partners if they killed off their current gen sales by telling the consumer about their next gen performance.

You could say the same for rv670 --> rv770. It was 2.5 times the units but the performance was about 2x, and iirc the GTX280 was beaten by 8800GT SLI when it was introduced. Diminishing returns for the 5870 is too early to call for ATI's arch; whether they change it for something radically new depends on how far they want to take it up against nvidia in the GPGPU sector.

Mechromancer
09-30-2009, 04:32 PM
It looks like Nvidia made a CPU that pretends to be a GPU. This thing will be a monster at GPGPU no doubt. There is no question about it; Fermi is a generation beyond AMD's GPGPU capabilities. The next generation of GPUs from both companies are truly exciting this round.

One has to wonder how this will affect OpenCL. GT300 can do DXCompute, OpenCL and C++. RV870 can only do DXCompute and OpenCL. Will C++/CUDA GPGPU programming take off and leave OpenCL in the dust? Or will OpenCL and DXCompute, as they're supported by both companies, reign supreme? I think it will come down to the flexibility of each implementation. C++ has a leg up on that front. Larrabee may be the deciding factor. 2010 will be an incredibly interesting year.
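For anyone who hasn't looked at the CUDA side of that comparison, the entry point is pretty approachable; a minimal kernel plus launch looks like the sketch below (standard CUDA runtime API; the names and sizes are just made up for illustration):

#include <cstdio>

// Minimal "C for CUDA" example: each thread adds one element of the vectors.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Device allocations; the host side stays ordinary C/C++.
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&c, bytes);
    cudaMemset(a, 0, bytes);
    cudaMemset(b, 0, bytes);

    // One thread per element, 256 threads per block.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

OpenCL and DirectCompute express the same idea, just with more host-side boilerplate, which is a big part of why the flexibility argument keeps coming back to the language support.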

zanzabar
09-30-2009, 04:34 PM
What are the groundbreaking changes that you think will bring exceptional performance for 3D apps?

I'm not sure that it will be better than 2 G200s in 3D; if it was groundbreaking then we would have seen a bigger drop in price when the 5870 came out to clear all of the back stock, or the cuts will come in December.
I want numbers.

XCheater
09-30-2009, 04:38 PM
If GT300 is a monster at general-purpose computing, why don't we have the NVIDIA card do the general-purpose computation while ATI cards do the rendering?
We could use a high-end multicore Intel/AMD CPU as the "northbridge replacement".
It's a swing back to the "northbridge" days, but with a much more powerful substitute :rofl:
That might make a standalone desktop computer a 10 TFLOPS machine, BTW

zerazax
09-30-2009, 04:38 PM
I don't see it being 2x G200 either. They keep saying new architecture, but from what I've seen, it's built heavily on G200, just like G200 was built heavily on G80. G200 was nearly 2x the units of G80, but it didn't hit 2x the performance until far after release when newer games were optimized/able to take advantage of what G200 had extra.

However, I think without any hard data on clocks, it's impossible to claim where it will end up. I'm hopeful it's good, but when I hear them admit that it's been delayed, that's usually not a positive sign

Sh1tyMcGee
09-30-2009, 04:40 PM
It's so funny how people were counting nVidia out because they were one week late announcing their product. I purchased a 5870 because I'm so impatient. The nvidia Fermi is an advancement in GPU technology, whereas the ATI 5870 is just the same old thing, doubled. I am looking forward to purchasing a Fermi soon!

jaredpace
09-30-2009, 04:47 PM
courtesy of bit-tech:


http://i36.tinypic.com/2zz7ntf.jpg

xbrian88
09-30-2009, 04:48 PM
http://www.hwmania.org/gallery/file.php?n=1133&w=l

HwMania - View Single Post - [Official Thread] nVIDIA Fermi (http://www.hwmania.org/forum/65385-post2.html)

RAW-Raptor22
09-30-2009, 04:52 PM
ISA???

http://content.answers.com/main/content/img/CDE/_ISA8_16.GIF

Chumbucket843
09-30-2009, 04:55 PM
I don't see it being 2x G200 either. They keep saying new architecture, but from what I've seen, it's built heavily on G200, just like G200 was built heavily on G80. G200 was nearly 2x the units of G80, but it didn't hit 2x the performance until far after release when newer games were optimized/able to take advantage of what G200 had extra.

However, I think without any hard data on clocks, it's impossible to claim where it will end up. I'm hopeful it's good, but when I hear them admit that it's been delayed, that's usually not a positive sign

512 shaders, GDDR5, a new memory system, a new ISA, better scheduling... that won't cut the mustard, huh? The white paper said 1.5 GHz is a conservative clock speed too. This thing is fast.

DilTech
09-30-2009, 05:13 PM
I love how no one noticed one thing in that picture...it only needs 1 8pin power connector.

Mechromancer
09-30-2009, 05:15 PM
I love how no one noticed one thing in that picture...it only needs 1 8pin power connector.

Good catch. With a uarch like that it probably has some pretty aggressive power saving features. Nvidia probably has a winner if they can get the price right.

zanzabar
09-30-2009, 05:24 PM
512 shaders, GDDR5, a new memory system, a new ISA, better scheduling... that won't cut the mustard, huh? The white paper said 1.5 GHz is a conservative clock speed too. This thing is fast.

That stuff is great for GPGPU, but I'm not sure it will help with 3D. I think clock for clock it will be about the same as two G200s, maybe +20% or so, since it's not SLI and has a few more shaders. What will make a huge difference is GPGPU: they can now properly handle 64-bit floating point, and that was about the only point where ATI Stream was better than CUDA. But at this point I have seen nothing that needs a GPGPU for personal use; sure, there is folding/crunching on the GPU and encoding, but encoding works on everything with OpenCL and seems I/O-limited to me.

So I'm just waiting for numbers, but it looks like it will edge out the 5890 while drawing more power, costing a lot more and not scaling as well, so it will all be the same just like last gen. I am not saying the GT300 is bad, just that I don't see it being revolutionary. And with the 8+6 pin connectors it will be above 225W; I would expect it to be near the 300W mark, since the 40nm node doesn't seem to drop wattage much, and the added cache, more than double the shaders and less wait time from the improved command queue will lead to a big jump in power if it all works right.


I love how no one noticed one thing in that picture...it only needs 1 8pin power connector.

It looks like it has an 8 and a 6, one on each side, and one 8-pin alone would put you at 225W (150W from the 8-pin plus 75W from the slot).

Edit: it looks like just one 8-pin, but it also says Tesla, so that's not the GeForce people want, and only one DVI.
I had been looking at this and thought I saw a 6 and an 8:
http://www.xtremesystems.org/forums/showpost.php?p=4041477&postcount=124

Edit 2: there is an 8 and a 6, for a 300W max.
http://www.bit-tech.net/news/hardware/2009/09/30/first-fermi-card-pictured/1
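
For reference, those wattage ceilings are just the PCI-SIG connector limits added up (slot 75W, 6-pin 75W, 8-pin 150W); a quick sketch of the arithmetic in plain host-side C++:

// Back-of-envelope PCIe power budget from the connector spec limits.
#include <cstdio>

int main()
{
    const int slot = 75, six_pin = 75, eight_pin = 150;                    // watts
    printf("8-pin only    : %d W ceiling\n", slot + eight_pin);            // 225 W
    printf("8-pin + 6-pin : %d W ceiling\n", slot + eight_pin + six_pin);  // 300 W
    return 0;
}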

DilTech
09-30-2009, 05:28 PM
Circle the 6 for me, because I honestly am not seeing it...

AbelJemka
09-30-2009, 05:31 PM
http://www.hwmania.org/gallery/file.php?n=1133&w=l




Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.

It's always good to have something to show in your hand during a presentation.
:up:

ps:



AMD R and D budget is tiny compared to Intel and NV(especially Intel)
Actually it seems that AMD's R&D is twice Nvidia's R&D, but sure, AMD makes CPUs and GPUs.

black_edition
09-30-2009, 05:34 PM
I love how no one noticed one thing in that picture...it only needs 1 8pin power connector.


Update: There's also a six pin connector on the board as well.

http://www.bit-tech.net/news/hardware/2009/09/30/first-fermi-card-pictured/1

tajoh111
09-30-2009, 05:46 PM


It's always good to have something to show in your hand during a presentation.
:up:

ps:

Actually it seems that AMD R&D is twice Nvidia R&D, but sure AMD make CPU and GPU.

Obviously I was talking about GPU only.

hennyo
09-30-2009, 05:49 PM
The way I see it, 3D rendering is hitting a brick wall. It has gotten to the point where the real noticeable differences in rendering take such incredible amounts of power to materialize that to advance games graphically, developers have to focus in a new direction. I believe this direction is physics and AI. The thing about real-time graphical physics is that to be efficient, it all has to be done on the GPU. There are many reasons for this, but probably one of the biggest and most obvious is overtaxing the PCI-E bus, and the delay of having it done on the CPU and then transferred over to the GPU for rendering. I see an internal unified memory architecture on a GPU as a HUGE step in the right direction for keeping physics on the GPU and allowing it to get much more complicated. One of the biggest hurdles I see for the future of physics is having enough memory on the GPU to both render and run a physics program at the same time.

On a secondary note, for people who do things like video encoding, the GT300 offers a ton of excitement because of the crazy amount of money it used to cost to reach the level of computational power it offers. I can see the GT300 really cutting into mainstream workstation tasks in general, making it so people don't need to invest in multiprocessor systems nearly as much.

AbelJemka
09-30-2009, 05:55 PM
Obviously I was talking about GPU only.
There are no numbers for the R&D breakdown within AMD.
But at the same time Nvidia doesn't only make GPUs, so I guess all of its R&D doesn't only go into GeForce either.
In 2003, ATI's R&D was around $50 million and Nvidia's $60 million per quarter.

tajoh111
09-30-2009, 06:31 PM
No numbers for R&D repartition in AMD.
But in the same time Nvidia doesn't only make GPU so i guess that all the R&D is not only to make Geforce.
In 2003, ATI R&D was like 50 millions$ and Nvidia 60 millions$ per trimester.

Read this Nvidia blog post (particularly 8/24/09). NV says their latest chips cost $1 billion in R&D and 3-4 years to make.

2003 was a very different time compared to now. That was when the 9700 Pro was strong, and it was during AMD's prime.

It doesn't take a genius to know the years before RV7xx were really bad, and the RV770 generation hasn't been that profitable, especially when AMD itself is so starved for cash.

NV's net income for 2007 was 800 million, for 2006 it was 450 million, and for 2005 it was more than 200 million (Google, Wikipedia, Answers and Nvidia press releases). 2009 hasn't been peachy (2008 was still a profitable year, although not very profitable compared to earlier years).

http://seekingalpha.com/article/154910-nvidia-income-statement-analysis-for-july-09-quarter

Since April 2007 NV has typically spent 150-219 million a quarter on research, and you know it's mostly on GPUs, AbelJemka.

AbelJemka
09-30-2009, 08:40 PM
OK, I searched like you did and I found real numbers!
-2006 AMD R&D : 1.205 Billions$
-2006 ATI R&D :458 Millions$
with 167 Millions$ spent Q1'06+Q2'06 and 291 Millions$ for Q3'06+Q4'06 :shocked:
-So 2006 AMD+ATI :1.663 Billions$
-2006 Nvidia R&D : 554 Millions$

-2007 AMD+ATI R&D : 1.847 Billions$
-2007 Nvidia R&D : 692 Millions$

-2008 AMD+ATI R&D : 1.848 Billions$
-2008 Nvidia R&D : 856 Millions$

So numbers can't lie: Nvidia has increased its R&D spending since 2006, but so has AMD+ATI.

You said they have mostly been researching GPUs since 2007, but you seem to forget that since 2007 Tesla and CUDA have been pushed very hard by Nvidia, so they must eat some non-negligible resources, and that Nvidia is also promoting Tegra and Ion.

gumballguy
09-30-2009, 08:56 PM
http://i38.tinypic.com/4j9oxj.jpg

I'm supposed to laugh at the size of the gtx300 in there, but i find the photochop of the radeon and the small size of the case funnier. Maybe its just my cases tend to be huge.

Am I alone? :rolling:

jaredpace
09-30-2009, 09:05 PM
it was a combination number. I also have this photoshopped radeon:

http://i34.tinypic.com/29fwbom.jpg

More bland though, not as funny.

tajoh111
09-30-2009, 09:36 PM
Ok i search like you and i find real numbers!
-2006 AMD R&D : 1.205 Billions$
-2006 ATI R&D :458 Millions$
with 167 Millions$ spent Q1'06+Q2'06 and 291 Millions$ for Q3'06+Q4'06 :shocked:
-So 2006 AMD+ATI :1.663 Billions$
-2006 Nvidia R&D : 554 Millions$

-2007 AMD+ATI R&D : 1.847 Billions$
-2007 Nvidia R&D : 692 Millions$

-2008 AMD+ATI R&D : 1.848 Billions$
-2008 Nvidia R&D : 856 Millions$

So numbers can't lies, Nvidia had increased it R&D expense since 2006 but so had AMD+ATI.

You said that they mostly research on GPU since 2007 but you seem to forget that since 2007 Tesla and Cuda are push very hard by Nvidia so they must eat some not negligeable ressources and that Nvidia is also promoting Tegra and Ion.

Tesla and CUDA are part of GPU research and design, so they are related, since they involve making the GPU more powerful. It's obvious from those numbers that NV should be spending substantially more on GPUs, if the ratios from the 2006 AMD + ATI numbers mean anything.

If we look at those numbers, from 2006 to 2007 AMD spent 11% more, and between 2007 and 2008 they didn't increase spending at all. Compare this to NV, who spent 25 percent more from 2006 to 2007 and 23.7% more from 2007 to 2008.

Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all this to happen, so they didn't have to spend as much on research to get there.

I can imagine, since AMD was running the show for the most part, that a lot more money was spent on their CPU side than their GPU side, especially considering how far behind they were during the Conroe years. Looking at simple economics, getting that side back to profitability was a lot more important than getting the GPU side going.

LordEC911
09-30-2009, 09:44 PM
Tesla and cuda are part of the gpu research and design so they are related since they involve making the Gpu more powerful. Its obvious those from those numbers NV should be spending substantially more if the ratio's mean anything from the 2006 numbers of AMD + ATI.

If we look at those numbers AMD spent 2006-2007 spent 11% more and between 2007-2008 they didn't increase spending at all. Compare this to NV who spent 2006-2007 spent 25 percent more and 23.7% more

Not to mention AMD likely spent alot of money getting to 55nm and 40nm to first plus all the money they spent on DDR5 and DDR4 research. NV waited for all this to happen so they didn't have to spent much on research and getting there as much.

I can imagine since its AMD was running the show for the most part, I can see alot more money spent on their CPU then their GPU side, especially considering how behind they were during the conroe years, and looking at simple economics, getting that side on the better side of profitable was alot more important than getting it gpu side going.

That is a LOT of assuming going on there...

AbelJemka
09-30-2009, 10:09 PM
Tesla and cuda are part of the gpu research and design so they are related since they involve making the Gpu more powerful. Its obvious those from those numbers NV should be spending substantially more if the ratio's mean anything from the 2006 numbers of AMD + ATI.

If we look at those numbers AMD spent 2006-2007 spent 11% more and between 2007-2008 they didn't increase spending at all. Compare this to NV who spent 2006-2007 spent 25 percent more and 23.7% more

Not to mention AMD likely spent alot of money getting to 55nm and 40nm to first plus all the money they spent on DDR5 and DDR4 research. NV waited for all this to happen so they didn't have to spent much on research and getting there as much.

I can imagine since its AMD was running the show for the most part, I can see alot more money spent on their CPU then their GPU side, especially considering how behind they were during the conroe years, and looking at simple economics, getting that side on the better side of profitable was alot more important than getting it gpu side going.
You like speculation a lot more than me!
Tesla and CUDA are part of GPU research, but they have a cost, a cost in time or developers, and either way it costs money.

You use percentages because they suit your purpose more, but in raw numbers AMD from 2006 to 2007 is +$184 million and Nvidia from 2006 to 2007 is +$138 million.

What did it cost to go to 55nm? You don't know. To 40nm? You don't know. GDDR4 research? The 2900XT launched six months late in 2007 but was due in 2006, so no impact there. And GDDR4 is basically the same as GDDR3, so not a great deal.

For the AMD part you are playing a guessing game. But AMD's graphics division was the first part of the company to manage some success, with RV670 and RV770. So that may indicate something.

tajoh111
09-30-2009, 10:14 PM
You like speculation a lot more that me!
Tesla and Cuda are part of gpu research but they have a cost. A cost in time or developpers and one or another cost money.

You take percentage because it suits your purpose more but in term of brute numbers AMD 2006 to 2007 its +184 Millions$ and Nvidia 2006 to 2007 its +138 Millions$.

What the cost going to 55nm? You don't know. Going to 40nm? You don't know? GDDR4 research? 2900XT launch six months late in 2007 but due in 2006 so no impact. GDDR4 basically the same as GDDR4 so not a great deal.

For the AMD part you play guessing game. But AMD, graphic division was the first thing who manage too have success of RV670 and RV770. So It may indicate something.

It doesn't take much assuming to see that CPUs cost more to develop than GPUs, and NV spent a whole lot of money in 2008 for a GPU company.

Similarly, you don't know how much they spent on CUDA or Ion research and development, and yet you put it in your argument.

iTravis
09-30-2009, 10:20 PM
We have 5 threads about GT300, should combine em all to one. More pix:
http://img194.imageshack.us/img194/5690/001238704.jpg (http://img194.imageshack.us/i/001238704.jpg/)
http://img38.imageshack.us/img38/774/001238705.jpg (http://img38.imageshack.us/i/001238705.jpg/)

Look how happy he is :D
http://img38.imageshack.us/img38/909/001238703.jpg (http://img38.imageshack.us/i/001238703.jpg/)


Source:http://www.pcpop.com/doc/0/448/448052.shtml

eleeter
09-30-2009, 11:10 PM
Why are they not showing the card running in a system? Or did I miss it?

eric66
09-30-2009, 11:11 PM
is it me or that card has only one dvi

astrallite
09-30-2009, 11:21 PM
Hah it would be funny if it was just a GT200 with some custom cooler, hence why they didn't show anything on it.

AKM
09-30-2009, 11:24 PM
http://www.youtube.com/watch?v=RMtQ62CnBMA

Dante80
09-30-2009, 11:26 PM
GT300 looks like a revolutionary product as far as HPC and GPU Computing are concerned. Happy times ahead, for professionals and scientists at least...:)

Regarding the 3d gaming market though, things are not as optimistic. GT300 performance is rather irrelevant, due to the fact that nvidia currently does not have a speedy answer for the discrete, budget, mainstream and lower performance segments. Price projections aside, the GT300 will get the performance crown, and act as a marketing boost for the rest of the product line. Customers in the higher performance and enthusiast markets that have brand loyalty towards the greens are locked anyway. And yes, thats still irrelevant.

I know that this is XS and all, but remember people, the profit and bulk of the market is in a price segment nvidia does not even try to address currently. We can only hope that the greens can get something more than GT200 rebranding/respins out for the lower market segments. Fast. Ideally, the new architecture should be able to be downscaled easily. Let's hope for that, or it's definitely rough times ahead for nvidia. Especially if you look closely at the 5850's performance-per-$ ratio, as well as the Juniper projections. And add in the economic crisis, shifting consumer focus, the gap between the performance needed by software and the performance given by the hardware, the plateau in TFT resolutions, and heat/power consumption concerns.

With AMD getting the whole 5XXX family out of the warehouses in under 6 months (I think that's a first for the GPU industry, I might be wrong though), the greens are in a rather tight spot atm. GT200 respins won't save the round, GT300 at $500++ won't save the round, and Tesla certainly won't save the round (just look at sales and profit in recent years in the HPC/GPU-computing segments).

Lets hope for the best, its in our interest as consumers anyway..;)

netkas
10-01-2009, 12:43 AM
No h/w tesselation unit in Fermi.

http://vr-zone.com/articles/nvidia-fermi--arriving-in-q1-2010/7786.html?doc=7786


On the gaming side of things, DirectX 11 is of course supported, though Tesselation appears to be software driven through the CUDA cores.

also


48 ROPs are present, and a 384-bit memory interface mated to GDDR5 RAM.

48 is not double the GT200's ROPs (32); it depends on ROP performance, but maybe 3D performance will not be doubled.
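
Back-of-envelope, using retail GTX 285 specs and the rumoured GF100 targets above (the GF100 clocks are assumptions, not confirmed):

// Quick ratio check: pixel fill rate and memory bandwidth, GT200 vs rumoured GF100.
#include <cstdio>

int main()
{
    // pixel fill rate = ROPs * core clock (GHz) -> Gpixel/s
    double gt200_fill = 32 * 0.648;        // GTX 285: ~20.7 Gpixel/s
    double gf100_fill = 48 * 0.650;        // rumoured: ~31.2 Gpixel/s, a 1.5x step

    // bandwidth = (bus width / 8) * data rate (Gbps) -> GB/s
    double gt200_bw = 512.0 / 8 * 2.484;   // GTX 285 GDDR3: ~159 GB/s
    double gf100_bw = 384.0 / 8 * 4.8;     // rumoured GDDR5: ~230 GB/s, a ~1.45x step

    printf("fill: %.1f -> %.1f Gpixel/s, bandwidth: %.0f -> %.0f GB/s\n",
           gt200_fill, gf100_fill, gt200_bw, gf100_bw);
    return 0;
}

So on paper neither fill rate nor bandwidth doubles, which is exactly why the "2x GT200" question is still open.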

Heinz68
10-01-2009, 12:59 AM
If hardware isn't required, then you know as well as I do there's a good chance the developers won't code the games with ATi's tesselator in mind, right?

We'll know soon enough.

There isn't an ATI tessellator. It's DX11 tessellation; ATI had tessellation before but had to change it for DX11.

Unless there is going to be some instruction in the TWIMTBP games like "remove tessellation when an ATI card is detected" :), there should not be any problem.

Very strange that some people would actually approve of such a restriction, if it could even work, when AMD is the first to supply developers with DX11 hardware and assistance.

v_rr
10-01-2009, 01:05 AM
Nvidia Fermi - Arriving in Q1 2010


Following Nvidia CEO Jen Hsung Huang's keynote speech, details about Nvidia's next gen architecture Fermi are finally available, putting rest to months of speculation.

We reported most key specifications previously, but now we have most of our gaps filled.

First thing worth pointing out is that Nvidia sees clear potential in High Performance Computing and GPU Stream Computing - perhaps even more than gaming - and believe there is multi-billion dollar potential in the HPC industry, which is currently dominated by much more expensive and less powerful CPUs. As a result, Fermi is the closest a GPU has ever come to resembling a CPU, complete with greater programmability, leveled cache structure and significantly improved double precision performance. As such, today's event and whitepaper concentrates more on stream computing with little mention of gaming.

That said - GF100 is still a GPU - and a monster at that. Packing in 3 billion transistors @ 40nm, GF100 sports 512 shader cores (or CUDA cores) over 16 shader clusters (or Streaming Microprocessors, as Nvidia calls them). Each of these SMs contain 64KB L1 cache, with a unified 768KB L2 cache serving all 512 cores. 48 ROPs are present, and a 384-bit memory interface mated to GDDR5 RAM. On the gaming side of things, DirectX 11 is of course supported, though Tesselation appears to be software driven through the CUDA cores. Clock targets are expected to be around 650 / 1700 / 4800 (core/shader/memory). It remains to be seen how close to these targets Nvidia can manage.

Of course, at 3 billion transistors, GF100 will be massive and hot. Assuming similar transistor density to Cypress at the same process (RV770 had a higher density than GT200), we are approaching 500 mm2. In terms of DirectX/OpenGL gaming applications, we expect GF100 to end up comfortably faster than HD 5870, something Nvidia confirms (though they refuse to show benchmarks at this point). However, it is unknown as to where GF100 performs compared to Hemlock.

Products based on the Fermi architecture will only be available in retail stores in Q1 2010 - which is a rather long time away. This lengthy delay and yields/costs could be two major problems for Nvidia. While there is no doubt Fermi/GF100 is shaping up to be a strong architecture/GPU, it will be costlier to produce than Cypress. We have already heard horror stories about the 40nm yields, which if true, is something Nvidia will surely fix before the product hits retail. However, this does take time, and Nvidia's next-gen is thus 3-6 months away. By then, AMD will have an entire range of next-gen products, most of them matured, and would perhaps be well on their way to die shrinks, especially for Cypress, which might end up being half a year old at the time. There is no information about pricing either, although we can expect the monster that is GF100 to end up quite expensive. More economical versions of Fermi are unknown at this point too, which might mean the mainstream Juniper will go unchallenged for many months.

If you are in the market for a GPU today - we don't see any point in holding out for Nvidia's GF100. However, if you are satisfied with your current GPU or looking forward to much improved stream computing - Fermi/GF100 might just be what you are after.

In the meantime, we can expect price cuts and entry-level 40nm products from Nvidia. With all that has transpired today (being HD 5850 release day as well), there's one conclusion - ATI Radeon HD 5850 does seem like the GPU to get. If you are on a tighter budget, Juniper might have something for you soon.
http://vr-zone.com/articles/nvidia-fermi--arriving-in-q1-2010/7786.html?doc=7786
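
For what it's worth, the article's die-size guess is easy to reproduce from the widely reported Cypress figures (~2.15 billion transistors in ~334 mm^2 at TSMC 40nm); the equal-density assumption is the article's, and the arithmetic below is only a sketch:

// Rough GF100 die-size estimate assuming Cypress-like transistor density.
#include <cstdio>

int main()
{
    double cypress_density = 2150.0 / 334.0;              // ~6.4 M transistors per mm^2
    double gf100_area = 3000.0 / cypress_density;         // 3.0 B transistors at the same density
    printf("GF100 estimate: ~%.0f mm^2\n", gf100_area);   // ~466 mm^2, i.e. "approaching 500"
    return 0;
}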

Ups, sorry. Already up here :p

Xoulz
10-01-2009, 03:01 AM
Not sure if been posted: AnandTech: Nvidia's Fermi... (http://www.anandtech.com/video/showdoc.aspx?i=3651)

Looks like Nvidia is trying to gain a foothold in the CPU market, somewhat conceding on the gaming end of the business. :shrug:

Mechromancer
10-01-2009, 03:10 AM
I hope to goodness that we can use 1 5870 for gaming and one GT300 for GPGPU on a Lucid Hydra enabled motherboard under Windows 7 next year. GT300 looks like it will be insane for GPGPU and OK for gaming. I hope Nvidia drops that driver BS they implemented when an ATI card is detected and lets us use our systems freely.

Xoulz
10-01-2009, 03:52 AM
I hope to goodness that we can use 1 5870 for gaming and one GT300 for GPGPU on a Lucid Hydra enabled motherboard under Windows 7 next year. GT300 looks like it will be insane for GPGPU and OK for gaming.

....I hope Nvidia drops that driver BS they implemented when an ATI card is detected and lets us use our systems freely.

How will nvidia's software detect..?

Mad1723
10-01-2009, 04:57 AM
How will nvidia's software detect..?

Easy


If
ATICatalyst=1
Then
Disable Physx and all features

It's not wizardry for them to detect a piece of software and disable features if it's present. The problem they could face is if someone switches from ATI to Nvidia and doesn't uninstall CCC correctly... then all his nice features will be disabled on the Nvidia card, which kills Nvidia's plan :rofl:
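
Nobody outside NVIDIA knows how the real check is implemented, but purely to illustrate how trivial that kind of detection is, a Windows program can walk the installed display adapters with the stock Win32 call below; the device strings searched for and the "disable PhysX" response are hypothetical:

// Hypothetical illustration only - not NVIDIA's actual mechanism.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    DISPLAY_DEVICEA dd;
    bool rivalFound = false;

    for (DWORD i = 0; ; ++i)
    {
        dd.cb = sizeof(dd);                        // must be set before each call
        if (!EnumDisplayDevicesA(NULL, i, &dd, 0)) // enumerate installed display adapters
            break;
        if (strstr(dd.DeviceString, "ATI") || strstr(dd.DeviceString, "Radeon"))
            rivalFound = true;                     // made-up detection criterion
    }

    printf(rivalFound
        ? "competitor GPU present - a driver could refuse to enable PhysX here\n"
        : "no competitor GPU found\n");
    return 0;
}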

RaZz!
10-01-2009, 05:26 AM
Easy


If
ATICatalyst=1
Then
Disable Physx and all features

It's not wizardry for them to detect a software and disable features if it's still present. The problem they could face is if someone switches from ATI to Nvidia and didn't uninstall CCC correctly.... then all his nice features will be disabled on the Nvidia card, which kills Nvidia's plan :rofl:

i think he means when using nvidia and ati graphics cards with a lucid motherboard.

Tim
10-01-2009, 05:30 AM
I know one thing: before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing the pros and cons of each high end card. ATI's launch of the 5870 excited me, but I hope GT300 will excite me even more. 3-6 months is a while though.

Clairvoyant129
10-01-2009, 05:38 AM
I know one thing, before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing pros and cons of each high end card. ATI's launch of the 5870 excited me, but I hope GT300 will exite me even more. 3-6 months is a while though.

+1

I don't understand why some people are so happy Fermi won't be available for several months. All it means is that you can't buy your AMD cards for a cheaper price.

Whatever suits you guys. :rolleyes:

Manicdan
10-01-2009, 06:44 AM
from the anand article

Double precision floating point (FP64) performance is improved tremendously. Peak 64-bit FP execution rate is now 1/2 of 32-bit FP, it used to be 1/8 (AMD's is 1/5). Wow.

does this mean LRB is going to have a tough time keeping up in the DP float? (if i remember correctly, thats where LRB was much stronger than traditional GPUs)
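
A back-of-envelope sketch using the 1.5GHz figure quoted earlier in the thread and the HD 5870's published 2.72 TFLOPS; final Fermi clocks are unknown, so treat the result as an assumption (Larrabee is left out because Intel never published comparable numbers):

// Peak throughput estimate: 512 cores, FMA = 2 FLOPs/cycle, half-rate double precision.
#include <cstdio>

int main()
{
    double cores = 512, ghz = 1.5;
    double sp = cores * 2 * ghz;      // ~1536 GFLOPS single precision
    double dp = sp / 2;               // ~768 GFLOPS double precision at the 1/2 rate

    double hd5870_dp = 2720.0 / 5;    // HD 5870: 2.72 TFLOPS SP at a 1/5 DP rate -> ~544 GFLOPS
    printf("Fermi estimate: %.0f SP / %.0f DP GFLOPS vs HD 5870: ~%.0f DP GFLOPS\n",
           sp, dp, hd5870_dp);
    return 0;
}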

Farinorco
10-01-2009, 07:42 AM
So, now I'm starting to form an idea about these Fermi chips. This new architecture brings some architectural changes for GPGPU over GT200, but graphics rendering wise it seems there's not much apart from DX11 support and more processing units.

I like the GPGPU thing. And they're doing some very interesting things. I think that right now NVIDIA is one generation ahead AMD regarding GPU computing.

The other side of the coin is graphics rendering performance. More or less the same architecture, 512 CP (let's start naming them by their new name :lol:), 48 ROPs, ~225GB/s memory bandwidth... it seems the improvement GTX285->GTX380 will be roughly proportional to HD4890->HD5870, or what is more or less the same, GTX280->GTX380 ~= HD4870->HD5870. Maybe slightly higher for the GeForce parts.

If so, the situation is going to be more or less the same as last generation, with the aggravating factor for NVIDIA that being 3-4 months late compared to the Radeon parts means competing against a product in a more advanced stage of its life cycle (AMD will have got their initial income from the product, so they will be able to price their cards more aggressively). So NVIDIA will probably find an even more hostile and aggressive pricing environment than last time (even when last time was infernal for them).

This obvious focus on the GPGPU side of things is going to pay off in the long term, in my opinion, but it's going to give them a good amount of headaches in the short term, I think.


i think he means when using nvidia and ati graphics cards with a lucid motherboard.

What's the difference? Even if the Lucid Hydra operates over the graphics driver levels, intercepting API calls and balancing the calls amongst the installed cards, those cards will need their drivers to work. Videocards are not going to work by magic paths even with Hydra.

AbelJemka
10-01-2009, 07:43 AM
It doesn't take much assuming to see that CPU cost more to develop than GPU and NV spent a whole lot of money in 2008 for a GPU company.

Similarly you don't know how much they spent on cuda or ion for research and development and yet you put it in your argument.

I will give you a history of our conversation, and you will see who puts in things he doesn't even know but "is assuming" because it "doesn't take a genius to know":

I said :

With a launch in Q1, that leaves a lot of time for AMD to make a move.

You supposed :

Knowing AMD, they might just completely forfeit the high end and fight at $250 and below for the next year if this card turns out to be 50-60% faster than the 5870, like they did against Intel lately, or the 88xx generation to an extent. If the GTX 380 is able to somehow beat R800, they might just abandon it altogether, as I doubt it would sell well at all; the 3870X2 bombed and it was still beating the 8800 Ultra. That had a lot to do with NV just being a stronger brand due to marketing.
Your assumption in this post is that GT300 is G80-like, so history will repeat itself. You started mixing up AMD strategy and ATI strategy, and you started praising Nvidia ("stronger brand due to marketing").

I said :

AMD launched its card before this time; what AMD is doing now is extrapolating GT300 performance and cost.
Performance? GTX285 SLI is like 30% faster than the 5870 on average. The GTX380 may be more like 50% to 60% faster than the 5870 on average. Maybe even less.
Cost? 40% more transistors than RV870 and 384 bits instead of 256 bits. $600? More?
DilTech speaks about a GTX395, but in Nvidia's history multi-GPU cards have launched very late (more than 6 months later on average).
Basically AMD have 3 months to sell DX11 cards with the help of Windows 7.
I post only facts. And my assumptions are basically in Anandtech's article about Fermi. Like DilTech or... you, I make guesses about GT300 performance and use history to point out that a GTX395 model may come late.
AMD having 3 months to sell cards is a given fact, no?

You said :

AMD R and D budget is tiny compared to Intel and NV(especially Intel), you can over estimate your rivals performance by 1000% and it will do nothing if you don't have the r and d budget to get something going to match that estimate.

With so many losing quarters in the past(except a couple quarters lately), I can imagine AMD graphic division was working on a shoestring budget, especially when AMD itself is so in the hole. Thankfully the research ATI put into r600 before the AMD and ATI merger paid off to some extent with r7xx and possibly to an extent r8xx as it turned out r6xx turned out to be a very scalable architecture. However research for the next big thing I can imagine being lacking for AMD and if this thing performs 50-60% faster than rv870, then AMD will need to come out with something new and not just a bigger chip with more shaders as returns have started to decrease with more shaders.

It will take either a big chip from AMD(which seems to be against their design philosophy) or a new architecture. I think a new architecture is not coming any time soon because of budget issues.

What AMD did with the R8xx is stretch the limits of the design(which NV did with g80->g200) that began with r600, it's all you can do when your company doesn't have the money to design a new architecture.
Basically you spoke a lot just to say AMD has no money, so they have no R&D budget, so they can't design a new architecture.

You added, responding to LordEC911:

[..]AMD won't be coming out with anything spectacular anytime soon because of the shoestring budget they have been working with because of so many bad quarters.[...]NV has been a much more profitable company overall and has probably been working on something pretty complex for the last 4 years as todays news confirms.
AMD has no money so they can do nothing, Nvidia has money so etc...

I posted this :

Ok i search like you and i find real numbers!
-2006 AMD R&D : 1.205 Billions$
-2006 ATI R&D :458 Millions$
with 167 Millions$ spent Q1'06+Q2'06 and 291 Millions$ for Q3'06+Q4'06
-So 2006 AMD+ATI :1.663 Billions$
-2006 Nvidia R&D : 554 Millions$

-2007 AMD+ATI R&D : 1.847 Billions$
-2007 Nvidia R&D : 692 Millions$

-2008 AMD+ATI R&D : 1.848 Billions$
-2008 Nvidia R&D : 856 Millions$
Real numbers pointing out that AMD spends as much, maybe more, on R&D than Nvidia. So your main argument that AMD has no R&D money goes in the trash.

You took a defensive stance and became "Mr Assumption":

If we look at those numbers AMD spent 2006-2007 spent 11% more and between 2007-2008 they didn't increase spending at all. Compare this to NV who spent 2006-2007 spent 25 percent more and 23.7% more

Not to mention AMD likely spent alot of money getting to 55nm and 40nm to first plus all the money they spent on DDR5 and DDR4 research. NV waited for all this to happen so they didn't have to spent much on research and getting there as much.

I can imagine since its AMD was running the show for the most part, I can see alot more money spent on their CPU then their GPU side, especially considering how behind they were during the conroe years, and looking at simple economics, getting that side on the better side of profitable was alot more important than getting it gpu side going.

You show off your math skills and use them to try to show that 25% of $700M is better than 11% of $1.65B...
You try to explain AMD's and Nvidia's expenses with your "Assumption-O-Maker".

I don't deny I made assumptions, but I used facts to make them.
You have posted nearly zero facts since the beginning of this discussion!

DilTech
10-01-2009, 08:53 AM
I know one thing, before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing pros and cons of each high end card. ATI's launch of the 5870 excited me, but I hope GT300 will exite me even more. 3-6 months is a while though.

Bingo, someone else gets it... Even when the GTX-380 comes out I'm going to have a hard time convincing myself to upgrade, and possibly won't until we see how well AvP performs on said parts. I still say the 8800GTX was the longest-running video card, period, and you can count the number of games it can't max out on your fingers.

The fact that even CryTek is going consoles now is a very bad sign.


So, now I'm starting to make an idea about this Fermi chips. This new architecture brings some architectural changes about GPGPU over GT200, but graphic rendering wise it seems there's not much apart from DX11 support and more processing units.

I like the GPGPU thing. And they're doing some very interesting things. I think that right now NVIDIA is one generation ahead AMD regarding GPU computing.

The other side of the coin is graphic rendering performance. More or less same architecture, 512 CP (let's start naming them by their new name :lol:), 48 ROP, ~225GB/s memory bandwidth... it seems that more or less the improvement GTX285->GTX380 will be proportional to HD4890->HD5870, or what is more or less the same, GTX280->GTX380 ~= HD4870->HD5870. Maybe a slightly higher to GeForce parts.

If it's so, the situation is going to be more or less the same than last generation, with the aggravating for NVIDIA that the 3-4 months late comparing to Radeon parts, are going to make them compete against a product in a more advanced stage of its life cycle (so AMD will have got their initial income with the product, they will can price more aggresively their cards). So probably NVIDIA will find an even more hostile and aggresive pricing environment than last time (even when last time was infernal for them).

This obvious focus in GPGPU side of things it's going to pay off on the long term, in my opinion, but it's going to give them a good amount of headaches on the short term, I think.



What's the difference? Even if the Lucid Hydra operates over the graphics driver levels, intercepting API calls and balancing the calls amongst the installed cards, those cards will need their drivers to work. Videocards are not going to work by magic paths even with Hydra.

It's not that the focus is on GPGPU, it's that the only info they're giving right now is GPGPU info because they don't want to ruin their partners business by showing off any graphics performance and out-right killing the sales of their current video cards.

Also, about it being a similar jump to HD4870->5870: there are more than a few differences. This is an entirely new architecture, and NVidia said they were not happy with their shader efficiency in the G80 and GTX-280 (which says something, because on paper they beat the 4870 with 30% of the shaders). If they found a way to make the shaders even more efficient than they were in the GTX-280, then a full 2x performance should come easily.

I will say the gpgpu focus may just pay off though, especially with native C++ support. If they can get some 3d companies on board to accelerate rendering using them I can see companies like Pixar having Tesla farms.

Finally, the bad news for intel, this thing should be a beast for Ray Tracing, as that's essentially still gpgpu work. :up:

flippin_waffles
10-01-2009, 09:11 AM
Well, why stop at GT300? lol. Estimates put GT300 3-6 months away. I'll suggest that another 3-6 months after that, there will be something else well worth waiting for. So really, you might as well wait another 6-12 months, unless what you really want to say is "I'd rather own an NV card". If so, just grow some balls and say it.


grimREEFER
10-01-2009, 09:17 AM
Wait a minute: if nvidia can do something complex like tessellation via CUDA, what the hell is gonna stop GT300 from supporting every future API via CUDA?

DilTech
10-01-2009, 09:18 AM
flippin, perhaps people don't mind the wait thanks to the fact that, even though the 5870 is presently the fastest single GPU, there are no games besides Crysis that the other cards don't maul, and Crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra high resolution with maxed-out AA and the realism mods, but right now that's a mere pipe dream and isn't possible.

In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.

Mad1723
10-01-2009, 09:25 AM
what the hell is gonna stop gt300 from supporting every future api via cuda?
The fact that shaders change considerably each time a new API comes out: there are new requirements for precision and calculation capabilities, new compression algorithms, bigger textures... lots of stuff changes, and it wouldn't be efficient to keep it as is. The performance hit of having programmable shaders doing specialized shader work would probably be pretty high.

Then again, it could be possible. I'm not a specialist in any of this, I could be wrong. :shrug:

Farinorco
10-01-2009, 09:37 AM
wait a minute, if nvidia can do something complex like tesselation via cuda, what the hell is gonna stop gt300 from supporting every future api via cuda?

Not via CUDA but via shaders. I haven't taken an in-depth look at DX11, but as far as I have seen, I think there are 2 new types of shaders that allow you to program the tessellation, Domain and Hull Shaders (in addition to the previous Pixel, Vertex and Geometry Shaders). That's not something specific to NVIDIA, but the way DX11 is defined.

Anyway, via CUDA (i.e., via GPGPU, be it CUDA, OpenCL, ATI Stream or whatever you want) you can effectively program an entire rendering process from scratch, using whatever approach you want, and modeling the rendering pipeline to your convenience.

The downside? You would be programming everything to run on general compute processors. The reason we still use Direct3D/OpenGL with their mostly fixed pipeline is that the hardware implements part of the tasks with units specific to them (there are units to map 2D textures onto the vertices of the 3D meshes, to apply filters, to project the 3D data onto the 2D bitmap defined by the camera frustum, and so on). All this work is (logically) done much faster by hardware whose specific mission is to do it (TMUs and ROPs, basically).

But yeah, I think the future of 3D graphics will be completely programmable pipelines, and the specific hardware units for particular tasks will disappear. When there's enough power to allow it, of course. That's the general direction with computers: the more power, the more we tend towards flexibility.
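
Just to illustrate the idea of "tessellation in software on general compute units" (a toy example, not how a real DX11 hull/domain pipeline or NVIDIA's implementation works; every name here is made up), a CUDA kernel that splits each input triangle into four by its edge midpoints:

// Toy subdivision kernel: each thread turns one triangle into four.
#include <cuda_runtime.h>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

__device__ Vec3 mid(const Vec3& p, const Vec3& q)
{
    Vec3 m = { 0.5f * (p.x + q.x), 0.5f * (p.y + q.y), 0.5f * (p.z + q.z) };
    return m;
}

__global__ void subdivide(const Tri* in, Tri* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Tri t = in[i];
    Vec3 ab = mid(t.a, t.b), bc = mid(t.b, t.c), ca = mid(t.c, t.a);

    Tri t0 = { t.a, ab, ca };   // three corner triangles...
    Tri t1 = { ab, t.b, bc };
    Tri t2 = { ca, bc, t.c };
    Tri t3 = { ab, bc, ca };    // ...plus the centre triangle
    out[4 * i + 0] = t0;
    out[4 * i + 1] = t1;
    out[4 * i + 2] = t2;
    out[4 * i + 3] = t3;
}

int main()
{
    const int n = 1024;
    Tri *in, *out;
    cudaMalloc(&in,  n * sizeof(Tri));
    cudaMalloc(&out, 4 * n * sizeof(Tri));
    cudaMemset(in, 0, n * sizeof(Tri));               // dummy geometry, just to run the sketch

    subdivide<<<(n + 127) / 128, 128>>>(in, out, n);  // each pass quadruples the triangle count
    cudaThreadSynchronize();

    cudaFree(in);
    cudaFree(out);
    printf("%d triangles in, %d out\n", n, 4 * n);
    return 0;
}

Each pass quadruples the triangle count, which is also a reminder of why the cache hierarchy and memory bandwidth matter so much if tessellation really does end up running on the shader cores.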

flippin_waffles
10-01-2009, 09:47 AM
flippin, perhaps people don't mind the wait thanks to the fact that even though the 5870 is presently the fastest single gpu there's no games besides crysis the other cards don't maul and crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra high resolution with maxed out AA and the realism mods, but right now that's a mere pipe dream and isn't possible.

In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.

That's a strange argument, DilTech. What you seem to be suggesting is that there currently are no games the 5870 can't handle besides Crysis, so why bother upgrading until GT300, at which time it'll be worth upgrading because you think it will be able to maul Crysis like a redheaded stepchild.

Interestingly, what NV has shown in its recent presentation raises doubts as to whether they even have working silicon, so how would anyone outside of NV's inner circles have any idea how it will perform? And then add to that, there is much speculation that GT300 is designed more for GPGPU than for 3D rendering.

And besides, if the only thing you are interested in upgrading your graphics for is crysis, then there is always 2x or 3x 5850 which will likely come in cheaper than gt300.

And there is also Eyefinity, which IMO is a much more compelling reason to upgrade, and if you really want the ultimate immersion in your gaming, there is no alternative here. Have a look at [H]'s video review. Now that is something worth getting excited about, and it requires the horsepower generated by the 5800 series.

http://www.hardocp.com/article/2009/09/28/amds_ati_eyefinity_technology_review

And then there is DX11 with compute shaders, tessellation etc., and the fact that all of the top developers are glowing about the possibilities it brings. The next generation of 3D engines is being written specifically for DX11 hardware. Hell, NV is still struggling to get DX10.1 out the door, all while bribing certain developers to disable that feature until it has finally been able to implement it in its own hardware. Funny that. Three cheers for TWIMTBP!!

Anyway, I'd say there are many more reasons to upgrade now, than there was with the G80 launch.

Newblar
10-01-2009, 09:51 AM
flippin, perhaps people don't mind the wait thanks to the fact that even though the 5870 is presently the fastest single gpu there's no games besides crysis the other cards don't maul and crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra high resolution with maxed out AA and the realism mods, but right now that's a mere pipe dream and isn't possible.

In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.

:up:
im still on my 8800gt and i play all of the games i like fine.. but my other games, folding and seti would like a gt300 very much :yepp:

DilTech
10-01-2009, 10:02 AM
flippin, eyefinity, while seemingly interesting doesn't matter to me as I run a 1080p plasma tv. I have no intention of buying 2 more for eyefinity, and even if I did it wouldn't work because eyefinity requires an active displayport for 3 monitors to work. Besides, I refuse to play with monitor borders interfering with my view.

Also, when did I say there are no games the 5870 can't handle... Most games an old 8800GTX can max out just fine, same with a 4850... That's why I said there's presently not much of a need to upgrade. DX11 presently only makes a difference in one title, the lame "Battleforge" battle-card monster game (avg review score of 7.3), and by the time the big one (AvP) comes out, which will be February, both brands will have their cards on the table.

The only games the previous gen can't max out, for the most part, are Stalker: Clear Sky, Crysis, and ArmA2. ArmA2 at my resolution drops below 20 without even fully maxing out everything, S: Clear Sky averages ~30fps maxed out with 4xAA at my resolution on a 5870 (i.e. too low), and Crysis still isn't playable at my resolution with AA. Basically, the same things last gen can't do, the 5870 can't do either, so what's the reason to buy a 5870? On top of that, ATI's problem with DX10 and HDTVs.

Now, let's compare that to the G80 launch, where there was plenty of titles the previous gen couldn't max out at high resolution, and the 8800GTX had no problem being 2x+ faster than everything but the 7950GX2, which it STILL was always faster by a good margin. This was when oblivion was still a big title(and the G80 was at times 3x faster than the previous gen in said title). Now, compare that to the 5870 launch, where there's not much the last gen couldn't do that it can do....

See why I said waiting isn't an issue for most?

Manicdan
10-01-2009, 10:16 AM
My opinion is any card since DX10 is going to last until we see the next consoles released. Sure, you can go stronger to get more eye candy, but every game will look damn fine at 1680x1050 with a little AF and possibly some AA until a new console arrives. A good-selling game has to work on old hardware, otherwise it wouldn't sell, and the biggest incentive is usually consoles.

So there is no NEED to upgrade for another 3 years, but we all want to, and at each person's price limits and performance expectations, we will all be buying different things at different times.

Ursus
10-01-2009, 10:17 AM
I used to be quite curious about nvidia's new products, but now that they've disabled PhysX on systems with an AMD GPU and a dedicated nvidia physics GPU, I will be boycotting them in whatever way I can.

I think this latest gripe shows exactly where nvidia's priorities are. When they can make money by screwing over their own customers, they will.
It's quite shameless really, and only not a very big deal because at this stage GPU physics is not a very big deal.

I hope the PhysX franchise fails in every single way.

flippin_waffles
10-01-2009, 10:37 AM
DilTech, no soup for you. First, I understand that you'd be tickled pink to convince as many people as possible to wait for nv's silicon to finally be ready, whenever that may be ( judging from what Charlie has to say, 6 months isn't a guarantee either. And yeah, his track record on nv is an order of magnitude more accurate than anything nv has said). That argument you are using is the oldest in the book, and it's maybe time to update your way of thinking. The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200. True, GT200 did flop because ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. 5800 gets you dx11, Eyefinity, and the best performing card on the market.
And as for gaming on a 1080p plasma, how is that a PC again? where is the immersion in that? you might as well be running a console! lol Yeah, Eyefinity is where immersion is at, and that is reason enough to pick up a 5800 series card. Probably last a good 3 years without the need to upgrade.

The only advantage and answer to Eyefinity that NV has is that there is no way to even come close to producing and representing that immense level of immersion through a video over the internet. Marketing it will be tough, but the real enthusiasts will know how cool this is.
So while I don't doubt you have no intention of placing a 5800 series in your console, I think most will definitely have reason to NOT wait. :p:

Helloworld_98
10-01-2009, 11:08 AM
The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200. True, GT200 did flop because ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. 5800 gets you dx11, Eyefinity, and the best performing card on the market.
And as for gaming on a 1080p plasma, how is that a PC again? where is the immersion in that? you might as well be running a console! lol Yeah, Eyefinity is where immersion is at, and that is reason enough to pick up a 5800 series card. Probably last a good 3 years without the need to upgrade.



I don't think Eyefinity is really a major plus for it, since you need a DisplayPort monitor for it, and to make it worthwhile by using 3 monitors of the same model it's going to cost you £1200+, and then you also have to pay another £310 for another card for CF to make sure you get good performance.

also gaming on a 1080p plasma, probably better than an LCD monitor since you get a bigger screen, better contrast and you don't really get pixelation.

AbelJemka
10-01-2009, 11:39 AM
I don't think eyefinity is really a major plus for it, since you need a DP monitor for it and to make it worth while by using 3 monitors, same model, it's going to cost you £1200+, and then you also have to pay another £310 for another card for CF to make sure you get good performance.

also gaming on a 1080p plasma, probably better than an LCD monitor since you get a bigger screen, better contrast and you don't really get pixelation.
I think the fight between Eyefinity and big HDTVs is without end.
It depends on user preference and will.
I don't think all games are suited to being played on a big HDTV.
At the same time, I don't think all games are suited to being played on Eyefinity.

marten_larsson
10-01-2009, 12:01 PM
I don't think eyefinity is really a major plus for it, since you need a DP monitor for it and to make it worth while by using 3 monitors, same model, it's going to cost you £1200+, and then you also have to pay another £310 for another card for CF to make sure you get good performance.


Why would I need three monitors of exact same type? I'm aiming at one good 22" which I already bought and two slightly worse TN-displays for my peripheral vision, also 22" with 1680x1050 res. I want to see more, not only bigger.

DilTech
10-01-2009, 12:18 PM
DilTech, no soup for you. First, I understand that you'd be tickled pink to convince as many people as possible to wait for nv's silicon to finally be ready, whenever that may be ( judging from what Charlie has to say, 6 months isn't a guarantee either. And yeah, his track record on nv is an order of magnitude more accurate than anything nv has said). That argument you are using is the oldest in the book, and it's maybe time to update your way of thinking. The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200. True, GT200 did flop because ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. 5800 gets you dx11, Eyefinity, and the best performing card on the market.
And as for gaming on a 1080p plasma, how is that a PC again? where is the immersion in that? you might as well be running a console! lol Yeah, Eyefinity is where immersion is at, and that is reason enough to pick up a 5800 series card. Probably last a good 3 years without the need to upgrade.

The only advantage and answer to Eyefinity that nv has, is that there is no way to even come close to producing and representing the immense level of immersion through a video over the internet. Marketing it will be tough, but the real enthusiasts will know cool this is.
So while I don't doubt you have no intention of placing a 5800 series in your console, I think most will definitely have reason to NOT wait. :p:

So let me get this straight....

First, not being able to max out any game you can't on a last gen card is a reason to upgrade, and more of a reason than the 8800GTX was? Last time I checked, gaming performance is the number 1 reason, and when it doesn't bring you anything you couldn't already do then it just becomes pointless.

DX11? I'll care about that when we have a game worthwhile that runs DX11... AvP. Comes out in february.

Playing on a plasma TV makes my PC a console? Why would one even say that? Most monitors these days are 1080p, which is the same resolution as my TV. At the same time, a good plasma has better black levels, better color reproduction, no ghosting, perfect color uniformity, and less video delay than most consumer-level LCDs. On top of that, a much bigger screen than could be had with a computer monitor. If that makes my computer a "console", then I am much happier with my console than I would be with your computer. Think about it: a 42" screen sitting on your desk, covering your entire vision in all directions... How do you get more immersive than that?

Now, about that eyefinity issue... I really don't care too much about it, especially since most games won't even allow you to set the FOV wide enough for it to make sense. Plus, you'd be stuck buying at least one new monitor in most cases, and dealing with the borders from the multiple monitors, which is a massive turn off to me. Maybe when a triple monitor in one frame comes out at a decent price it'll attract my attention, but in the mean time my 42" plasma is much better for me because I have no bars and the quality of the image is like looking through a window, which is something you can't get no matter how many lcds you put together because you still have parts interrupting your image.

So yes, I can honestly say I see no reason not to wait for NVidia's part, because even if it turns out not to be worth buying it will at least drop the price of the HD5870, which is still a win considering right now there's nothing that I could do with it that I can't do with my current set up.

Clairvoyant129
10-01-2009, 12:19 PM
DilTech, no soup for you. First, I understand that you'd be tickled pink to convince as many people as possible to wait for nv's silicon to finally be ready, whenever that may be ( judging from what Charlie has to say, 6 months isn't a guarantee either. And yeah, his track record on nv is an order of magnitude more accurate than anything nv has said). That argument you are using is the oldest in the book, and it's maybe time to update your way of thinking. The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200. True, GT200 did flop because ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. 5800 gets you dx11, Eyefinity, and the best performing card on the market.
And as for gaming on a 1080p plasma, how is that a PC again? where is the immersion in that? you might as well be running a console! lol Yeah, Eyefinity is where immersion is at, and that is reason enough to pick up a 5800 series card. Probably last a good 3 years without the need to upgrade.

The only advantage and answer to Eyefinity that nv has, is that there is no way to even come close to producing and representing the immense level of immersion through a video over the internet. Marketing it will be tough, but the real enthusiasts will know cool this is.
So while I don't doubt you have no intention of placing a 5800 series in your console, I think most will definitely have reason to NOT wait. :p:

GT200 flopped? Really? Just because it may have sold less it's a flop? PC Gaming on a big HDTV = console gaming? What? :ROTF:

and I thought some of the NV fanboys were bad. :rolleyes:

annihilat0r
10-01-2009, 12:39 PM
The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200.

LOLWUT

This is one of the MOST inaccurate posts I have read lately here.

There are much more compelling reasons to upgrade (to 5870) now than there was for G80? To be able to say that you must have completely missed the G80 launch and the times before that.

Fact is, G80 was a HUGE, HUGE leap. Before G80 it was a pain in the ass to even play oblivion maxed at higher resolutions. And super high resolutions like 2560 (and in some cases even 1920) were out of question for the previous generation (7000s/x1000s).

With G80, suddenly nearly all games were playable at super high resolutions with max settings. THREE years after its launch, a 8800GTX is still enough to max a lot of modern games.

Whereas today, Nvidia GT200s and ATI 4000s are able to play nearly all games at all resolutions (save Crysis and poorly coded games like Stalker) and there is absolutely no compelling reason to upgrade to something else right now. DX11? There are no DX11 games except Battleforge. And by the time important DX11 games (AvP2) hit the shelves, GT300 will be either ready or very close to launch.

Why oh why would someone today need to upgrade from the previous generation to 5800s? No reason, nada.

Whereas, 8800GTX was a huge, huge leap and there was EVERY reason to upgrade from a 7000s or x1000s series to the 8800GTX. What was that reason? Not actually being able to play games maxed at the previous generation.

Manicdan
10-01-2009, 12:48 PM
i only upgraded my 2900xt cause of power consumption, it was too much on my electric bill, but no game i played had any issues. and a 4850 for 100$ is hard to pass up, and will be just fine i bet until i see a 5850 for under 200$

gamervivek
10-01-2009, 01:05 PM
GT200 flopped? Really? Just because it may have sold less it's a flop? PC Gaming on a big HDTV = console gaming? What? :ROTF:

and I thought some of the NV fanboys were bad. :rolleyes:

it did,quite badly for nvidia.:up:

flippin_waffles
10-01-2009, 01:29 PM
Alright fine DilTech. I didn't realize you sat 3 feet away from your tv when you are on your pc. In that case yes, it probably would have some immersion. No where near what Eyefinity offers, but that's your choice.

And you know what I find pointless and very irrational? Waiting 6 months or more for a card that may or may not come, to play crysis at slightly higher settings than what the most powerful card on the market now can handle, providing it actually can. You want to lay down any guarantees? How many times do you think people want to play through crysis anyway? I'm willing to bet people are sick to death of it. You must be one of the only ones willing to spend $600 on a card to play a single game, at slightly higher settings. Yup, makes sense.

For $600 I could get 2 5850's in crossfire, and mop the floor with a gt300, have it now, and have Eyefinity support, where the real immersion comes from. Isn't that what gamers and enthusiasts have been asking for, for around a decade now? Thought so.

So yes, I can honestly say it's silly to wait for gt300, to simply offer what you guess might be a better crysis experience.

To annihalator bloviating about G80: Are you trying to say that G80 could play every game on the market at 2560x1600 at max quality settings? If not, then by DilTech's logic, what was the point of upgrading? Surely you could have waited until there was a card on the market to do so. It'd make little sense to buy something otherwise...:rolleyes:

Sly Fox
10-01-2009, 01:36 PM
I enjoy a good ole Nvidia bashing as much as the next guy, but it really does make sense to wait and at least see what Nvidia has to offer.

If it's crap, whatever, buy ATI. If it's good, buy it. What's the problem here? :shrug:

marten_larsson
10-01-2009, 01:44 PM
I enjoy a good ole Nvidia bashing as much as the next guy, but it really does make sense to wait and at least see what Nvidia has to offer.

If it's crap, whatever, buy ATI. If it's good, buy it. What's the problem here? :shrug:

But you don't know how long that will take. According to Anandtech it's at least three months and indicating even longer (my guess late Q1). By then the 5870 is six months old. By then we'll probably hear rumours about something new from ATI to counter GF100..

When GT200 launched it was reasonable to wait and see as the 4000-series was only weeks away, but now it is at least three months, probably nearly double that.

N19h7m4r3
10-01-2009, 01:46 PM
I won't be buying a new ATI card or nVidia one. I always skip a gen or two after I buy a card.

I went from a 4800Ti SE-6600GT-8800GTX-4870X2. I'll be on my current card until the 6xxx series from ATI or the GT400 is out from nV.

By then there should be a lot of DX11 games, not to mention new and better cpus.

I do find Eyefinity interesting though, and I'm looking forward to seeing nV's new cards.

Sly Fox
10-01-2009, 01:46 PM
But you don't know how long that will take. According to Anandtech it's at least three months and indicating even longer (my guess late Q1). By then the 5870 is six months old. By then we'll probably hear rumours about something new from ATI to counter GF100..

When GT200 launched it was reasonable to wait and see as 4000-series was only weeks away but now it is at least three months, probably nearly the double.

Good point. :up:

I guess in my case it's a bit different since the only game I'd possibly play that would require more GPU power than I have is Crysis. I don't mind waiting if I have to.

For more active gamers or people using higher-res LCD's, I think you're right though.

marten_larsson
10-01-2009, 01:51 PM
Good point. :up:

I guess in my case it's a bit different since the only game I'd possibly play that would require more GPU power than I have is Crysis. I don't mind waiting if I have to.

For more active gamers or people using higher-res LCD's, I think you're right though.

Well, that's always a choice you have to make. I mean, I haven't owned a single top end card ever, always waited for better and cheaper but that never seem to end :)

I don't think people with 4870X2s or GTX295s should upgrade unless they see something they want more, like Eyefinity or better power consumption. Performance is roughly the same anyway... Still, these are the customers that upgrade more frequently as well, so to them it might not be that hard to decide (buy 5870s now and GF100 later if they perform better).

DilTech
10-01-2009, 03:40 PM
But you don't know how long that will take. According to Anandtech it's at least three months and indicating even longer (my guess late Q1). By then the 5870 is six months old. By then we'll probably hear rumours about something new from ATI to counter GF100..

When GT200 launched it was reasonable to wait and see as 4000-series was only weeks away but now it is at least three months, probably nearly the double.

Guess people missed it... You see, NVidia were talking about the Tesla cards, which have always launched later than the desktop variant. That would still give the desktop graphics card a chance at launch Q4 of this year. :up:

Xoulz
10-01-2009, 04:41 PM
Guess people missed it... You see, NVidia were talking about the Tesla cards, which have always launched later than the desktop variant. That would still give the desktop graphics card a chance at launch Q4 of this year. :up:



That^^ sounds more like wishful thinking than reality though... :cool:



Coincidentally, you should read Shimpi's article more closely. He knows a great deal more than he's allowed to tell us, but he hints that Nvidia sacrificed some of their performance for greater sales in other markets. Looking over Fermi's architecture, I tend to agree.

Nvidia has moved its business model from gaming to scientific computing. Reading through the article and seeing the "highlights" of the new architecture on Nvidia's own website, the only thing that is great for 3D rendering is the stacking and the efficiency. The rest is to aid Nvidia's move into other markets... :down:


Nvidia got pushed out of the chipset business and has been looking to expand; C++ and CUDA is their new co-processor, which they'll market heavily as something "everyone needs". They tease us with price, but I highly doubt Nvidia will break the sub-$199 barrier, so they will NEED to be able to sell these HUGE chips as "co-processors" to the scientific community, etc.



Lastly, what makes you think the GT300 will be worthy of an upgrade over a HD5890, etc? Nothing in Fermi's architectural changes suggests that games will run 3x faster than on the GTX285... or am I missing something?

LordEC911
10-01-2009, 04:46 PM
Guess people missed it... You see, NVidia were talking about the Tesla cards, which have always launched later than the desktop variant. That would still give the desktop graphics card a chance at launch Q4 of this year. :up:
Umm... G200/G200b?
GeForce parts were released afterwards... by a good few months.

Chumbucket843
10-01-2009, 04:46 PM
That^^
Lastly, what makes you think the GT300 will be worthy of an upgrade over a HD5890, etc? Nothing in Fermi's architectural changes, suggest that games will play 3x greater than the GTX285... or am I missing something?
the whitepapers said there will be future versions with less double precision for gaming. that probably wont happen this gen though. no one is expecting 3x performance in games. 2x faster could be possible.

570091D
10-01-2009, 08:09 PM
the whitepapers said there will be future versions with less double precision for gaming. that probably wont happen this gen though. no one is expecting 3x performance in games. 2x faster could be possible.

i am hearing 2.4x increase over gtx280 performance. and while nvidia have added a lot of fp performance, why would that hurt gaming performance? it might take up extra room on the die and consume a bit more power, but i don't understand how it would hurt gaming performance.

it seems to me that there are A LOT of people on here that want the chip to fail hard. why? why does a perceived lack of competition in the market give you joy? do you wish to pay more for gfx cards? :down:

it also seems to me that some here are forgetting that yesterday's show and tell was all about tesla. everything nvidia is talking about in terms of fermi now is related to tesla; they have said that they will not talk about gaming performance because they don't want to tip their hand.

003
10-01-2009, 08:40 PM
i am hearing 2.4x increase over gtx280 performance. and while nvidia have added a lot of fp performance, why would that hurt gaming performance? it might take up extra room on the die and consume a bit more power, but i don't understand how it would hurt gaming performance.

Exactly. People are being very thick with regards to the GT300. For example, consider this excerpt from Ars Technica:

But Fermi marks the point at which NVIDIA has officially begun making its discrete GPU tradeoffs favor the HPC market at the expense of gamers. ... and quite possibly leaving the single-chip gaming GPU crown in the hands of AMD's more specialized Evergreen this time around.

Seriously? Who is writing this garbage? GT300 will be at least as fast as the GT200 architecture with the SPs increased by 2.13x in games. However, in reality, it will be a bit faster due to the efficiency of the shaders being increased.

While it is true a number of the HPC tailored features won't necessarily benefit game performance very much, they also will not hurt performance in any way.

Regardless, when the GTX380 is finally released, all this garbage information will be laid to rest.
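
The 2.13x number being thrown around is just the rumoured shader-core ratio, and it's easy to sanity-check alongside the bandwidth figures. A minimal sketch (Python), where the GTX285 values are published specs and the GT300 values are the rumoured 512 cores / 384-bit bus from the first post, with the GDDR5 data rate as a pure placeholder of my own:

```python
# Rough ratio check on the "2.13x" shader figure and the rumoured bandwidth.
# GTX285 numbers are published specs; the GT300 column is rumour, and the
# GDDR5 data rate (4.0 GT/s) is purely a placeholder assumption.
gtx285 = {"cuda_cores": 240, "bus_bits": 512, "data_rate_gtps": 2.484}
gt300  = {"cuda_cores": 512, "bus_bits": 384, "data_rate_gtps": 4.0}

def bandwidth_gb_s(card):
    # bytes per second = (bus width in bytes) * effective data rate
    return card["bus_bits"] / 8 * card["data_rate_gtps"]

print("shader-core ratio: %.2fx" % (gt300["cuda_cores"] / gtx285["cuda_cores"]))
print("GTX285 bandwidth : %.0f GB/s" % bandwidth_gb_s(gtx285))
print("GT300 bandwidth  : %.0f GB/s (if GDDR5 runs at 4.0 GT/s)" % bandwidth_gb_s(gt300))
```

Shader count scaling 2.13x says nothing by itself about where ROPs or bandwidth cap out, which is exactly what the rest of the thread argues about.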

YukonTrooper
10-01-2009, 08:45 PM
The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200.
HA! GT200? Sure. G80? Not a chance. G80 was like the Jesus of video cards.

gOJDO
10-01-2009, 09:24 PM
That would still give the desktop graphics card a chance at launch Q4 of this year. :up:

You mean a paper launch? :)

Anyway, according to the specs and the few details described in the articles (like the one from Anand), GT300 (or whatever it will be called) will kick ass. I think the same as you, DilTech: it should be 40%-60% faster than the 5870.

As for playing games with a 42" on your desk, IMO it's stupid. You'll have to turn your head instead of moving your eyes to locate things on the screen. I'm playing games on my 37" full HD TV, sitting on the sofa. It makes you feel like you are playing on a console, but with much better graphics. Anyway, I am still 3~4m away from the TV.

Solus Corvus
10-01-2009, 09:31 PM
Seriously? Who is writing this garbage? GT300 will be at least as fast as the GT200 architecture with the SPs increased by 2.13x in games. However, in reality, it will be a bit faster due to the efficiency of the shaders being increased.
Scaling isn't exactly linear. Some games will see around a 2.13x speedup over GT200, some will see less. In cases where the new arch removes bottlenecks, there may be a few games with more than a 2.13x increase. But of course we will need to see benchmarks to know how it works out in reality.


While it is true a number of the HPC tailored features won't necessarily benefit game performance very much, they also will not hurt performance in any way.
They might not hurt performance, but they do cost die space and increase power consumption. Only time will tell if these end up being useful features for most customers, or just wasted space/electricity. It really depends on how the GPGPU market evolves.

003
10-01-2009, 09:35 PM
Scaling isn't exactly linear. Some games will see around a 2.13x speedup over GT200, some will see less. In cases where the new arch removes bottlenecks, there may be a few games with more then 2.13x increase. But of course we will need to see benchmarks to know how it works out in reality.

True. I'm referring to the people who run around like a chicken with its head cut off screaming that the GT300 is going to suck for games and it will be beaten by RV870. Honestly, in a WORST case scenario, it will be roughly twice as fast as the GTX285, which will trump a 5870 easily.

astrallite
10-01-2009, 09:37 PM
You sit 12 feet from a 37" TV? You have damn good eyes. The standard convention in the TV industry for optimal viewing is that the distance you sit from the TV should not exceed about 1.5x the screen's diagonal. So a 37" TV should be watched from around 4.6 feet away, or 1.4m.

By the way, even at this distance, the perceived size of the screen is much smaller than a 22" monitor (which is considered small end for PCs) on your desk. Trying to read text from a typical PC game even at 1.4 meters away on a 37" screen will cause noticeable eye strain over time. I have 20/20 vision and my eyes start to hurt if I sit too far away from a screen with a PC game while reading text. Console gaming is completely different since the text size is normalized for typical TV subtitles.
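
The eye-strain point is really about how much of your field of view the screen fills. A quick sketch of the subtended-angle math (Python), assuming 16:9 panels and treating the 24" desk distance as a typical guess rather than anything from the post above:

```python
import math

def horizontal_angle_deg(diagonal_in, distance_in, aspect=(16, 9)):
    # Width of a 16:9 panel from its diagonal, then the horizontal angle it subtends.
    w, h = aspect
    width = diagonal_in * w / math.hypot(w, h)
    return math.degrees(2 * math.atan(width / (2 * distance_in)))

# 37" TV at ~1.4 m (about 55") vs. a 22" monitor at an assumed ~24" desk distance.
print('37" TV at 55 in:      %.0f degrees' % horizontal_angle_deg(37, 55))
print('22" monitor at 24 in: %.0f degrees' % horizontal_angle_deg(22, 24))
```

On those assumptions the desk monitor actually fills more of your view than the TV does from the sofa, which is the perceived-size argument being made here.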

LordEC911
10-01-2009, 09:53 PM
the whitepapers said there will be future versions with less double precision for gaming. that probably wont happen this gen though. no one is expecting 3x performance in games. 2x faster could be possible.
Please quote or tell me what page that is on. I have read through the whitepaper 3 times and haven't seen ANY mention of that.


True. I'm referring to the people who run around like a chicken with its head cut off screaming that the GT300 is going to suck for games and it will be beaten by RV870. Honestly, in a WORST case scenario, it will be roughly twice as fast as the GTX285, which will trump a 5870 easily.

~2x as fast is the BEST case scenario. Shaders are being doubled but completely overhauled, which should bring IPC gains, but not necessarily.
Also there are many other parts of GF100 that could possibly bottleneck the architecture.

Solus Corvus
10-01-2009, 09:55 PM
True. I'm referring to the people who run around like a chicken with its head cut off screaming that the GT300 is going to suck for games and it will be beaten by RV870. Honestly, in a WORST case scenario, it will be roughly twice as fast as the GTX285, which will trump a 5870 easily.

The worst scenario could be way worse than that. Roughly twice as fast as a GTX285 is probably the REALISTIC scenario.

IMO, it sounds like this will be a repeat of the last round. NV will have the fastest single chip and ATI will have a somewhat slower, but price/performance competitive offering. I don't know about the dual cards. Obviously ATI will have an x2. I imagine NV will want to release a dual GPU card to counter. But how do you cool 6B full speed transistors in 2 slots of space? They'll have to cut down and/or reduce the speed of the chips or wait for a shrink. The fastest single card halo could go either way, imo.

003
10-01-2009, 10:20 PM
Roughly twice as fast as a GTX285 is probably the REALISTIC scenario.

I'm not convinced. We'll all find out when it's released though.


I imagine NV will want to release a dual GPU card to counter.

Nvidia has already indicated that there will be a dual GPU version of the GT300.


But how do you cool 6B full speed transistors in 2 slots of space?

Number of transistors is not what determines heat output; that would be the TDP, which will be similar to the GTX285, so that really won't be much of an issue.

The fastest single card halo could go either way, imo.

Nvidia won't let that happen, and based on performance of the 5870, and knowing the specs of GT300, I believe it is pretty clear the 380 will be faster, which nvidia has already confirmed.

LordEC911
10-01-2009, 10:28 PM
Nvidia has already indicated that there will be a dual GPU version of the GT300.
Eventually, yes.
Full spec'ed and full speed? No.
6months after release? Maybe, most likely longer.
32/28nm shrink needed? Possibly.

RealTelstar
10-01-2009, 10:33 PM
The worst scenario could be way worse then that. Roughly twice as fast as a GTX285 is probably the REALISTIC scenario.


I expect the performance to be MORE than two 285 in high-res 8xAA.

003
10-01-2009, 10:33 PM
Eventually, yes.
Full spec'ed and full speed? No.
6months after release? Maybe, most likely longer.
32/28nm shrink needed? Possibly.

Those are all guesses. Full spec and speed should very well be possible if TDP is similar to the GTX285 (which it should be).

6+ months is not going to be the case IMO. There will be no need for a shrink if TDP is similar to the 285.

Andrew LB
10-01-2009, 10:36 PM
Well... I for one am quite happy that I held onto my money and waited for the GT300 specs to come to light. I'll more than likely buy a second gtx285 if they drop in price, or just hold onto it until the initial GT300 card prices equalize after their release.


I used to be quite curious regarding nvidia's new products, but now that they disabled physics on systems with an amd gpu and a dedicated nvidia physics gpu, I will be boycotting them in whatever way i can.

Why? Do you honestly believe that it is in nVidia's best business interests to spend millions of dollars on driver development to make their cards work smoothly as a physx card in conjunction with an ATi main graphics card?

yeah, that's a great idea.

... spend tons of money to help out your competitor.... :rolleyes:

LordEC911
10-01-2009, 10:44 PM
Those are all guesses. Full spec and speed should very well be possible if TDP is similar to the GTX285 (which it should be).

6+ months is not going to be the case IMO. There will be no need for a shrink if TDP is similar to the 285.
Ummm... so GTX295 is two full spec'ed GTX285s? That is news to me...
Yes, my post was guesses and speculation but no less than your posts.

Solus Corvus
10-01-2009, 10:52 PM
I'm not convinced. We'll all find out when it's released though.
If it was just a doubling of resources without major architecture changes then I would say that the average performance would be LESS than 2x the previous gen. You can't expect linear scaling.

But I'm taking into account that there are architecture changes. Most of the changes don't sound like they would make much, if any, performance difference in games. But perhaps the efficiency improvements will help make the most of the resources available. But without major changes to the shaders and arch I don't see how you'd get more than 2.13x scaling, except in corner cases.


Nvidia has already indicated that there will be a dual GPU version of the GT300.
Any timeframe? Will it launch with/near the other cards, or will we have to wait for a shrink like the 295?


Number of transistors is not what determines heat output, that would be the TDP, which will be similar to the GTX285, so that really won't be much of an issue. Nvidia won't let that happen, and based on performance of the 5870, and knowing the specs of GT300, I believe it is pretty clear the 380 will be faster, which nvidia has already confirmed.
You know what I mean, of course 10 billion 1 mhz transistors wouldn't be very hard to cool.

A TDP near the GTX285 would be very much of an issue. How many NV cards are on the market with 2 full speed, not cut down GTX285 cores, even now? 1000? To make the standard 295 they had to cut the number of shaders and the speed. To make the 395 they will probably have to make even deeper cuts to be able to fit it in the power/TDP envelope.

Nvidia didn't confirm anything; they said they "believe" it will be faster in games. And I'm not going to take their word for it; benchmarks will tell us the truth.
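
On the dual-card power question above: the practical ceiling is less about the cooler than about the power envelope a two-slot PCIe board is allowed to draw. A rough budget sketch (Python); the connector limits are the PCIe spec numbers, while the per-GPU TDP is just an assumed placeholder, since nothing official exists for GT300:

```python
# Power budget for a dual-GPU board. The connector limits are PCIe spec values;
# the per-GPU TDP is a placeholder guess, not a leaked figure.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150
board_ceiling_w = SLOT_W + SIX_PIN_W + EIGHT_PIN_W   # 300 W for a 6-pin + 8-pin card

assumed_gpu_tdp_w = 225                               # placeholder assumption
two_full_chips_w = 2 * assumed_gpu_tdp_w              # ignores memory/VRM overhead

print("board ceiling:  %d W" % board_ceiling_w)
print("two full chips: %d W" % two_full_chips_w)
print("fits as-is:     %s" % (two_full_chips_w <= board_ceiling_w))
```

That gap is the usual reason dual-GPU boards ship with lowered clocks or cut-down chips, or wait for a process shrink.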


I expect the performance to be MORE than two 285 in high-res 8xAA.
Why?

570091D
10-01-2009, 11:02 PM
~2x as fast is the BEST case scenario. Shaders are being doubled but completely overhauled, which should bring IPC gains, but not necessarily.
Also there are many other parts of GF100 that could possibly bottleneck the architecture.

i would like to know which of the listed features will bottleneck this gpu.


Eventually, yes.
Full spec'ed and full speed? No.
6months after release? Maybe, most likely longer.
32/28nm shrink needed? Possibly.


http://www.fudzilla.com/content/view/15758/34/ (http://www.fudzilla.com/content/view/15758/34/)

i know it's fud, and i'm looking for corroborating sources, but they're stating a full range release (even the 2 gpu version). when/if i find anyone else with the same info, i'll share it.

DeathReborn
10-01-2009, 11:08 PM
You know what I mean, of course 10 billion 1 mhz transistors wouldn't be very hard to cool.

A TDP near the GTX285 would be very much of an issue. How many NV cards are on the market with 2 full speed, not cut down GTX285 cores, even now? 1000? To make the standard 295 they had to cut the number of shaders and the speed. To make the 395 they will probably have to make even deeper cuts to be able to fit it in the power/TDP envelope.

Whackypedia (http://en.wikipedia.org/wiki/GeForce_300_Series) has you not far off the transistor count...


The initial model in the series to be released will use the GT300 chip, a very large chip that is heavily modified of G92 GPU(Up to nine billion transistors in quad core version) manufactured by TSMC in a 55-nanometer process. Versions will be available with 1.5GB, 3GB or 6GB of memory, attached to six separate 64-bit GDDR4 memory controllers on the chip.

:rofl:

Also the GTX295 only had lowered clocks & 448bit Memory Bus instead of 512bit. It has the full 240 shaders on each chip.

Solus Corvus
10-01-2009, 11:17 PM
Also the GTX295 only had lowered clocks & 448bit Memory Bus instead of 512bit. It has the full 240 shaders on each chip.
Yeah, actually that's correct. Though they did have to wait for the shrink before they could do it. And even now only the MARS has full speed cores.

003
10-01-2009, 11:25 PM
Ummm... so GTX295 is two full spec'ed GTX285s? That is news to me...
Yes, my post was guesses and speculation but no less than your posts.

1. The GTX295 is not two full spec 285s, but it still is the fastest single card.

2. Full spec dual 285 on a single card is perfectly possible. Look at the Asus Mars. Not only is it full spec, but it actually has 2GB of memory per GPU for a total of 4GB of DRAM chips. Evga was also working on a dual 285 card, but I'm not sure if they will release it with GT300 right around the corner.

LordEC911
10-01-2009, 11:31 PM
i would like to know which of the listed features will bottleneck this gpu.
ROP throughput?
Also, performance is dependent on clockspeeds, which are very underwhelming at the moment.

Solus Corvus
10-01-2009, 11:32 PM
1. The GTX295 is not two full spec 285s, but it still is the fastest single card.

2. Full spec dual 285 on a single card is perfectly possible. Look at the Asus Mars. Not only is it full spec, but it actually has 2GB of memory per GPU for a total of 4GB of DRAM chips. Evga was also working on a dual 285 card, but I'm not sure if they will release it with GT300 right around the corner.

"Look at the Asus MARS and Evga's possibly aborted project" aren't really good arguments for the likelihood of a full speed GT300, lol.

saaya
10-01-2009, 11:52 PM
Why? Do you honestly believe that it is in nVidia's best business interests to spend millions of dollars on driver development to make their cards work smoothly as a physx card in conjunction with an ATi main graphics card?

yeah, that's a great idea.

... spend tons of money to help out your competitor.... :rolleyes:
what? again they did not, NOT fix something for ati, they BROKE it on purpose...
it worked fine before, then they released a new driver that blocks it...

not supporting it, well, that's their decision; if you want to spread physix as a standard you need to support as many customer configs as possible... but whatever...

but BLOCKING it on purpose... thats lame...
but hey, we know it from sli :D
i guess now that nvidia was forced to unblock sli, they are probably looking for other things to block instead ^^

or maybe they actually think they can do the same as with sli and ask for license fees or money from ati and intel to allow physix on their systems :D

solus corvus, yes, and gt212 seems to be dead too, otherwise there wouldn't have been mars or matrix or evga beefed up cards...
so gt212 (40nm gt200) is definitely cancelled and gt300 delayed, meh :/

DilTech
10-02-2009, 12:04 AM
Saaya, are we sure it worked fine before? Multiple vendors' drivers generally don't play nicely together, which usually causes all kinds of stability issues. Could you imagine the sheer number of non-computer-knowledgeable people calling NVidia complaining when the 2 drivers together cause graphical corruption and they blame NVidia because before they added the geforce card for physx the system worked fine? You worked in CS before, you know it would happen.

If we have a completely impartial judge to test this out, see if it worked without issue before in all physx enabled titles, then we can make that call. I'd actually be interested in seeing those numbers myself!

tdream
10-02-2009, 12:30 AM
Saaya, are we sure it worked fine before? Multiple vendors drivers generally don't play nicely together, which usually causes all kinds of stability issues. Could you imagine the sheer number of non-computer knowledgable people calling NVidia complaining when the 2 drivers together cause graphical corruption and they blame NVidia because before they added the geforce card for physx the system worked fine? You worked in CS before, you know it would happen.

If we have a completely impartial judge to test this out, see if it worked without issue before in all physx enabled titles, then we can make that call. I'd actually be interested in seeing those numbers myself!
Yes it did clearly work before. Nvidia just blocked it with updated drivers. :down:

http://www.rage3d.com/board/showpost.php?p=1336030431&postcount=628


I have just got a GeForce GTS 250 as dedicated PhysX card and my 4870x2 for rendering, I have run the game benchmark with those results:

- Drivers: Catalyst 9.9 & ForceWare 185.68
- Resolution: 1920x1200
- Vsync: On
- AntiAliasing: 4x with AAA forced via CCC
- Anisotropic: 16x
- PhysX: High
- Minimum: 30
- Average: 58
- Maximum: 60

It's really cool to see the ATI and nVidia cards working together. PhysX are cool in this game, I have seen banners, smoke and flying papers, it adds a bit of atmosphere to the game.

Farinorco
10-02-2009, 12:32 AM
~2x as fast is the BEST case scenario. Shaders are being doubled but completely overhauled, which should bring IPC gains, but not necessarily.
Also there are many other parts of GF100 that could possibly bottleneck the architecture.

I'm with LordEC911 on that one. Looking at the new arch, very few things make me think the 3D rendering performance increase will go beyond the increase in processing unit count and/or clocks.

With a 113% CP increase, a 50% ROP increase, an unknown TMU increase, and a 60% memory bandwidth increase, it seems highly unlikely that a 100% real world performance increase is a worst case scenario. It's more like a best case scenario, for a completely shader-bottlenecked situation.

I think we will see something similar to last generation, maybe slightly better for NVIDIA performance-wise (I would say a 20-30% performance advantage for the higher end GTX380, and a very small advantage for the GTX360), but even worse price-wise (competing with a product that will be 3-4 months old and able to cut prices after months of selling; and remember that GT300 is a much more expensive chip than RV870).
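
To put the "doubling specs isn't doubling performance" argument in concrete terms, here is a deliberately naive bottleneck sketch (Python) using the percentage increases quoted above; it illustrates the bounds only, and real GPUs are not a simple min() over three resources:

```python
# Deliberately crude bounds from the spec ratios quoted above (all rumour-derived).
scaling = {"shaders": 2.13, "rops": 1.50, "bandwidth": 1.60}

best_case = scaling["shaders"]       # a purely shader-bound workload
weak_link = min(scaling.values())    # a workload bound entirely by the weakest listed resource

print("shader-bound ceiling: %.2fx" % best_case)
print("weakest-link floor:   %.2fx" % weak_link)
# Per-game results land somewhere in between, plus whatever per-unit efficiency
# the new architecture actually delivers; that part is the open question here.
```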

DilTech
10-02-2009, 12:47 AM
Yes it did clearly work before. Nvidia just blocked it with updated drivers. :down:

http://www.rage3d.com/board/showpost.php?p=1336030431&postcount=628

That's why I asked, rather than saying it didn't. :up:

I'm not on windows 7 and I don't have a dual pci-e slotted mobo.

If that truly is the case, next time I talk to our NV rep I'll see if he can give me an answer for what their reasoning is with this one. They usually don't talk to us in PR speak. :yepp:

gamervivek
10-02-2009, 01:04 AM
It is VERY strange to see all those people who fiercely DEFEND a greedy, multi billion dollar corporation that IS OBVIOUSLY PLAYING DIRTY TRICKS not only on its competitor, but on YOU - CUSTOMERS as well.

Think, god damn it!

Is it fanboyism? Or stupidity? Or blind patriotism or something? Damn...

naivete i say. Wait, isn't this the G300 thread?

Xoulz
10-02-2009, 01:55 AM
True. I'm referring to the people who run around like a chicken with its head cut off screaming that the GT300 is going to suck for games and it will be beaten by RV870. Honestly, in a WORST case scenario, it will be roughly twice as fast as the GTX285, which will trump a 5870 easily.

That^^ is still to be determined! Most of what's new architecturally is indeed for the CUDA end of Nvidia's expansion into the Scientific Community. So, given Anand's comments, the GT300 could certainly debut @ only 2X the 285's performance! :down:


Meaning it's on par with a HD5870 1GB ~ :comp10: then, think price?

DilTech
10-02-2009, 02:19 AM
That^^ is still to be determined! Most of whats new architecturally, is indeed for the CUDA end of Nvidia's expansion into the Scientific Community. So, given Anand's comments, the GT300 could certainly debut @ only 2X the 285's performance! :down:


Meaning it on par with a HD5870 1GB ~ :comp10: then, think price?

2x GTX 285 performance(not sli, but DOUBLE) would be better than "par" with the 5870...

Of course, no one knows the full performance on these cards, as been stated, but I heavily doubt it'll be less than double the GTX285(and again, NOT sli, but total double).

zanzabar
10-02-2009, 02:30 AM
2x GTX 285 performance(not sli, but DOUBLE) would be better than "par" with the 5870...

Of course, no one knows the full performance on these cards, as been stated, but I heavily doubt it'll be less than double the GTX285(and again, NOT sli, but total double).

it all depends where it's clocked. the 5870 is about 2x the 280, and we haven't seen a large review of the 5870 overclocked; if it clocks and scales like the 4890 it will have a huge gain compared to the 300, if that clocks and scales like the 285. and price/watt will also be interesting; it looks like 4 oced 5870s will go against 2-3 300's in wattage and price.

it's going to be interesting to see performance and OpenCL now that Khronos is finally validating drivers; we should finally get some pro grade software (although i'm not sure what it will do for consumers)

saaya
10-02-2009, 02:42 AM
That's why I asked, rather than saying it didn't. :up:

I'm not on windows 7 and I don't have a dual pci-e slotted mobo.

If that truly is the case, next time I talk to our NV rep I'll see if he can give me an answer for what their reasoning is with this one. They usually don't talk to us in PR speak. :yepp:

this whole thing started getting attention when a customer actually contacted nv tech support about physix no longer working in his system equipped with an ati vga.

he got an official nvidia reply after a week that said it was a corporate business strategy decision or something along those lines...

youll probably get the same reply... :/

i dont think this is a big deal cause i dont expect a lot of people to actually use an ati vga and an nvidia vga for physix... but even then it clearly shows what kind of business practices nvidia follows... still nothing compared to what apple does, but not exactly fair-play and customer oriented...

oh and guys, i think it makes no sense to argue over gt300 perf right now...
we have no idea what clockspeeds range itll be able to hit... id say sit back and wait for some game devs and gpu gurus to read their way through the nvidia whitepapers and infos from nvidia, and we will have some pretty good guesses within a few weeks :)

Farinorco
10-02-2009, 04:00 AM
2x GTX 285 performance(not sli, but DOUBLE) would be better than "par" with the 5870...

Of course, no one knows the full performance on these cards, as been stated, but I heavily doubt it'll be less than double the GTX285(and again, NOT sli, but total double).

Yeah but... why do you expect it to have 2x the performance (I suppose you're talking about real world performance) if it's going to have +113% more CPs but only +50% more ROPs and +60% more mem bandwidth...

Consider that the HD5870 is exactly double the HD4890 (+100% everything at the same clocks) except bandwidth (approx. +30%) and it's far from double the real world performance (that's one of the most recent proofs that doubling everything doesn't mean doubling real world performance), and NVIDIA is not even doubling processing units.

Can they improve the performance per unit and per clock? Sure. Maybe. But how much and why? I think it's way too soon, with the info we have, to say it's going to be 2x the real world performance of a GTX285. I would even say I hugely doubt it, given that they are more focused on getting into the new (future?) HPC market before Intel has Larrabee working (if that happens to be in this century).

annihilat0r
10-02-2009, 04:09 AM
Yeah but... why do you expect it to have 2x the performance (I'm suppose you're talking about real world performance) if it's going to have +113% CPs more but only +50% ROP more, +60% mem bandwidth more...

Consider that HD5870 is exactly double the HD4890 (+100% everything at the same clocks) except bandwidth (aprox. +30%) and it's far from double the real world performance (that's one of the most recent proves that doubling everything doesn't mean doubling real world performance), and NVIDIA is not even doubling processing units.

Can they improve the performance per unit and per clock? Sure. Maybe. But how much and why, I think is way soon with the info we have to say it's going to be 2x real world performance of a GTX285. I even would say I hugely doubt it, given that they are more focused in get the new (future?) HPC market before Intel has their Larrabee working (if it happens to be on this century).

You're being almost dogmatic with this post. AMD and Nvidia are completely different brands with completely different chips. Saying "amd doubled everything but didn't double the performance, so nvidia can't double 285's performance" is less worthy than not saying anything, which most people in this thread should do.

DilTech
10-02-2009, 04:20 AM
The 5870 is pretty much a 4870 doubled up with a little faster ram. As we all know, architectures do reach their ceilings.

I will remind you, however, that the 4870 had the same amount of rops as the 3870, but they improved how well they handle(ipc if you will). The 4870 wasn't double the 3870 in specifications. 4870 was easily 2x as fast as the 3870 when AA was enabled. The 9800GTX has less rops than the 8800GTX but for the most part was faster(even if only mildly), and also had less memory bandwidth. The only reason the 5870 isn't 2x the 4870 is because AMD are reaching the point of diminishing returns with their architecture. See, they still count on multi-way shaders, which we do know is very hard to get them all working at the same time. NVidia go for simpler shaders which is why, even with less of them, they've had no problem competing. Yes, the shaders run at a faster clock speed, but when you're pitting 240 up against 800, and winning, it's pretty telling about who's more efficient.

This is also why in some titles you'll see the 4890 so close to the 5850, even though the 5850 should destroy that card as they're the same architecture but the 5850 has much better specs. Some titles just don't play nicely with ATi's shader design, but the ones that do FLY on it.

In NVidia's case, they even specifically stated in the article they were disappointed with the shader efficiency with the GTX-280, which is a tell-tale sign that this part is to be a lot more efficient in its shader use. Now, you increase shader efficiency and OVER double them and tell me what happens, along with more rops, a lot more memory bandwidth... There's a big reason why I say if it's less than double the performance I'll be in shock.
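
The "240 vs 800 shaders" comparison is easier to see in peak-FLOPS terms. A small sketch (Python) with the published clock speeds; the flops-per-clock factors are the usual theoretical counting, which is generous to both sides:

```python
# Peak single-precision throughput from published specs:
# GFLOPS = shader count * flops per clock * shader clock (GHz)
gtx285_mad     = 240 * 2 * 1.476   # ~709 GFLOPS counting only the MAD unit
gtx285_mad_mul = 240 * 3 * 1.476   # ~1063 GFLOPS on paper with the co-issued MUL
hd4890         = 800 * 2 * 0.850   # ~1360 GFLOPS, but the VLIW5 units need 5-wide ILP to fill

print("GTX285 (MAD only):  %.0f GFLOPS" % gtx285_mad)
print("GTX285 (MAD + MUL): %.0f GFLOPS" % gtx285_mad_mul)
print("HD4890:             %.0f GFLOPS" % hd4890)
# The gap between paper FLOPS and delivered game performance is exactly the
# "efficiency" point being argued above.
```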

jfromeo
10-02-2009, 04:27 AM
I find Farinorco's post pretty logical.

He isn't stating that nVidia can't double the 285's real performance. He is just saying that doubling the specs (VS, PS, TMUs, ROPs...) while maintaining clocks (plus, mem clock was increased +225MHz) on a similar architecture doesn't lead to double the real world performance, as we have seen with the HD5870 and HD4890.

DilTech
10-02-2009, 04:28 AM
jfro, he's forgetting one thing though... The GT300 is a completely new architecture, that's been confirmed. Based loosely on the G80, but it is a new architecture.

4870 to 5870 was just a doubled up chip with DX11 added to it.

Farinorco
10-02-2009, 05:34 AM
You're being almost dogmatic with this post. AMD and Nvidia are completely different brands with completely different chips. Saying "amd doubled everything but didn't double the performance, so nvidia can't double 285's performance" is less worthy than not saying anything, which most people in this thread should do.

:confused:

Who has said what you say that I've said? Read again, please ;)


The 5870 is pretty much a 4870 doubled up with a little faster ram. As we all know, architectures do reach their ceilings.

I will remind you, however, that the 4870 had the same amount of rops as the 3870, but they improved how well they handle(ipc if you will). The 4870 wasn't double the 3870 in specifications. 4870 was easily 2x as fast as the 3870 when AA was enabled. The 9800GTX has less rops than the 8800GTX but for the most part was faster(even if only mildly), and also had less memory bandwidth. The only reason the 5870 isn't 2x the 4870 is because AMD are reaching the point of diminishing returns with their architecture. See, they still count on multi-way shaders, which we do know is very hard to get them all working at the same time. NVidia go for simpler shaders which is why, even with less of them, they've had no problem competing. Yes, the shaders run at a faster clock speed, but when you're pitting 240 up against 800, and winning, it's pretty telling about who's more efficient.

This is also why in some titles you'll see the 4890 so close to the 5850, even though the 5850 should destroy that card as they're the same architecture but the 5850 has much better specs. Some titles just don't play nicely with ATi's shader design, but the ones that do FLY on it.

In NVidia's case, they even specifically stated in the article they were disappointed with the shader efficiency with the GTX-280, which is a tell-tale sign that this part is to be a lot more efficient in it's shader use. Now, you increase shader efficiency and OVER double them and tell me what happens, along with more rops, a lot more memory bandwidth... There's a big reason why I say if it's less than double the performance I'll be in shock.

The HD4870 had 2.5x the SPs, 2.5x the TUs, and near 2x the memory bandwidth of the HD3870, and even then, it was only about double in MSAA scenarios, with completely reworked AA logic that addressed the great fault of the HD3870 (AA badly sank the performance of the HD2000 and 3000 series), so I don't find it to be the best example.

Of course, doubling specs is doubling potential performance, but that doesn't happen every time. Indeed, it only happens in best case situations. That's why I say that doubling specs is not equal to doubling performance in real world cases (as a whole; there are always concrete cases).

I understand that there may be other changes (mainly architectural changes) that may affect performance. I only say that I wouldn't take for granted that there are going to be so many changes as to make such a drastic architectural improvement, based on what we know right now about the new architecture. I'm not saying it's not going to happen. If it does, great for everybody :yepp:


jfro, he's forgetting one thing though... The GT300 is a completely new architecture, that's been confirmed. Based loosely on the G80, but it is a new architecture.

4870 to 5870 was just a doubled up chip with DX11 added to it.

I'm not forgetting that. I'm only making my own interpretation of what the words "a completely new architecture based on the previous one" mean from the lips of a hardware vendor: an evolution of the previous one, i.e. taking the previous one and applying some architectural changes and improvements. There are going to be architectural changes, because there are several changes announced on the GPGPU side. That alone would be enough to justify the "completely new architecture based on the previous one" thing.

Am I saying that there are not going to be per unit and per clock improvements in 3D rendering performance? Nope. I'm only saying that I wouldn't take it for granted based solely on these words. If I didn't allow for the possibility of that happening, I would say it certainly isn't going to be 2x real world performance, and I'm not saying that, again.

GT200 was said to be "a completely new architecture based on the G80 one" also. And it was, if you understand it as an evolution of the previous one, mainly in the GPGPU aspect.

annihilat0r
10-02-2009, 06:34 AM
NEWS NEWS NEWS NEWS NEWS

Some guy at the Beyond3D forum got to talk to some Nvidia officials about Fermi; a few important points:

- The guys said that what Jensen was holding in his hands at the presentation was a production mockup. However, they do have working silicon
- Ships in '09
- Gaming performance is still mysterious but he said that some guy told him it was about 1.6-1.8x of GTX 285.

DilTech
10-02-2009, 06:46 AM
Seeing as how this thread has the most info on the card compared to the rest, it's now the official thread. Please keep all GT300 info in this location.

Thanks in advance guys.

mibo
10-02-2009, 06:50 AM
@annihilat0r
I really don't know why you are worrying about performance.
Nvidia will release a card that beats the 5870. This is their goal!! They will push the frequencies and voltages to get to this performance.

Elfear
10-02-2009, 06:52 AM
NEWS NEWS NEWS NEWS NEWS

Some guy at the Beyond3D forum got to talk to some Nvidia officials about Fermi, two important points:

- They guys told that what Jensen was holding in his hands at the presentation was a production mockup. However, they do have silicon working
- Ships in '09
- Gaming performance is still mysterious but he said that some guy told him it was about 1.6-1.8x of GTX 285.

So ~SLI GTX 285 performance it sounds like. That sounds about right judging from previous generations. New gen is about equal to old gen multi-gpu. Hopefully the price won't be astronomical.

I'd be very surprised if Fermi ships in '09 unless they mean three cards to Newegg by Christmas time.

Farinorco
10-02-2009, 07:07 AM
NEWS NEWS NEWS NEWS NEWS

Some guy at the Beyond3D forum got to talk to some Nvidia officials about Fermi, two important points:

- They guys told that what Jensen was holding in his hands at the presentation was a production mockup. However, they do have silicon working
- Ships in '09
- Gaming performance is still mysterious but he said that some guy told him it was about 1.6-1.8x of GTX 285.

Great news about shipping dates! I was really :( about them releasing in 2010... let's hope it isn't December 20th :lol:

Bodkin
10-02-2009, 07:08 AM
Great about shipping dates news! I was really :( about they releasing on 2010... let's hope it isn't 20th december :lol:

Yeah, I am praying I can nab one in time for Christmas

spajdr
10-02-2009, 07:14 AM
Die pic for ya
http://i34.tinypic.com/23j1vz7.jpg

Kaldor
10-02-2009, 07:15 AM
Charlie is usually full of :banana::banana::banana::banana:, but he makes some very compelling arguments here:
http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

Bodkin
10-02-2009, 07:15 AM
What's with the weird colours?

DilTech
10-02-2009, 07:27 AM
Charlie is usually full of :banana::banana::banana::banana:, but he makes some very compelling arguments here:
http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

Charlie made one big error, his statement that A1 is the first revision...

Anyone who knows a thing or two about chips knows A0 is the first revision. Anyone remember the Q6600 update, the G0(that's g zero)? If Charlie was correct in that assumption then that famous stepping would have been G1, not G0. These aren't counting the prototype samples which are just to test the features without making the full blown chip.

As such, kind of blows his argument clean out of the water in that regard, doesn't it?

Roger_D25
10-02-2009, 07:28 AM
Thanks Kaldor, that was a very interesting article about the Fermi sample shown (I'm not an expert but his points seem accurate). I assume that many companies do this type of thing when showing off new hardware?

DilTech
10-02-2009, 07:31 AM
Thanks Kaldor, that was a very interesting article about the Fermi sample shown (I'm not an expert but his points seem accurate)? I assume that many companies do this type of thing when showing off new hardware?

Mockups are very commonplace amongst all forms of products.

Farinorco
10-02-2009, 07:37 AM
Charlie is usually full of :banana::banana::banana::banana:, but he makes some very compelling arguments here:
http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

:ROTF:

The first points, about the serial numbers and dates on the IHS, seem good at defending their own previous writings (excepting the absolutely idiotic one about the "7").

But even if it's absolutely pointless (well, they have shown a "fake" card to decorate the presentation, who cares? It's not like this would mean anything), what I have enjoyed most in the article is the part about the "fake" card. Hey, the way it's written is hilarious... "Those lead to... well, not the power connector", "The 6-pin connector, on the other hand, lines up with, umm, nothing", "Except glue. Notice the connector is black and the hole below it shows white. The only real question now is, Elmers or glue stick"... by that point I was :rofl:

Roger_D25
10-02-2009, 07:40 AM
I thought that might be the case DilTech, thanks! It's probably a discussion for another day (or thread), but should people be upset that Nvidia tried to sell the mockup as the actual Fermi product (in this case they probably should, since it gives the impression they're further along than they actually are)?

Farinorco - I also got a good laugh at the way it's written!

mibo
10-02-2009, 07:45 AM
Charlie made one big error, his statement that A1 is the first revision...

Anyone who knows a thing or two about chips knows A0 is the first revision. Anyone remember the Q6600 update, the G0(that's g zero)? If Charlie was correct in that assumption then that famous stepping would have been G1, not G0. These aren't counting the prototype samples which are just to test the features without making the full blown chip.

As such, kind of blows his argument clean out of the water in that regard, doesn't it?

I know you hate Charlie for being a Nvidia hater, but except for the A0/A1 confusion he is right most of the time.
Didn't he write that a few GT300s might make it in 2009 but real availability will not start before 2010?
And judging by the huge amount of working GT300s that were shown to the audience, his yield numbers might not be that far off...

highoctane
10-02-2009, 07:46 AM
I didn't see the Fermi thread and didn't see this mentioned anywhere yet, looks like Fermi has a chance to gain traction.

http://www.dailytech.com/ORNL+to+Use+NVIDIA+Fermi+to+Build+Next+Gen+Super+Computer/article16401.htm

Jamesrt2004
10-02-2009, 07:48 AM
Charlie made one big error, his statement that A1 is the first revision...

Anyone who knows a thing or two about chips knows A0 is the first revision. Anyone remember the Q6600 update, the G0(that's g zero)? If Charlie was correct in that assumption then that famous stepping would have been G1, not G0. These aren't counting the prototype samples which are just to test the features without making the full blown chip.

As such, kind of blows his argument clean out of the water in that regard, doesn't it?

ahh not to say you're wrong or anything, but there was a massive discussion about this some time ago, i remember vividly, and apparently Nvidia DO use "A1" as their first revisions :)


im not in the "know" to be 100% but i remember a argument here a while back :D

Farinorco
10-02-2009, 07:57 AM
ahh not to say your wrong or anythign but there was a massive discussion about this some time ago i remeber vividly and apparently Nvidia DO use "A1" as there first revisions :)


im not in the "know" to be 100% but i remember a argument here a while back :D

I didn't want to say anything, because I am more or less in the same situation as you, but I have thought exactly the same.

I don't know though. I don't have any kind of knowledge about this, so I prefer to stay away. I'm only mentioning it because I have thought the same, and I was (and am) unsure, and I was sort of relieved when I read your post ("hey, man, I'm not crazy, that has existed" :D)...

DilTech
10-02-2009, 08:11 AM
ahh not to say your wrong or anythign but there was a massive discussion about this some time ago i remeber vividly and apparently Nvidia DO use "A1" as there first revisions :)


im not in the "know" to be 100% but i remember a argument here a while back :D

I can tell you outright that's not the case... The original NV15 chip (GF2 GTS IIRC) for OEMs was Rev A0. If you don't believe me just run a quick search through this webpage and you'll find the info listed for you...

http://forums.gentoo.org/viewtopic-t-819-start-125.html

Same here, the gainward driver for that specific chip
http://www.givemefile.net/drivers/video/gainward.html

The reason so many think NV start at A1 is because almost never is the A0 perfect, and as such there's usually 1 to 2 revisions before it's ready for the public. As such most people never know they exist. :up:

AVB
10-02-2009, 08:19 AM
pic for ya

http://img525.imageshack.us/img525/4388/59921299.jpg (http://img525.imageshack.us/i/59921299.jpg/)


http://rs648.rapidshare.com/files/287810173/Tesla_Fermi_Key_Visual.jpg ( res. 6316 x 3240)

Fermi: 1.4-1.6x of GTX 295.

(1.6-1.8x of GTX 285 is not too much)
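
Those two multipliers use different baselines, so they are worth converting before comparing. A quick sketch (Python), where the GTX295-versus-GTX285 factor is my own rough assumption rather than a measured average:

```python
# Convert "x times a GTX295" into GTX285 terms so the two rumours can be compared.
# The GTX295-over-GTX285 factor below is an assumption; it varies a lot by game
# and AA level, so treat the output as ballpark only.
assumed_295_over_285 = 1.35

for fermi_over_295 in (1.4, 1.6):
    print("%.1fx of a GTX295 is roughly %.2fx of a GTX285"
          % (fermi_over_295, fermi_over_295 * assumed_295_over_285))
```

On that assumption, 1.4-1.6x of a GTX295 works out to roughly 1.9-2.2x of a GTX285, noticeably above the 1.6-1.8x figure quoted earlier in the thread, so the rumours don't line up neatly.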