ISA???
http://content.answers.com/main/cont...E/_ISA8_16.GIF
I love how no one noticed one thing in that picture... it only needs one 8-pin power connector.
That stuff is great for GPGPU, but I'm not sure it will help much with 3D. I think that clock for clock it will be about the same as two GT200s, maybe +20% or so, since it's not SLI and has a few more shaders. What will make a huge difference is GPGPU: they can now properly handle 64-bit floating point, and that was about the only point where ATI Stream was better than CUDA. At this point, though, I have seen nothing that needs a GPGPU for personal use; sure, there's folding/crunching on the GPU, and encoding, but encoding works on anything with OpenCL and seems I/O-limited to me.
So I'm just waiting for numbers, but it looks like it will edge out the 5890 while drawing more power, costing a lot more, and not scaling as well, so it will all even out just like last gen. I'm not saying the GT300 is bad, just that I don't see it being revolutionary. With the 8+6-pin connectors it will be above 225W; I'd expect it to be near the 300W maximum, since the 40nm node doesn't seem to drop wattage much, and the added cache, more than double the shaders, and less wait time from the improved command queueing will lead to a big jump in power draw if it all works right.
It looks like it has an 8-pin and a 6-pin, one on each side; one 8-pin alone could put you at 225W (150W from the 8-pin plus 75W from the slot).
Edit: it looks like just one 8-pin, but it also says Tesla, so that's not the GeForce people want, and only one DVI.
I had been looking at this and thought I saw a 6 and an 8:
http://www.xtremesystems.org/forums/...&postcount=124
Edit 2: there is an 8-pin and a 6-pin, for a 300W max (quick connector math below the link):
http://www.bit-tech.net/news/hardwar...ard-pictured/1
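For reference, a quick sanity check on those connector figures, assuming the standard PCIe limits (75W from the slot, 75W per 6-pin, 150W per 8-pin); this is just the spec ceiling, not measured board draw:
Code:
# Rough PCIe power-budget math for the connector configurations discussed above.
# Limits per the PCIe spec: slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
PCIE_LIMITS_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_ceiling(connectors):
    """Maximum spec power for a card using the PCIe slot plus the given connectors."""
    return PCIE_LIMITS_W["slot"] + sum(PCIE_LIMITS_W[c] for c in connectors)

print(board_power_ceiling(["8-pin"]))           # 225 W - single 8-pin board
print(board_power_ceiling(["8-pin", "6-pin"]))  # 300 W - 8+6 pin board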
Circle the 6 for me, because I honestly am not seeing it...
It's always good to have something to show in your hand during a presentation.
Quote:
Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
:up:
ps:
Actually it seems that AMD's R&D budget is twice Nvidia's, but then again AMD makes both CPUs and GPUs.
http://www.bit-tech.net/news/hardwar...ard-pictured/1
Quote:
Update: There's also a six pin connector on the board as well.
The way I see it, 3D rendering is hitting a brick wall. It has gotten to the point where the real noticeable differences in rendering take such incredible amounts of power to materialize that, to push games further graphically, developers have to focus in a new direction. I believe that direction is physics and AI. The thing about real-time graphical physics is that, to be efficient, it all has to be done on the GPU. There are many reasons for this, but probably the biggest and most obvious are overtaxing the PCI-E bus and the delay of computing on the CPU and then transferring the results to the GPU for rendering (see the back-of-the-envelope sketch after this post). I see an internal unified memory architecture on a GPU as a HUGE step in the right direction for keeping physics on the GPU and allowing it to get much more complicated. One of the biggest hurdles I see for the future of physics is having enough memory on the GPU to both render and run a physics program at the same time.
On a secondary note, for people who do things like video encoding, the GT300 offers a ton of excitement because of how much money it used to cost to reach this level of computational power. I can see the GT300 really cutting into mainstream workstation tasks in general, to the point where people don't need to invest in multiprocessor systems nearly as much.
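As a rough illustration of the PCI-E argument above: the bandwidth figures and the per-frame state size here are ballpark assumptions for the hardware of the time (PCIe 2.0 x16 around 8 GB/s, on-board GDDR on a high-end card around 120 GB/s), not measurements.
Code:
# Back-of-the-envelope: cost of moving physics state over PCIe each frame
# versus keeping it resident in on-board GPU memory.
PCIE_BW_GBS = 8.0    # assumed PCIe 2.0 x16 effective bandwidth
VRAM_BW_GBS = 120.0  # assumed high-end card memory bandwidth

def transfer_ms(megabytes, bandwidth_gbs):
    """Milliseconds to move `megabytes` at the given bandwidth (GB/s)."""
    return (megabytes / 1024.0) / bandwidth_gbs * 1000.0

state_mb = 64  # hypothetical per-frame physics state
print(f"Over PCIe:   {transfer_ms(state_mb, PCIE_BW_GBS):.2f} ms per frame")  # ~7.8 ms
print(f"In GPU VRAM: {transfer_ms(state_mb, VRAM_BW_GBS):.2f} ms per frame")  # ~0.5 ms

# At 60 fps you only have ~16.7 ms per frame, so the PCIe round trip alone
# eats a big slice of the budget - which is the argument for keeping
# physics resident on the GPU.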
Read this Nvidia blog post (particularly the 8/24/09 entry). NV says their latest chips cost $1 billion in R&D and take 3-4 years to make.
2003 was a very different time compared to now. That was when the 9700 Pro was strong, and it was during AMD's prime.
It doesn't take a genius to know the years before RV7xx were really bad, and the RV770 generation hasn't been that profitable, especially with AMD itself so starved for cash.
NV net income for 2007 was $800 million, for 2006 it was $450 million, and for 2005 it was more than $200 million (Google, Wikipedia, Answers, and Nvidia press releases). 2009 hasn't been peachy (2008 was still a profitable year, although not very profitable compared to earlier years).
http://seekingalpha.com/article/1549...uly-09-quarter
Since April 2007 NV has typically spent $150-219 million a quarter on research, and you know it's mostly on GPUs, AbelJemka.
OK, I searched like you did, and I found real numbers!
- 2006 AMD R&D: $1.205 billion
- 2006 ATI R&D: $458 million (with $167 million spent in Q1+Q2 '06 and $291 million in Q3+Q4 '06 :shocked:)
- So 2006 AMD+ATI: $1.663 billion
- 2006 Nvidia R&D: $554 million
- 2007 AMD+ATI R&D: $1.847 billion
- 2007 Nvidia R&D: $692 million
- 2008 AMD+ATI R&D: $1.848 billion
- 2008 Nvidia R&D: $856 million
So the numbers don't lie: Nvidia has increased its R&D spending since 2006, but so has AMD+ATI.
You said they've mostly been researching GPUs since 2007, but you seem to forget that since 2007 Nvidia has also been pushing Tesla and CUDA very hard, which must eat up non-negligible resources, and that Nvidia is also promoting Tegra and Ion.
It was a combination number. I also have this photoshopped Radeon:
http://i34.tinypic.com/29fwbom.jpg
More bland though, not as funny.
Tesla and CUDA are part of GPU research and design, so they're related, since they involve making the GPU more powerful. It's obvious from those numbers that NV should be spending substantially more, if the ratio from the 2006 AMD + ATI split means anything.
If we look at those numbers, AMD spent about 11% more from 2006 to 2007 and didn't increase spending at all from 2007 to 2008. Compare that to NV, which spent about 25% more from 2006 to 2007 and another 23.7% more from 2007 to 2008 (quick math in the sketch below).
Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all of that to mature, so they didn't have to spend as much on research getting there.
I can imagine that, since AMD was running the show for the most part, a lot more money went to their CPU side than their GPU side, especially considering how far behind they were during the Conroe years; looking at simple economics, getting that side back to profitability was a lot more important than getting the GPU side going.
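For reference, a quick check of the year-over-year changes using the figures posted earlier in the thread (all in millions of dollars):
Code:
# Year-over-year R&D changes from the numbers quoted above (millions of USD).
rnd = {
    "AMD+ATI": {2006: 1663, 2007: 1847, 2008: 1848},
    "Nvidia":  {2006: 554,  2007: 692,  2008: 856},
}

for company, by_year in rnd.items():
    for year in (2007, 2008):
        prev, cur = by_year[year - 1], by_year[year]
        delta = cur - prev
        pct = delta / prev * 100
        print(f"{company} {year - 1}->{year}: +${delta}M ({pct:.1f}%)")

# AMD+ATI 2006->2007: +$184M (11.1%), 2007->2008: +$1M (0.1%)
# Nvidia  2006->2007: +$138M (24.9%), 2007->2008: +$164M (23.7%)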
You like speculation a lot more than me!
Tesla and CUDA are part of GPU research, but they have a cost: a cost in time or in developers, and either way that costs money.
You use percentages because they suit your argument better, but in raw numbers AMD went up $184 million from 2006 to 2007, while Nvidia went up $138 million.
What did it cost to go to 55nm? You don't know. To 40nm? You don't know. GDDR4 research? The 2900XT launched six months late, in 2007, but was due in 2006, so no impact there; and GDDR4 is basically the same as GDDR3, so not a great deal.
For the AMD part you're playing a guessing game. But AMD's graphics division was the first part of the company to manage a success again, with RV670 and RV770, so that may indicate something.
It doesn't take much assuming to see that CPUs cost more to develop than GPUs, and NV spent a whole lot of money in 2008 for a GPU company.
Similarly, you don't know how much they spent on CUDA or Ion research and development, and yet you use it in your argument.
We have 5 threads about the GT300; they should all be combined into one. More pics:
http://img194.imageshack.us/img194/5690/001238704.jpg
http://img38.imageshack.us/img38/774/001238705.jpg
Look how happy he is :D
http://img38.imageshack.us/img38/909/001238703.jpg
Source: http://www.pcpop.com/doc/0/448/448052.shtml
Why are they not showing the card running in a system? Or did I miss it?
Is it just me, or does that card have only one DVI?
Hah, it would be funny if it were just a GT200 with a custom cooler, hence why they didn't show it running anything.
GT300 looks like a revolutionary product as far as HPC and GPU Computing are concerned. Happy times ahead, for professionals and scientists at least...:)
Regarding the 3D gaming market, though, things are not as optimistic. GT300 performance is rather irrelevant, because Nvidia currently does not have a speedy answer for the discrete budget, mainstream, and lower-performance segments. Price projections aside, the GT300 will take the performance crown and act as a marketing boost for the rest of the product line. Customers in the high-performance and enthusiast markets with brand loyalty towards the greens are locked in anyway. And yes, that's still irrelevant.
I know this is XS and all, but remember, people: the profit and the bulk of the market are in a price segment Nvidia does not even try to address currently. We can only hope that the greens can get something more than GT200 rebrands/respins out for the lower market segments, fast. Ideally, the new architecture should scale down easily. Let's hope for that, or it's definitely rough times ahead for Nvidia, especially if you look closely at the 5850's performance-per-dollar ratio, as well as the Juniper projections. And add in the economic crisis, shifting consumer focus, the gap between the performance software needs and the performance the hardware delivers, the plateau in TFT resolutions, and heat/power consumption concerns.
With AMD getting the whole 5xxx family out of the warehouses in under six months (I think that's a first for the GPU industry, though I might be wrong), the greens are in a rather tight spot at the moment. GT200 respins won't save the round, a GT300 at $500++ won't save the round, and Tesla certainly won't save the round (just look at sales and profit in recent years in the HPC/GPU-computing segments).
Let's hope for the best; it's in our interest as consumers anyway... ;)