
Thread: The official GT300/Fermi Thread

  1. #126
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by zerazax View Post
    I don't see it being 2x G200 either. They keep saying new architecture, but from what I've seen, it's built heavily on G200, just like G200 was built heavily on G80. G200 had nearly 2x the units of G80, but it didn't hit 2x the performance until well after release, when newer games were optimized to take advantage of what G200 had extra.

    However, I think without any hard data on clocks, it's impossible to claim where it will end up. I'm hopeful it's good, but when I hear them admit that it's been delayed, that's usually not a positive sign.
    512 shaders, GDDR5, a new memory system, a new ISA, and better scheduling won't cut the mustard, huh? The white paper said 1.5GHz is a conservative clock speed, too. This thing is fast.
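    For a rough sense of scale, here is some back-of-the-envelope Python, assuming one FMA (2 FLOPs) per shader per clock and the whitepaper's FP64-at-half-FP32 rate; both are assumptions about the final product, not confirmed specs:

        # Peak-throughput sketch for the rumored configuration.
        # Assumptions (not confirmed specs): one FMA (2 FLOPs) per shader
        # per clock, and FP64 at half the FP32 rate per the whitepaper.
        SHADERS = 512
        SHADER_CLOCK_HZ = 1.5e9  # the "conservative" clock cited above

        fp32_peak = SHADERS * 2 * SHADER_CLOCK_HZ  # FLOP/s
        fp64_peak = fp32_peak / 2

        print(f"FP32 peak: {fp32_peak / 1e12:.2f} TFLOPS")  # ~1.54 TFLOPS
        print(f"FP64 peak: {fp64_peak / 1e9:.0f} GFLOPS")   # 768 GFLOPS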

  2. #127
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    I love how no one noticed one thing in that picture... it only needs one 8-pin power connector.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  3. #128
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Texas
    Posts
    1,663
    Quote Originally Posted by DilTech View Post
    I love how no one noticed one thing in that picture... it only needs one 8-pin power connector.
    Good catch. With a uarch like that it probably has some pretty aggressive power saving features. Nvidia probably has a winner if they can get the price right.
    Core i7 2600K@4.6Ghz| 16GB G.Skill@2133Mhz 9-11-10-28-38 1.65v| ASUS P8Z77-V PRO | Corsair 750i PSU | ASUS GTX 980 OC | Xonar DSX | Samsung 840 Pro 128GB |A bunch of HDDs and terabytes | Oculus Rift w/ touch | ASUS 24" 144Hz G-sync monitor

    Quote Originally Posted by phelan1777 View Post
    Hail fellow warrior albeit a surat Mercenary. I Hail to you from the Clans, Ghost Bear that is (Yes freebirth we still do and shall always view mercenaries with great disdain!) I have long been an honorable warrior of the mighty Warden Clan Ghost Bear the honorable Bekker surname. I salute your tenacity to show your freebirth sibkin their ignorance!

  4. #129
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by Chumbucket843 View Post
    512 shaders, GDDR5, a new memory system, a new ISA, and better scheduling won't cut the mustard, huh? The white paper said 1.5GHz is a conservative clock speed, too. This thing is fast.
    That stuff is great for GPGPU, but I'm not sure it will help with 3D. I think that clock for clock it will be about the same as 2x G200, maybe +20% or so, since it's not SLI and has a few more shaders. What will make a huge difference is GPGPU: they can now properly handle 64-bit floating point, and that was about the only point where ATI Stream was better than CUDA. But at this point I have seen nothing that needs GPGPU for personal use; sure, there is folding/crunching on the GPU, and encoding, but encoding works on everything with OpenCL and seems IO-limited to me.

    So I'm just waiting for numbers, but it looks like it will edge out the 5890 while drawing more power, costing a lot more, and not scaling as well, so it will all be the same, just like last gen. I'm not saying the GT300 is bad, just that I don't see it being revolutionary. And with the 8+6-pin connectors it will be above 225W; I would expect it to be near the 300W max, since the 40nm node doesn't seem to drop wattage much, and the added cache, more than double the shaders, and less wait time from the improved command queue will lead to a huge jump in power if it all works right.

    Quote Originally Posted by DilTech View Post
    I love how no one noticed one thing in that picture... it only needs one 8-pin power connector.
    It looks like it has an 8-pin and a 6-pin, one on each side, and one 8-pin could put you at 225W: 150W from the 8-pin plus 75W from the slot.

    Edit: it looks like just one 8-pin, but it also says Tesla, so that's not the GeForce people want, and there's only 1 DVI.
    I had been looking at this and thought I saw a 6-pin and an 8-pin:
    http://www.xtremesystems.org/forums/...&postcount=124

    Edit 2: there is an 8-pin and a 6-pin, for 300W max:
    http://www.bit-tech.net/news/hardwar...ard-pictured/1
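    For reference, here is the connector math being worked out above, sketched in Python (limits per the PCI Express spec: 75W from the slot, 75W per 6-pin, 150W per 8-pin):

        # Spec-compliant power ceiling for the connector loadouts above.
        SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

        def board_power_ceiling(six_pins: int, eight_pins: int) -> int:
            """Max spec draw in watts for a given connector loadout."""
            return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

        print(board_power_ceiling(0, 1))  # one 8-pin only -> 225
        print(board_power_ceiling(1, 1))  # 8-pin + 6-pin  -> 300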
    Last edited by zanzabar; 09-30-2009 at 05:41 PM.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  5. #130
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Circle the 6 for me, because I honestly am not seeing it...

  6. #131
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    Quote Originally Posted by xbrian88 View Post

    Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
    It's always good to have something to show in your hand during a presentation.


    ps:
    Quote Originally Posted by tajoh111 View Post

    AMD's R&D budget is tiny compared to Intel's and NV's (especially Intel's)
    Actually, it seems that AMD's R&D is twice Nvidia's, but then AMD makes both CPUs and GPUs.

  7. #132
    Registered User
    Join Date
    Apr 2005
    Posts
    34
    Quote Originally Posted by DilTech View Post
    I love how no one noticed one thing in that picture... it only needs one 8-pin power connector.
    Update: there's also a six-pin connector on the board:
    http://www.bit-tech.net/news/hardwar...ard-pictured/1

  8. #133
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by AbelJemka View Post
    It's always good to have something to show in your hand during a presentation.

    ps:

    Actually, it seems that AMD's R&D is twice Nvidia's, but then AMD makes both CPUs and GPUs.
    Obviously I was talking about GPU only.

  9. #134
    Xtreme Member
    Join Date
    Mar 2008
    Location
    utah ogden
    Posts
    110
    The way I see it, 3D rendering is hitting a brick wall. It has gotten to a point where real, noticeable differences in rendering take such incredible amounts of power to materialize that to push games further graphically, developers have to focus in a new direction. I believe this direction is physics and AI. The thing about real-time graphical physics is that to be efficient, it all has to be done on the GPU. There are many reasons for this, but probably the biggest and most obvious are overtaxing the PCI-E bus and the delay of having the work done on the CPU and then transferred over to the GPU for rendering. I see an internal unified memory architecture on a GPU as a HUGE step in the right direction for keeping physics on the GPU and allowing it to get much more complicated. One of the biggest hurdles I see for the future of physics is having enough memory on the GPU to both render and run a physics program at the same time.
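    To put rough numbers on the bus argument, here is an illustrative sketch; the bandwidth figures and the 64 MB state size are assumptions typical of the era, not anything from the post (PCIe 2.0 x16 at ~8 GB/s per direction, high-end GDDR5 boards well above 100 GB/s):

        # Cost of shipping hypothetical per-frame physics state over PCIe
        # versus touching it in GPU memory. All figures are assumptions.
        STATE_GB = 0.0625    # 64 MB of physics state (hypothetical)
        PCIE_GBPS = 8.0      # PCIe 2.0 x16, one direction
        VRAM_GBPS = 120.0    # high-end GDDR5 board memory

        pcie_ms = STATE_GB / PCIE_GBPS * 1000
        vram_ms = STATE_GB / VRAM_GBPS * 1000

        print(f"over PCIe: {pcie_ms:.2f} ms")  # ~7.81 ms, half a 60 fps frame
        print(f"in VRAM:   {vram_ms:.2f} ms")  # ~0.52 ms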

    On a secondary note, for people who do things like video encoding, the GT300 offers a ton of excitement because of the crazy amount of money it used to cost to reach the level of computational power it offers. I can see the GT300 really cutting into mainstream workstation tasks in general, making it so people don't need to invest in multiprocessor systems nearly as much.

  10. #135
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    Quote Originally Posted by tajoh111 View Post
    Obviously I was talking about GPU only.
    There are no numbers for how R&D is split within AMD.
    But at the same time, Nvidia doesn't only make GPUs, so I guess not all of that R&D goes into GeForce.
    In 2003, ATI's R&D was around $50 million and Nvidia's around $60 million per quarter.

  11. #136
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by AbelJemka View Post
    There are no numbers for how R&D is split within AMD.
    But at the same time, Nvidia doesn't only make GPUs, so I guess not all of that R&D goes into GeForce.
    In 2003, ATI's R&D was around $50 million and Nvidia's around $60 million per quarter.
    Read the Nvidia blog (particularly the 8/24/09 post). NV says their latest chips cost $1 billion in R&D and take 3-4 years to make.

    2003 was a very different time compared to now. That was when the 9700 Pro was strong, during AMD's prime.

    It doesn't take a genius to know the years before RV7xx were really bad, and the RV770 generation hasn't been that profitable, especially with AMD itself so starved for cash.

    NV's net income for 2007 was $800 million, for 2006 it was $450 million, and for 2005 it was more than $200 million (Google, Wikipedia, Answers, and Nvidia press releases). 2009 hasn't been peachy (2008 was still a profitable year, although not very profitable compared to earlier years).

    http://seekingalpha.com/article/1549...uly-09-quarter

    Since April 2007 NV has typically spent $150-219 million a quarter on research, and you know it's mostly on GPUs, AbelJemka.
    Last edited by tajoh111; 09-30-2009 at 06:56 PM.

  12. #137
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    OK, I searched like you did and found real numbers!
    -2006 AMD R&D: $1.205 billion
    -2006 ATI R&D: $458 million
    (with $167 million spent in Q1'06+Q2'06 and $291 million in Q3'06+Q4'06)
    -So 2006 AMD+ATI: $1.663 billion
    -2006 Nvidia R&D: $554 million

    -2007 AMD+ATI R&D: $1.847 billion
    -2007 Nvidia R&D: $692 million

    -2008 AMD+ATI R&D: $1.848 billion
    -2008 Nvidia R&D: $856 million

    So the numbers don't lie: Nvidia has increased its R&D spending since 2006, but so has AMD+ATI.

    You said they have mostly been researching GPUs since 2007, but you seem to forget that since 2007 Nvidia has been pushing Tesla and CUDA very hard, so those must eat up non-negligible resources, and that Nvidia is also promoting Tegra and Ion.

  13. #138
    Xtreme Enthusiast
    Join Date
    Oct 2008
    Posts
    547
    Quote Originally Posted by jaredpace View Post
    I'm supposed to laugh at the size of the GTX300 in there, but I find the photochop of the Radeon and the small size of the case funnier. Maybe it's just that my cases tend to be huge.

    Am I alone? :rolling:

  14. #139
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
    It was a combination of the two. I also have this photoshopped Radeon:



    More bland though, not as funny.
    Bring... bring the amber lamps.

  15. #140
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by AbelJemka View Post
    OK, I searched like you did and found real numbers!
    -2006 AMD R&D: $1.205 billion
    -2006 ATI R&D: $458 million
    (with $167 million spent in Q1'06+Q2'06 and $291 million in Q3'06+Q4'06)
    -So 2006 AMD+ATI: $1.663 billion
    -2006 Nvidia R&D: $554 million

    -2007 AMD+ATI R&D: $1.847 billion
    -2007 Nvidia R&D: $692 million

    -2008 AMD+ATI R&D: $1.848 billion
    -2008 Nvidia R&D: $856 million

    So the numbers don't lie: Nvidia has increased its R&D spending since 2006, but so has AMD+ATI.

    You said they have mostly been researching GPUs since 2007, but you seem to forget that since 2007 Nvidia has been pushing Tesla and CUDA very hard, so those must eat up non-negligible resources, and that Nvidia is also promoting Tegra and Ion.
    Tesla and CUDA are part of GPU research and design, so they are related, since they involve making the GPU more powerful. It's obvious from those numbers that NV is spending substantially more on GPUs, if the ratio from the 2006 AMD + ATI numbers means anything.

    If we look at those numbers, AMD spent 11% more from 2006 to 2007 and didn't increase spending at all between 2007 and 2008. Compare this to NV, who spent 25% more from 2006 to 2007 and another 23.7% more from 2007 to 2008.

    Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all this to happen, so they didn't have to spend as much getting there.

    Since it was AMD running the show for the most part, I can see a lot more money being spent on their CPU side than their GPU side, especially considering how far behind they were during the Conroe years. Looking at simple economics, getting that side back to profitability was a lot more important than getting the GPU side going.

  16. #141
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by tajoh111 View Post
    Tesla and CUDA are part of GPU research and design, so they are related, since they involve making the GPU more powerful. It's obvious from those numbers that NV is spending substantially more on GPUs, if the ratio from the 2006 AMD + ATI numbers means anything.

    If we look at those numbers, AMD spent 11% more from 2006 to 2007 and didn't increase spending at all between 2007 and 2008. Compare this to NV, who spent 25% more from 2006 to 2007 and another 23.7% more from 2007 to 2008.

    Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all this to happen, so they didn't have to spend as much getting there.

    Since it was AMD running the show for the most part, I can see a lot more money being spent on their CPU side than their GPU side, especially considering how far behind they were during the Conroe years. Looking at simple economics, getting that side back to profitability was a lot more important than getting the GPU side going.
    That is a LOT of assuming going on there...
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  17. #142
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    Quote Originally Posted by tajoh111 View Post
    Tesla and CUDA are part of GPU research and design, so they are related, since they involve making the GPU more powerful. It's obvious from those numbers that NV is spending substantially more on GPUs, if the ratio from the 2006 AMD + ATI numbers means anything.

    If we look at those numbers, AMD spent 11% more from 2006 to 2007 and didn't increase spending at all between 2007 and 2008. Compare this to NV, who spent 25% more from 2006 to 2007 and another 23.7% more from 2007 to 2008.

    Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all this to happen, so they didn't have to spend as much getting there.

    Since it was AMD running the show for the most part, I can see a lot more money being spent on their CPU side than their GPU side, especially considering how far behind they were during the Conroe years. Looking at simple economics, getting that side back to profitability was a lot more important than getting the GPU side going.
    You like speculation a lot more than me!
    Tesla and CUDA are part of GPU research, but they have a cost, a cost in time or in developers, and either way that costs money.

    You use percentages because they suit your purpose better, but in raw numbers, AMD from 2006 to 2007 is +$184 million and Nvidia from 2006 to 2007 is +$138 million.

    What did going to 55nm cost? You don't know. Going to 40nm? You don't know. GDDR4 research? The 2900XT launched six months late, in 2007, but was due in 2006, so no impact. And GDDR4 is basically the same as GDDR3, so not a great deal either.

    For the AMD part, you're playing a guessing game. But AMD's graphics division was the first part of the company to manage some success, with RV670 and RV770. So it may indicate something.

  18. #143
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by AbelJemka View Post
    You like speculation a lot more than me!
    Tesla and CUDA are part of GPU research, but they have a cost, a cost in time or in developers, and either way that costs money.

    You use percentages because they suit your purpose better, but in raw numbers, AMD from 2006 to 2007 is +$184 million and Nvidia from 2006 to 2007 is +$138 million.

    What did going to 55nm cost? You don't know. Going to 40nm? You don't know. GDDR4 research? The 2900XT launched six months late, in 2007, but was due in 2006, so no impact. And GDDR4 is basically the same as GDDR3, so not a great deal either.

    For the AMD part, you're playing a guessing game. But AMD's graphics division was the first part of the company to manage some success, with RV670 and RV770. So it may indicate something.
    It doesn't take much assuming to see that CPUs cost more to develop than GPUs, and that NV spent a whole lot of money in 2008 for a GPU company.

    Similarly, you don't know how much they spent on CUDA or Ion research and development, and yet you put it in your argument.

  19. #144
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    San Diego, CA
    Posts
    1,062
    We have 5 threads about GT300; they should all be combined into one. More pics:



    Look how happy he is



    Source:http://www.pcpop.com/doc/0/448/448052.shtml

    CPU: Core i7-2600K@4.8GHz Mobo: Asus Sabertooth P67 Case: Corsair 700D w/ 800D window
    CPU Cooler: Corsair H70 w/ 2 GT AP-15s GPU: 2x Gigabyte GTX 670 WindForce OC SLI
    RAM: 2x8GB G.Skill Ripjaws PSU: Corsair AX850W Sound card: Asus Xonar DX + Fiio E9
    HDD: Crucial M4 128GB + 4TB HDD Display: 3x 30" Dell UltraSharp 3007WFP-HC
    Speakers: Logitech Z-5500 Headphone: Sennheiser HD650

  20. #145
    Banned
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    707
    Why are they not showing the card running in a system? Or did I miss it?

  21. #146
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Is it me, or does that card have only one DVI port?

  22. #147
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    107
    Hah, it would be funny if it were just a GT200 with some custom cooler; hence why they didn't show anything running on it.

  24. #149
    Xtreme Member
    Join Date
    Dec 2008
    Location
    Athens ~ Greece
    Posts
    119
    GT300 looks like a revolutionary product as far as HPC and GPU Computing are concerned. Happy times ahead, for professionals and scientists at least...

    Regarding the 3D gaming market, though, things are not as optimistic. GT300 performance is rather irrelevant, because nvidia currently does not have a speedy answer for the discrete budget, mainstream, and lower-performance segments. Price projections aside, the GT300 will take the performance crown and act as a marketing boost for the rest of the product line. Customers in the high-performance and enthusiast markets with brand loyalty towards the greens are locked in anyway. And yes, that's still irrelevant.

    I know that this is XS and all, but remember people, the profit and bulk of the market is in a price segment nvidia does not currently even try to address. We can only hope the greens can get something more than GT200 rebrands/respins out for the lower market segments. Fast. Ideally, the new architecture should be easy to scale down. Let's hope for that, or it's definitely rough times ahead for nvidia, especially if you look closely at the 5850's performance-per-$ ratio, as well as the Juniper projections. And add in the economic crisis, shifting consumer focus, the gap between the performance software demands and the performance hardware delivers, the plateau in TFT resolutions, and heat/power consumption concerns.

    With AMD getting the whole 5xxx family out of the warehouses in under 6 months (I think that's a first for the GPU industry, though I might be wrong), the greens are in a rather tight spot at the moment. GT200 respins won't save the round, GT300 at $500++ won't save the round, and Tesla certainly won't save the round (just look at sales and profit in recent years in the HPC/GPU-computing segments).

    Let's hope for the best; it's in our interest as consumers anyway.
    Last edited by Dante80; 09-30-2009 at 11:29 PM.

  25. #150
    Registered User
    Join Date
    Dec 2008
    Posts
    91
    No hardware tessellation unit in Fermi.

    http://vr-zone.com/articles/nvidia-f....html?doc=7786

    On the gaming side of things, DirectX 11 is of course supported, though Tessellation appears to be software driven through the CUDA cores.
    also

    48 ROPs are present, and a 384-bit memory interface mated to GDDR5 RAM.
    48 is not double the GT200's ROP count (32); it depends on per-ROP performance, but maybe 3D performance will not be doubled.
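    A quick ratio check on those counts, assuming pixel fill scales linearly with ROP count at equal clocks (per-ROP throughput changes would shift this):

        # ROP scaling vs. GT200 under a linear-scaling assumption.
        GT200_ROPS, FERMI_ROPS = 32, 48
        print(f"{FERMI_ROPS / GT200_ROPS:.2f}x ROP throughput")  # 1.50x, not 2x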
    Last edited by netkas; 10-01-2009 at 12:47 AM.

