
Thread: The official GT300/Fermi Thread

  1. #676
    Xtreme Member
    Join Date
    Aug 2009
    Posts
    244
    Quote Originally Posted by Enforcer View Post
    For anyone wondering about DP ALU power/size/cost:
    http://www.notur.no/news/inthenews/f...ort_100208.pdf
    Page 176:

    DP-FMA FPU on aggressive voltage/frequency on 45nm node:
    - 0.13 mm^2
    - 2 GHz
    - 0.09 Watt

    One can estimate ~33 mm^2 and 23 W for 256 DP-FMA units in Fermi.
    http://forum.beyond3d.com/showpost.p...&postcount=539
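    As a rough sanity check of that estimate, the arithmetic is just the per-unit figures from the report scaled up; the 256 DP-FMA unit count for Fermi is the poster's assumption, not an official spec:

    [CODE]
    # Back-of-the-envelope scaling of the per-FPU figures quoted above
    # (45 nm DP-FMA unit from the report) to an assumed 256 units in Fermi.
    area_per_fpu_mm2 = 0.13   # mm^2 per DP-FMA FPU
    power_per_fpu_w  = 0.09   # W per DP-FMA FPU at 2 GHz
    units            = 256    # assumed DP-FMA count for Fermi (poster's estimate)

    total_area_mm2 = units * area_per_fpu_mm2   # ~33.3 mm^2
    total_power_w  = units * power_per_fpu_w    # ~23.0 W
    print(f"~{total_area_mm2:.1f} mm^2, ~{total_power_w:.1f} W")
    [/CODE]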

  2. #677
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    He forgot one important factor: it isn't going to reach 2 GHz speeds.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  3. #678
    Xtreme Member
    Join Date
    Dec 2008
    Location
    Athens ~ Greece
    Posts
    119
    nvidia has alienated both x86 licence holders, moved away from the chipset biz, ran into a wall with ion (intel not selling atoms without chipset + pineview coming) and put an awful lot of eggs in the GT300 basket. Everyone SHOULD WANT nvidia to succeed with Fermi. No other way around it. If it fails, we are screwed...
    Last edited by Dante80; 10-08-2009 at 09:44 PM.

  4. #679
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by To(V)bo Co(V)bo View Post
    Nvidia needs to double down and split their hand. Make a dedicated PPU and make a separate GPU. They can still support CUDA on the GPU, just not make it the one and only thing it's made for. That would free up silicon and make them competitive again. There is no point going in the direction they're going right now because it's a dead end. You can't be competitive in 3D graphics if you're not making GPUs.
    so, by adding features to their product that add value to more customers, they are going to fail?

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  5. #680
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Miltown, Wisconsin
    Posts
    353
    Quote Originally Posted by 570091D View Post
    so, by adding features to their product that add value to more customers, they are going to fail?
    Dude, do you know how to read? Like I stated, there is no problem with keeping CUDA on the GPU. They only need to add enough performance to make games that use it run well and smoothly. 3D games today don't need the amount of GPGPU power a server farm does. That way they can focus on making efficient 3D graphics cards for a competitive market. Making huge dies that cater to the scientific/workstation market and selling them to consumers to browse the internet isn't a smart business plan. If I need the power of a server farm I'll buy a GPU that can handle that, but if I want to play games, give me something that does just that and does it well for a good price! The only way to do this is to make different cards for their different purposes. Look at ATI: they have two different dies to serve different markets. How is GT300 supposed to be competitive with Juniper, a card that costs a fifth as much to make and is a quarter of the size? You don't have to be a genius to see what's wrong here, but oh yeah, you're dyslexic. So here is a way to make you understand better: Nvidia has bigger dies that cost a lot more to make. That way they can make more money selling the same big card against a tiny ATI card.
    Last edited by To(V)bo Co(V)bo; 10-10-2009 at 09:52 PM.

  6. #681
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by To(V)bo Co(V)bo View Post
    How is GT300 supposed to be competitive with Juniper
    Cost aside, Fermi will tear Juniper to shreds, both in gaming and GPGPU. It will beat the 5870 as well.

    Consider. GT200 can execute 33.3% more dependent instructions than RV770, and it is typically faster across the board.

    Fermi will be able to execute 37.5% more dependent instructions than Cypress-XT, and the efficiency at which they can be executed is greatly increased. Cypress-XT is not much, if at all, more efficient than RV770.

    Therefore it can be expected that Fermi will beat Cypress-XT by a greater factor than GT200 beat RV770.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  7. #682
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Miltown, Wisconsin
    Posts
    353
    That's fine and dandy, but you don't need a missile to swat a mosquito. GT300 only serves the highest end of the market with no room left below it; sure, they can harvest dies, but it won't be anywhere near as profitable as Juniper is. In business this is taboo and worse than blasphemy. ALL of Nvidia's AIBs will jump ship because there is no money to be made.

    You think XFX jumped ship because they were making money hand over fist selling Nvidia parts?
    Last edited by To(V)bo Co(V)bo; 10-08-2009 at 10:58 PM.

  8. #683
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by To(V)bo Co(V)bo View Post
    Nvidia needs to double down and split their hand. Make a dedicated PPU and make a separate GPU. They can still support CUDA on the GPU, just not make it the one and only thing it's made for. That would free up silicon and make them competitive again. There is no point going in the direction they're going right now because it's a dead end. You can't be competitive in 3D graphics if you're not making GPUs.
    totally agree!

    does intel try to stuff lrb into their cpus?
    of course not! but nvidia is basically stuffing their attempt at something like lrb into their gpu...

    Quote Originally Posted by mindfury View Post
    One can estimate ~33 mm^2 and 23 W for 256 DP-FMA units in Fermi.
    http://forum.beyond3d.com/showpost.p...&postcount=539
    only 23W? 0_o

    Quote Originally Posted by Dante80 View Post
    nvidia has alienated both x86 licence holders, moved away from the chipset biz, ran into a wall with ion (intel not selling atoms without chipset + pineview coming) and put an awful lot of eggs in the GT300 basket. Everyone SHOULD WANT nvidia to succeed with Fermi. No other way around it. If it fails, we are screwed...
    well even if it succeeds... then what? there's an expensive super vga that can run 2560x1600+ with 8aa or even 16aa... you need a 30" display for $1000+... how does that produce enough income for nvidia to survive? what they really need is a gpu that's anywhere from slightly slower to faster than rv870 at the same or lower price... not a super fast chip that costs a lot more... i'm really confused why they don't just cut gt300 in half and have it almost immediately after gt300's launch... it can't be that hard to scale down a big gpu...

    well and then there's still tegra, but so far arm isn't taking off, and without arm taking off... tegra will have a hard time as well... and i'm curious how good it actually is compared to the integrated power vr chips arm usually integrates...

    Quote Originally Posted by 570091D View Post
    so, by adding features to their product that add value to more customers, they are going to fail?
    if they sacrifice performance of what it's actually meant for to get those features, and the features are only gimmicks, then yes...
    would you think it's a good idea for a ferrari to somehow sacrifice horsepower to power a cocktail mixer inside the car? that's kinda what nvidia does with cuda...
    Last edited by saaya; 10-08-2009 at 11:01 PM.

  9. #684
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by 003 View Post
    Cost aside, [snip]
    If only it was that easy in the real world.

  10. #685
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by 003 View Post
    You talk as if it will be slow or something. MOST of the transistors come from more than doubling the shaders from 240 to 512, and that alone will bring the biggest performance increase in games.

    GT200 had 240 SPs and 1.4b transistors.

    Fermi has 512 SPs and 3.0b transistors.

    Guess what? The ratio of transistors to SPs on Fermi is virtually the same as on GT200. What does that mean?

    All the people saying "omg all those transistors are wasted on GPGPU" are freaking morons!!! The GPGPU features on Fermi are at no extra cost to gamers, and game performance is not sacrificed.
    It doesn't work this way. There's a lot of stuff in a GPU apart from processing units (SPs, TUs and ROPs), a lot of logic, and usually doubling the number of processing units doesn't translate into doubling the number of transistors. For example, going from RV670 (320 SPs, 666 million transistors) to RV770 (800 SPs, 956 million transistors) meant a 2.5x increase in processing units (and not only the SPs were increased by 2.5x) but only a 1.45x increase in transistors.
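    To make the numbers concrete, here is a quick sketch of the transistors-per-SP ratios implied by the rounded figures quoted in this thread (the Fermi entry uses the announced, pre-launch numbers):

    [CODE]
    # Transistors per stream processor, from the rounded figures quoted above.
    chips = {
        "RV670": (666e6, 320),   # (transistors, stream processors)
        "RV770": (956e6, 800),
        "GT200": (1.4e9, 240),
        "Fermi": (3.0e9, 512),   # announced, pre-launch figures
    }

    for name, (transistors, sps) in chips.items():
        print(f"{name}: {transistors / sps / 1e6:.2f} M transistors per SP")

    # RV670 -> RV770: 2.5x the SPs but only ~1.45x the transistors, so the
    # per-SP ratio drops sharply. GT200 -> Fermi keeps it roughly flat
    # (~5.8 M per SP), which is exactly what is being argued about here.
    [/CODE]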

    If the ratio of transistors to processing units is kept, that means lots of other stuff is being added or increased. Most of these changes, in the case of GT200 before and Fermi now, are on the GPU computing side. And a great deal of them are GPU computing features that aren't aimed at mainstream GPU computing (which could end up being useful to the mainstream consumer) but at what they call the "High Performance Computing" market (which they cover with their Tesla solutions).

    Even if you use all the insults in the world, that doesn't change things. Nothing is free in a processor; nothing gets calculated magically. There need to be transistors in there, and if NVIDIA is adding lots of HPC stuff to its chips, there is going to be a cost in transistors and in die area, regardless of people being more or less moronic.

    And when you feel tempted to call someone "a freakin moron", you should think twice and then not do it. You know, it's not as if it were a useful argument, or a useful attitude.

    EDIT and PS: I'm not trying to say that Fermi "will suck" as a 3D rendering device, or that it's going to be exactly equal to GT200. I'm not a fortune-teller, and using logic, I suppose that part of the changes will also be beneficial to 3D rendering tasks. But what is obvious is that NV is focusing a lot on the HPC market lately, and they are including lots of features aimed at that market in their chips. And those features will have a cost in transistors and die area. Even if you want to call the people who think so "freakin morons".
    Last edited by Farinorco; 10-09-2009 at 01:07 AM.

  11. #686
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    183
    Eidos programmer: "Hey, we've got a great idea for an awesome new game!"

    NV TWIMTBP rep: "Well sparky, let's not get too crazy. What's your idea?"

    Eidos programmer: "Well, we all know you like to pervert a normal game by adding PhysX and in-game AA only for your cards so we have a great new game idea to do just that again."

    NV TWIMTBP rep: "Cough, cough..well, well, let's not screw the other 50% of gamers again..we have a new idea."

    Eidos programmer: "What?"

    NV TWIMTBP rep: "Let's screw 100% of the gamers! Yes, we'll pay you $100,000 to not make that crappy game and make a scientific app instead.
    Hell, we'll even throw in the programmers for you."

    Eidos programmer: "WTF?"

    NV TWIMTBP rep: "Well now that 'gaming' is of secondary importance to us, so are you!"

    Eidos programmer: "Gulp!"

    NV TWIMTBP rep: "And if you DO decide to make that game you really want to make without the green goblin god's blessing, you better make it using double precision, ECC and only CUDA cores, period!"

    Eidos programmer: "Oh &%#$!"

    NV TWIMTBP rep: "Have a nice day!"

    NVidia - "The Way It's Bent To Be Played(we mean executed)"
    The Obamanator!!!

    GTX 480 Griddle Edition - Bit-Tech.net

  12. #687
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by highoctane View Post
    I would say they are trying to get a leg up on the competition with their heavy Tesla push. If they can actually get their gpgpu initiative to take root first with developers they can very well be a market leader until the other players catch up.

    There are definitely no guarantees of landslide success, but the potential market is there if they can get enough developers aware of and on board to utilize the technology. It may not necessarily be huge with the general user, but in, say, the medical & research fields this could be a huge chance for them to make inroads into untouched markets with potentially much higher margins.

    Nvidia is essentially getting squeezed out of mainstream markets; with chipsets pretty much dead in the water, they need to find other ways to sustain their growth or else go the way of VIA.

    Their best chance is to move quickly now while they have the resources and influence to still make big moves.

    I don't think they're forgetting their roots at all, but they are trying hard to make more noise about their gpgpu initiatives to bring about awareness and spark curiosity & imagination among potential developers & customers, which seems to be working on the awareness side at least.

    We'll have to wait and see, which is the hard part in the I-want-it-yesterday technology world.


    You're right!

    But without a strong showing from the GT300, nVidia might just lose a good portion of the video card industry! Somewhat crushing its growth, while they migrate back to their origins.... GAMES!

  13. #688
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by tajoh111 View Post
    So you want NV to just die a slow death? Well of course you do, xoulz, you hate Nvidia.
    ^^


    That is why I am critiquing their business decision to leave their backside so open and just let ATi not just walk through, but stomp through your house and down your driveway..!


  14. #689
    Xtreme Member
    Join Date
    Dec 2008
    Location
    Athens ~ Greece
    Posts
    119
    Quote Originally Posted by saaya View Post
    well even if it succeeds... then what? there's an expensive super vga that can run 2560x1600+ with 8aa or even 16aa... you need a 30" display for $1000+... how does that produce enough income for nvidia to survive? what they really need is a gpu that's anywhere from slightly slower to faster than rv870 at the same or lower price... not a super fast chip that costs a lot more... i'm really confused why they don't just cut gt300 in half and have it almost immediately after gt300's launch... it can't be that hard to scale down a big gpu...

    well and then there's still tegra, but so far arm isn't taking off, and without arm taking off... tegra will have a hard time as well... and i'm curious how good it actually is compared to the integrated power vr chips arm usually integrates...
    If Fermi succeeds (we are talking about performance/$ here, scalability and margins) then nvidia will have the tools to offer derivatives based on the arch for lower market segments. Fast. I think that they canceled a GT200 respin due to the fact that it would not be very competitive (as far as margins are concerned) so this launch will lack full range market coverage for their new line.

    If Fermi doesn't succeed in the 3D gaming market, nvidia will be in a lot of trouble due to the fact that the HPC market is minuscule atm (and is expected to stay that way for the immediate future).

    Quote Originally Posted by saaya View Post
    if they sacrifice performance of what its actually meant for to get those features, and the features are only gimmicks, then yes...
    would you think its a good idea for a ferrari to somehow sacrifice horse power to power a cokctail mixer inside the car? thats kinda what nvidia does with cuda...
    Look, I'm the first who said I can't understand why nvidia did not put a tessellation HW unit, or a double rasteriser, in there, especially since the transistor penalty would be low. From what I understand, we may see R&D on 2 dies in the future, one for the gaming segment and one for HPC. The magic word here, though, is arch SCALABILITY. Going for the performance crown is the same as shipbuilders going for the Blue Riband, profitability be damned. It will give you the marketing juice needed to boost the rest of the line, but with no line to boost and the market shifting to other price segments it's a moot point. Nvidia had better understand this, and fast...

  15. #690
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by Dante80 View Post
    nvidia has alienated both x86 licence holders, moved away from the chipset biz, ran into a wall with ion (intel not selling atoms without chipset + pineview coming) and put an awful lot of eggs in the GT300 basket. Everyone SHOULD WANT nvidia to succeed with Fermi. No other way around it. If it fails, we are screwed...
    we won't be screwed, nvidia won't go anywhere tbh; their 9400 IGPs are flying off the shelves in laptops etc... I'm sure they are making a killing off of those alone for now
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  16. #691
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Yep, with no HW tessellation I am not sure how the card can be FULLY DirectX 11 compatible. Simulated tessellation can't be as fast as HW tessellation. DX11 tessellation is very interesting; hell, it has been interesting since the time ATI introduced it.
    Coming Soon

  17. #692
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    I also agree with the users worrying about no HW tessellation. Tessellation is one of the big features, maybe the biggest feature, of DX11: smoother surfaces, better models. I do not understand why they didn't implement DX11 at its fullest.

    It may bite them or it may not; we will have to wait for AvP to see what this missing feature means.

    It reminds me of FX 5800 vs Radeon 9700 Pro all over again, when the FX series didn't properly support DX9. That showed later when games like Oblivion came out requiring SM 2.0, and a lot of effects that could be enjoyed on the 9800/9700/9600/9500 series couldn't be enjoyed on the FX series without a serious performance hit.
    Last edited by Florinmocanu; 10-09-2009 at 03:39 AM.

  18. #693
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    But who said they are not implementing DX11 or tessellation at its fullest?

    DX11, as an intermediate layer between hardware and applications, is an interface that offers certain functionality (with a given interface) on the application side, and that asks the hardware to implement certain functionality (with a given interface) to be compliant with the API.

    What the hardware does internally to execute (and resolve) that functionality is its own business, as long as it takes the right inputs and gives the right outputs.

    NV's decision to implement tessellation entirely as operations on the CUDA processors, instead of having dedicated hardware (a dedicated set of transistors) to solve it, is a technical decision. We will see what it means in the real world when we have numbers.

    Usually, not having dedicated, fixed-function hardware for a task means that you have to use non-specialized hardware to do it, so you usually spend more resources on it. But it has its benefits. Dedicated fixed-function hardware takes resources too (it's made of transistors that could instead be used for more general computing resources), and it raises the complexity and cost of development.

    We will see what this implementation choice means in the end when we have data... probably it will take a bigger performance hit when doing tessellation than the Radeon cards, especially in shader-intensive games, but maybe it's not that noticeable.

  19. #694
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    You're right. But CUDA means another software layer between DirectX and execution, which I don't think is beneficial.
    Of course, everything is moving toward a less specialized way of executing commands, but if that were done through a single software intermediary, like DirectX, for all GPUs, that would be OK. When you add another one, CUDA, then that doesn't make that much sense.

  20. #695
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    I don't think it works that way. I'm not an expert, so excuse me if I'm wrong, but what I think happens is that the drivers of the hardware device (the video card in this case) expose an interface that includes the functionality required by the APIs they are compliant with. Then the job of the drivers is to convert those calls into the hardware instruction set so that the hardware does its work. The difference between having dedicated fixed-function hardware and doing it on general computing processors such as the CUDA processors is probably that in the former case you would have specific hardware-level instructions to drive the dedicated fixed-function hardware and do it there, while in the latter case you would have to use the generic instructions of the generic processors to solve it. I don't think there's going to be any additional layer anywhere.

  21. #696
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    the layering usually is
    Application
    DirectX
    Drivers
    video card

    If you do tessellation through CUDA, you have

    Application
    Directx
    Drivers
    Cuda
    video card

    With hardware tessellation, the hardware already knows how to do the job; you don't need another piece of software (in this case CUDA) to tell the shaders how to do it, the hardware just receives the data and does the math.
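    To illustrate what doing tessellation as ordinary shader math amounts to, here is a toy sketch (my own illustration, not NVIDIA's actual implementation): uniform subdivision that splits every triangle into four by inserting edge midpoints. Real DX11 tessellation (hull/tessellator/domain stages) is far more involved, but the point stands: without a fixed-function tessellator, this kind of arithmetic has to run on the general-purpose units.

    [CODE]
    # Toy uniform tessellation: split each triangle into 4 via edge midpoints.
    # Purely illustrative of tessellation done as plain arithmetic.

    def midpoint(a, b):
        return tuple((x + y) / 2.0 for x, y in zip(a, b))

    def tessellate(triangles, levels):
        """Subdivide each (v0, v1, v2) triangle 'levels' times."""
        for _ in range(levels):
            out = []
            for v0, v1, v2 in triangles:
                m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
                out += [(v0, m01, m20), (m01, v1, m12),
                        (m20, m12, v2), (m01, m12, m20)]
            triangles = out
        return triangles

    if __name__ == "__main__":
        tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
        for lvl in range(4):
            print(f"level {lvl}: {len(tessellate(tri, lvl))} triangles")
        # Each extra level quadruples the triangle count -- work that either a
        # dedicated tessellator or the shader/CUDA cores has to absorb.
    [/CODE]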

  22. #697
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    @Farinorco: the implementation of PhysX over CUDA is not as efficient and effective as the real thing, the "PPU". An Nvidia GPU loses more than a few shader units when it simulates the PPU.


    EDIT: Why is Nvidia not cooperating with VIA? I am very confused as to why nvidia does not support VIA in designing chips as well as financially. VIA could then make a bigger chip and multi-core it to rival low-end Semprons/Celerons.

    I had a VIA SoC based on a C7 and it was very good; a bit slow, but quite good. A similar SoC with a larger core and multiple cores could rival low-end Intel/AMD parts such as Semprons/Celerons.
    Last edited by ajaidev; 10-09-2009 at 04:56 AM.
    Coming Soon

  23. #698
    Xtreme Mentor
    Join Date
    Jul 2004
    Posts
    3,247
    Has this been posted?
    Nvidia GT300 whitepaper disclosed
    http://en.inpai.com.cn/doc/enshowcont.asp?id=7137

  24. #699
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Florinmocanu View Post
    the layering usually is
    Application
    DirectX
    Drivers
    video card

    If you do tessellation through CUDA, you have

    Application
    Directx
    Drivers
    Cuda
    video card

    With hardware tessellation, the hardware already knows how to do the job; you don't need another piece of software (in this case CUDA) to tell the shaders how to do it, the hardware just receives the data and does the math.
    I thought they would simply decompose the DirectX tessellation calls into CUDA processor (formerly shader processor) instructions, instead of into some specific tessellator instructions, in the drivers, since I've always supposed that the instruction set of each architecture wouldn't directly fit the external interface that the drivers expose. But I wouldn't be surprised if I'm wrong, since I'm more on the software side of things than on the hardware side.

    Quote Originally Posted by ajaidev View Post
    @Farinorco: the implementation of PhysX over CUDA is not as efficient and effective as the real thing, the "PPU". An Nvidia GPU loses more than a few shader units when it simulates the PPU.
    Yeah, that part I knew about, and I have already mentioned it. What I don't know is how many CUDA processors they are going to have to use to do the tessellation, and therefore how much of an impact it will have on performance in real-world apps; that's why I said we will see when we have some real-world data.

    Of course it might turn into a huge performance impact. Then... ouch!

    Quote Originally Posted by onethreehill View Post
    Has this been posted?
    Nvidia GT300 whitepaper disclosed
    http://en.inpai.com.cn/doc/enshowcont.asp?id=7137
    Yep, the whitepaper is accessible through the Fermi website, along with some analyst white papers that NVIDIA paid to have written about Fermi, and I think it was in this forum that I got the link to the whitepaper.

    Here is the link to the Fermi site, and here the whitepaper itself...
    Last edited by Farinorco; 10-09-2009 at 05:26 AM.

  25. #700
    Xtreme Member
    Join Date
    Oct 2009
    Location
    Bucharest, Romania
    Posts
    381
    Quote Originally Posted by Farinorco View Post
    I thought they would simply decompose the DirectX tessellation calls into CUDA processor (formerly shader processor) instructions, instead of into some specific tessellator instructions, in the drivers, since I've always supposed that the instruction set of each architecture wouldn't directly fit the external interface that the drivers expose. But I wouldn't be surprised if I'm wrong, since I'm more on the software side of things than on the hardware side.
    Well yeah, but how do they decompose DirectX tessellation calls? Through CUDA, the additional software layer I was talking about...

    Whereas if you have hardware tessellation, you just send those calls directly to the tessellators; no need for intermediaries.

    At least that's how I assume it will be done; we will wait for the reviews in Dec/Jan for more info on this issue.
