
Thread: The official GT300/Fermi Thread


  1. #1
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    http://www.dailytech.com/article.aspx?newsid=16401

    ORNL to Use NVIDIA Fermi to Build Next Gen Super Computer

    NVIDIA announced its new Fermi architecture at its GPU Technology Conference recently. The new architecture was designed from the ground up to enable a new level of supercomputing using GPUs rather than CPUs. At the conference, Oak Ridge National Laboratory (ORNL) associate lab director for Computing and Computational Sciences, Jeff Nichols, announced that ORNL would be building a next generation supercomputer using the Fermi architecture.

    The new supercomputer is expected to be ten times faster than today's fastest supercomputer. Nichols said that Fermi would enable substantial scientific breakthroughs that would have been impossible without the technology.

    Looks like NV has customers already for Fermi.

  2. #2
    Xtreme Addict
    Join Date
    Apr 2006
    Posts
    2,462
    Quote Originally Posted by tajoh111 View Post
    Looks like NV has customers already for Fermi.
    That's probably because GPGPU on ATI pretty much sucks. Well, I think I'm exaggerating here, but NVIDIA clearly has a lead there.
    Notice any grammar or spelling mistakes? Feel free to correct me! Thanks

  3. #3
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by FischOderAal View Post
    That's probably because GPGPU on ATI pretty much sucks. Well, I think I'm exaggerating here, but NVIDIA clearly has a lead there.
    At least when it comes to supporting technologies (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find that compares GPGPU performance under equal conditions (meaning both cards run the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA DirectCompute demo (run on both cards by AnandTech).

    To be exact, this one:



    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.
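
    To make concrete what "running the same code" means here, below is a minimal OpenCL sketch in C++ (a hypothetical example of mine, not the AnandTech demo) in which one identical kernel source is built and run on whichever GPU the driver exposes, so an HD5870 and a GTX285 would be fed exactly the same work:

    // same_kernel.cpp -- hypothetical sketch: one kernel source, any OpenCL GPU.
    // Build against either vendor's OpenCL SDK, e.g.: g++ same_kernel.cpp -lOpenCL
    // Error checks omitted for brevity.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    // The exact same kernel text is handed to the ATI or NVIDIA driver at run time.
    static const char* kSrc =
        "__kernel void scale(__global float* v, float k) {"
        "    int i = get_global_id(0);"
        "    v[i] = v[i] * k;"
        "}";

    int main() {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        // Whichever GPU was found compiles the identical source with its own driver.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel kern = clCreateKernel(prog, "scale", NULL);

        std::vector<float> data(1024, 1.0f);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    data.size() * sizeof(float), data.data(), NULL);
        float k = 2.0f;
        clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(kern, 1, sizeof(float), &k);

        size_t global = data.size();
        clEnqueueNDRangeKernel(q, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, data.size() * sizeof(float),
                            data.data(), 0, NULL, NULL);
        printf("data[0] = %f\n", data[0]); // expect 2.0 on either vendor's card
        return 0;
    }

    The point is simply that both vendors' drivers compile and run the same source, which is what makes a DirectCompute or OpenCL comparison apples to apples, unlike a Stream-vs-CUDA one.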
    Last edited by Farinorco; 10-03-2009 at 03:06 AM.

  4. #4
    Xtreme Enthusiast
    Join Date
    Jul 2004
    Posts
    535
    Quote Originally Posted by Farinorco View Post
    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.
    I would say half a generation instead of a full generation, seeing as the RV870 looks to be a better GPGPU than the GT200, while the GT300 looks to be a better GPGPU than the RV870.

  5. #5
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Miltown, Wisconsin
    Posts
    353
    Quote Originally Posted by Farinorco View Post
    At least when it comes to supporting technologies (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find that compares GPGPU performance under equal conditions (meaning both cards run the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA DirectCompute demo (run on both cards by AnandTech).

    To be exact, this one:



    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.


    Here's a crunching comparison on Guru3D too.

    http://www.guru3d.com/article/radeon...review-test/25

  6. #6
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Oh man, I hate it when things turn into a discussion of "why is ATi right and NVIDIA wrong? You're ATi... you're NVIDIA... you only say that because of hatred... you only say that because of love"... it makes it so difficult to keep the thread on track...

    I'd like to imagine for a moment that those names belong to a pair of companies that make products for us to buy, just like Philips, LG, Woxter, or Sony do... not to some charismatic heroes of an epic world, or something.

    Quote Originally Posted by To(V)bo Co(V)bo View Post
    Here's a crunching comparison on Guru3D too.

    http://www.guru3d.com/article/radeon...review-test/25
    But that comparison is not really a useful one, because they are not comparing the same application/code on different hardware, but different code on different hardware (note that the ATI cards are running ATI Stream code while the NVIDIA cards are running CUDA code, so the code they are executing is different). That's like comparing Windows loading times on two SSDs (A and B) by running Windows XP on SSD A and Windows Vista on SSD B.

    Quote Originally Posted by 003 View Post
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    They are testing the HD5870, which is the reviewed card, and using the most powerful NVIDIA solution to compare against. What exactly would be the point of including the RV770? As for the GT300 decimating the RV870 in double precision, we will see; that's too much fortune-telling for me. On the other hand, I'm not all that interested in applications that benefit from double precision, since domestic applications, and the kind of professional applications that might be useful to me, shouldn't rely much on double-precision floats.
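
    As a side note on why the scientific crowd cares so much about double precision in the first place, here is a tiny generic C++ illustration (my own sketch, not tied to either GPU) of the accumulation error that long single-precision sums pick up, which is exactly what many HPC codes cannot tolerate and what most desktop applications never notice:

    // precision_demo.cpp -- why scientific codes care about double precision.
    #include <cstdio>

    int main() {
        const int N = 10000000;      // ten million small terms
        float  sf = 0.0f;
        double sd = 0.0;
        for (int i = 0; i < N; ++i) {
            sf += 0.1f;              // single precision: rounding error accumulates
            sd += 0.1;               // double precision: error stays negligible
        }
        // The exact answer is 1,000,000; the float sum drifts visibly away from it.
        printf("float:  %.1f\n", sf);
        printf("double: %.1f\n", sd);
        return 0;
    }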

  7. #7
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Farinorco View Post
    At least when it comes to supporting technologies (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.

    In the only example I have been able to find that compares GPGPU performance under equal conditions (meaning both cards run the same code, so it has to be a DirectCompute or OpenCL piece), the HD5870 is pulverizing the GTX285.

    And the funniest part is that said example is an NVIDIA DirectCompute demo (run on both cards by AnandTech).

    To be exact, this one:



    But yes, I'm of the opinion that NVIDIA is one generation ahead when it comes to GPGPU technologies. As for GPGPU performance, we will see when we have something other than this little demo to compare. I think it's too specific a program to draw any conclusions from.
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  8. #8
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by 003 View Post
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    They didn't use the RV770 because it doesn't support DirectCompute 5.0, which is what compares directly with CUDA here, so it would be kind of unfair since the RV770 doesn't have the same feature set.
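
    For what it's worth, this feature-set split is something a program can query directly. The following is a hypothetical Direct3D 11 sketch in C++ (the names come from the public D3D11 API, the scenario is mine) that reports whether the installed GPU exposes Compute Shader 5.0, as the HD5870 does, or at best the optional CS 4.x downlevel path that Direct3D 10.x-class parts like the RV770 would have to rely on:

    // cs_level_check.cpp -- hypothetical sketch: what DirectCompute level does this GPU expose?
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        ID3D11Device* dev = NULL;
        ID3D11DeviceContext* ctx = NULL;
        D3D_FEATURE_LEVEL got;

        HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                       NULL, 0, D3D11_SDK_VERSION, &dev, &got, &ctx);
        if (FAILED(hr)) { printf("No Direct3D 11 device available.\n"); return 1; }

        if (got >= D3D_FEATURE_LEVEL_11_0) {
            // Feature level 11_0 hardware (HD5870 class) gets Compute Shader 5.0.
            printf("Compute Shader 5.0 available (full DirectCompute).\n");
        } else {
            // Downlevel (10.x) hardware may still expose CS 4.x as an optional feature.
            D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
            dev->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                     &opts, sizeof(opts));
            printf("CS 4.x support: %s\n",
                   opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x
                       ? "yes" : "no");
        }
        ctx->Release();
        dev->Release();
        return 0;
    }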

  9. #9
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by 003 View Post
    1. I love how they leave out RV770, the real competition to GT200, and

    2. That is single precision. GT300 will decimate RV870 in double precision.
    I think you are a tad lost...!

    The 5870 is out! It costs $379, which is $10 different from the GTX285... We will not see the GT300 for 4 months!

    I hope the NVIDIA GT300 will do something better, especially in double precision, because that's the field NVIDIA is moving its business model toward: scientific computing... not games!

  10. #10
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Xoulz View Post
    We will not see the GT300 for 4 months!
    Try a month to a month and a half.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  11. #11
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    215
    Quote Originally Posted by 003 View Post
    Try a month to a month and a half.
    JHH wasn't even holding a real GF100 at the Tesla launch. How in the name of God will that timeframe work?

  12. #12
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by 003 View Post
    Try a month to a month and a half.
    Dude, are you paying attention to what's going on around you? The GT300 will not be out for the Xmas holiday shopping season.

    NVIDIA concedes this; just read Anand's review of the Tesla architecture. ATI will have 3 months of unfettered sales. When the GT300 is released, ATI will simply drop the prices on their current line-up and release (or announce) their X2 line, etc., competing dollar for dollar.



    Do you really think NVIDIA will be able to compete with Hemlock (5870 X2) at $599 come this January or February, when NVIDIA is due to release their GT300?

    Or with a 5870 at $299 against the GT300 for $499? If NVIDIA doesn't have a $199 DX11 part soon, they will lose massive market share and lose billions in stock value. It doesn't matter how good the GT300 actually is if only 40k uber-hardcore people ever buy it. The 5870, for the foreseeable future, is good enough for almost anyone's needs.

    Millions of people will be buying or upgrading their stuff for the Holiday Season. Nvidia will miss out on all those sales.


    Timing is everything!

  13. #13
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Farinorco View Post
    At least when it comes to supporting technologies (the new C/C++ compiled code support, for example). Regarding raw computing power, I'm not so sure.
    They win in programmability, which is half the battle. Usually the speed of a GPGPU app is compared against a CPU. Here is an interesting article about implementing molecular dynamics on GPUs: http://www3.interscience.wiley.com/c...77402/PDFSTART It ends up being the exact opposite of the DirectCompute app with regard to speed.
