Page 72 of 82 — Results 1,776 to 1,800 of 2036

Thread: The GT300/Fermi Thread

  1. #1776
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by Piotrsama View Post
    While we are at it.... I'll take the chance to ask Neliz if he has some information on how's ATI going to counter the fermi release...
    Will it "only" be a 5870 with 2GB? Maybe with improved clocks?
    Thanks.
    A 5870 and a 5970 refresh with higher clocks and more VRAM is all that AMD needs to do to counter Fermi. Unless dual Fermi is 50%+ faster than the 5970, which I doubt, AMD just needs to compete on price to keep their sales up. By the time Fermi is selling in volume, the 5K series will have great yields and better efficiency; if they decide to compete hard on price/performance as they did with the 4K series, it's good enough until the 6K series is out.

    Besides that, a dual Fermi that is 50% faster than a 5970 refresh would most likely also cost 100% more.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  2. #1777
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by neliz View Post
    It's called dampening enthusiasm..

    And when I did leak benchmark scores (the P number was correct, the X wasn't) I didn't hear you cheer either.. what's up with that?
    Where is this "let's see what you got, and talk based on that"?

    I'm not exactly anonymous either, it would take you less than 3 minutes to find my real name, my address, phone number etc. (pics of the family, me and my beer belly!)
    Nobody asked for pics of the family, you, or your beer belly. I don't care who you are; you need to provide proper documentation, including benchmarks, to act this sure about the performance.


    I'm just warning people here that think that GF100 will be [strike]God's[/strike]nVidia's gift to man. If it was, it would've been last year.. not anymore. We're talking about an expensive, marginal (10-30%) upgrade over HD5870.
    When did I say this is going to be cheap, or a gift?
    I've already said (several times in this very thread, I can link if you doubt it) that this is going to be expensive without a competing refresh (or a new GPU) from ATi. I've never said anything for sure about the performance either; I have always said "it is too early to be sure about the performance" (several times in this very thread, I can link if you doubt it).
    Are you trying to be funny, or trying to misinform and mislead? I dare you to "QUOTE" me on these childish claims you are making about me.

    That's no speculation.

    edit: Dagnabbit... anyone know how the strikethrough tags work here?
    edit part Deux: availability will be the end of March, at the earliest.
    You may predict the release time, price, or whatever you want; that's OK. But you can't ask others to shut up just because you know so much about the performance of an unreleased product, based on a pic of your beer belly.
    I'm still waiting to see benchmarks and documentation.

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  3. #1778
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    hah, his best buddy is msi? not evga?

    Quote Originally Posted by JohnZS View Post
    Packing in 3 billion transistors, double the CUDA cores of previous generation GPUs¹, a high speed GDDR5 memory interface, and full DirectX 11 support, GF100 is designed for groundbreaking graphics performance. With a revolutionary new scalable geometry pipeline and enhanced anti-aliasing capabilities, GF100 delivers both unrivalled performance and breathtaking image quality.
    what did they mention in the details for *1?
    double the cuda cores? GT200 = 240, sounds like the 380 will be 480 cores then?

    Quote Originally Posted by neliz View Post
    A shame that they forgot to mention the cut-down DP performance
    i thought DP doesnt matter for desktop anyways, so... who cares...

  4. #1779
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by Sam_oslo View Post
    ...

    I'm still waiting to see benchmarks and documentation.
    And you're doing a fine job of insulting one of the people who can "leak" this info.

  5. #1780
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Sam_oslo View Post
    I'm still waiting to see benchmarks and documentation.
    so you or somebody else can report him to nvidia and get him sued?
    as if anything he posted would change the situation... dozens of people will still not believe it and say its fake or altered results etc...

  6. #1781
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by Teemax View Post
    And you're doing a fine job of insulting one of those who can "leak" these info.
    Thx, but I'm not insulting anyone (could you please QUOTE me doing so?).

    I'm asking for benchmarks that can back up his bold claims about the performance. Everybody should be happy to see his documentation, I guess.


  7. #1782
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    184
    Quote Originally Posted by saaya View Post
    i thought DP doesnt matter for desktop anyways, so... who cares...
    There is at least one such group of "home" users running DP, and the application builder generally enjoys the splendid fruits of that "desktop" labor.

    Saying no one uses double precision at home is simply a lie.

  8. #1783
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by saaya View Post
    i thought DP doesnt matter for desktop anyways, so... who cares...
    I run milkyway@home in double precision, hello, I am nobody...

    Oh, and I forgot to add: it scales very well with 5850s, and my friend who owns 4870s gets around 135 GFLOPS DP in it... Hmm, now what was GF100's DP rating again?
    Coming Soon

  9. #1784
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by neliz View Post
    One such group of "home" users running DP, where the application builder generally enjoys the splendid fruits of the "desktop" labor.
    Saying no one uses double-precision at home is a simple lie.
    well, not really a very widespread app...
    its kinda ironic though if you think how hard nvidia pushed gpgpu, cuda and PhysX and then cripples it artificially on their latest and greatest card

    i dont think most people care about it though... even after all the cuda propaganda barely anybody cares about it at all... and even those that do care are unlikely to have it notably influence their buying decision...

    and for you guys and the other few who care... just get an ati card...

    Quote Originally Posted by ajaidev View Post
    I run milkyway@home double precision hello i am nobody...

    Ohh ya forgot to add very good scaling with 5850' and my friend who own 4870's gets around 135gflops DP in it... Ammm now how much was GF100 DP rating
    supposedly 600 something...
    so if they cut it in 4, then itll be 150... about the same as a 5970 i guess...
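    The back-of-envelope math here (a ~600 GFLOPS part cut to a quarter) can be sketched in a few lines; every number below is the thread's own speculation, not a confirmed spec:

    ```python
    # Rough DP throughput estimate using the speculative figures from this
    # thread (NOT confirmed specs): a full GF100 at ~600 GFLOPS DP, with the
    # GeForce variant rumored to be capped at 1/4 of that rate.

    def dp_gflops(full_rate_gflops, cap_fraction=1.0):
        """Effective double-precision throughput after an artificial cap."""
        return full_rate_gflops * cap_fraction

    full_gf100 = 600.0                       # speculated Tesla-class DP rate
    geforce = dp_gflops(full_gf100, 1 / 4)   # rumored 1/4 cap on GeForce parts
    hd4870 = 135.0                           # Milkyway@home figure quoted above

    print(f"capped GeForce DP: {geforce:.0f} GFLOPS")   # 150 GFLOPS
    print(f"vs HD 4870:        {hd4870:.0f} GFLOPS")
    ```

    Which is how the thread arrives at "about the same as a single 4870, or roughly a 5970's worth if uncapped" — the conclusion only holds if the 600 GFLOPS and 1/4 figures are right.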

  10. #1785
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by saaya View Post
    well, not really a very widespread app...
    its kinda ironic though if you think how hard nvidia pushed gpgpu, cuda and PhysX and then cripples it artificially on their latest and greatest card

    i dont think most people care about it though... even after all the cuda propaganda barely anybody cares about it at all... and even those that do care are unlikely to have it notably influence their buying decision...

    and for you guys and the other few who care... just get an ati card...

    supposedly 600 something...
    so if they cut it in 4, then itll be 150... about the same as a 5970 i guess...
    My friend's 4870 scores around 135-140 GFLOPS in milkyway@home, yes, and that's double precision... So a capped GF100 would score close to a 4870 in DP performance
    Coming Soon

  11. #1786
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    It goes like this:

    Step 1: Application is important enough to drive sales.
    Step 2: Companies care.

    Right now the amount of money they would lose on Tesla sales if they allow Geforces to have full DP capability is far more than the amount of money they would make from people who care about Milkyway@home.

  12. #1787
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by trinibwoy View Post
    It goes like this:

    Step 1: Application is important enough to drive sales.
    Step 2: Companies care.

    Right now the amount of money they would lose on Tesla sales if they allow Geforces to have full DP capability is far more than the amount of money they would make from people who care about Milkyway@home.
    Yes, but DP has a lot of advantages over SP for calculations in projects like medical research, math and physics. It's sad that nvidia went this way; we bought a 4*5850 setup specially to contribute something.

    This move by Nvidia has created a void in DP performance, due to which developers will prefer to release SP apps rather than more robust and accurate DP apps. The number of people who contribute to the cause is ever growing, and I am sure the united processing power of these people could trump the combined power of every Tesla ever released.

    In short: if Nvidia included full DP performance, it would be better for humanity...
    Coming Soon

  13. #1788
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    wait, what's going on now? only the Tesla Fermis will have good DP performance and the GeForce Fermis won't?
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  14. #1789
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  15. #1790
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by ajaidev View Post
    In small words: If Nvidia included full DP performance it would be better for humanity...
    Heh, yes that would matter if Nvidia was an arm of the Red Cross.

  16. #1791
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    i guess dp performance is why teslas are so crazy expensive.

  17. #1792
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    184
    Quote Originally Posted by grimREEFER View Post
    i guess dp performance is why teslas are so crazy expensive.
    no? It's not like it's a completely different chip.

  18. #1793
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by grimREEFER View Post
    i guess dp performance is why teslas are so crazy expensive.
    No, they're crippling DP performance on Geforces to avoid cannibalizing Tesla sales. They are based on the same chip.

  19. #1794
    Xtreme Addict
    Join Date
    Aug 2005
    Location
    Germany
    Posts
    2,247
    Quote Originally Posted by neliz View Post
    And I thought G70/71 would be the first 512-bit GPU (turned out it was 2x256) boy.. did NV fool me there

    edit: on the Jen-Hsun Huang thing.. ugh...m.. mmh.. I am
    http://www.facebook.com/profile.php?...00000590746752
    "Jen-Hsun just whipped up some delicious Spitfire Roasted Chicken in Cafe World! · Play Cafe World"



    maybe i can get him to join my mafia in mafia wars!
    1. Asus P5Q-E / Intel Core 2 Quad Q9550 @~3612 MHz (8,5x425) / 2x2GB OCZ Platinum XTC (PC2-8000U, CL5) / EVGA GeForce GTX 570 / Crucial M4 128GB, WD Caviar Blue 640GB, WD Caviar SE16 320GB, WD Caviar SE 160GB / be quiet! Dark Power Pro P7 550W / Thermaltake Tsunami VA3000BWA / LG L227WT / Teufel Concept E Magnum 5.1 // SysProfile


    2. Asus A8N-SLI / AMD Athlon 64 4000+ @~2640 MHz (12x220) / 1024 MB Corsair CMX TwinX 3200C2, 2.5-3-3-6 1T / Club3D GeForce 7800GT @463/1120 MHz / Crucial M4 64GB, Hitachi Deskstar 40GB / be quiet! Blackline P5 470W

  20. #1795
    Registered User
    Join Date
    Sep 2009
    Posts
    69
    Quote Originally Posted by ajaidev View Post
    Yes but DP has a lot of advantages over SP for calculations in projects like medical research, math and physics. Its sad that nvidia went this way, we bought 4*5850 setup specially to contribute something.

    This move by Nvidia has created a void for DP performance due to which developers will prefer to release SP app's rather than more robust and better DP app's The number of people who contribute for the cause is ever growing and i am sure that the united processing power of these people can trump all the combined power of every tesla ever released.

    In small words: If Nvidia included full DP performance it would be better for humanity...
    wow, that's cartoonishly evil from nvidia

  21. #1796
    Registered User
    Join Date
    Jul 2009
    Posts
    34
    Quote Originally Posted by Teemax View Post
    What exactly is your point with that chart?
    2560x1600 has twice the details as 1080p, and the article supported the benefits of higher resolution which was my point.

    Are you lost?
    You asked for the source in the previous post. It's not exactly quoting ISF themselves, but you can take it as is, and if you don't believe it, ask any of the ISF guys over at AVS.

    My response was to Nedjo's post, where he claimed that on a PC he played Crysis with an X360 gamepad, which allowed him not to care so much about the framerate since it's not as responsive as a mouse-and-keyboard combination. Then I "suggested" buying a PDP to perfectly complement that experience. It was a slightly OT remark while we were at it.

    It has twice the pixels, yes, but as I've already pointed out, you can't just go with the higher pixel count and assume you're getting the best possible IQ for gaming compared to other options. And of course, not everyone has the same needs and priorities when choosing a monitor.
    Inform yourself about motion resolution: a 30" 60Hz LCD monitor simply can't resolve even half of its pixel real estate in motion, which is what matters most (obviously, a static picture looks great, no doubt about that). And that's only one thing; we haven't even talked about contrast, dynamic range, black levels, color accuracy, a bunch of stuff. Dynamic range is probably the most important aspect of image quality assessment when it comes to gaming. It seems to me you're judging the quality of a monitor by pixel pitch alone, which is ridiculous and has no support in reality.

    As I've already said, feel free to think/buy whatever your heart desires.
    Last edited by m.fox; 01-22-2010 at 11:11 AM.

  22. #1797
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    you guys really think cutting dp perf on retail cards is a big deal?
    i havent seen a single really useful cuda or opencl or direct compute app... i mean something everybody or most people will actually benefit from...
    and dp perf doesnt matter at all in games... so really, both nvidia and ati could disable dp on their retail cards altogether... i couldnt care less

    neliz, so the FC2 numbers were a best-case scenario? cause in those, fermi was close to a 5970 and notably faster than a 5870.
    if the average boost is just 10-30% for the fastest fermi part, then a 5870 @ 1GHz with 2GB mem will easily sell for $400+ vs a slightly faster fermi with 1.5GB... so ati can maintain their price point, they just have to up the clocks and increase the memory, and neither should hurt their margins much, if at all...

    bad news for us consumers :/
    price/perf wont improve much if at all in this year...

  23. #1798
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by saaya View Post
    you guys really think cutting dp perf on retail cards is a big deal?
    People were stating how awesome it was going to be to buy a GF100 and have that massive DP power.
    People assumed Nvidia would leave full DP on the Geforce parts.

    Quote Originally Posted by saaya View Post
    fully functional? thats 250$ a piece... thats just about good enough to not lose money on a 500$ card isnt it?
    I dunno, but I would assume that would be total yield... not for a specific bin.
    Remember, that would just be for the silicon, not including other board costs, which Charlie calculates to be ~$100 (not re-reading his article), and not including the cuts for AIBs and distros.

    Think about that for a sec: they can't even price the salvage part against the 5870 without losing money, if those numbers are right.
    $499 and $599 would be the minimum MSRPs in March. If they "launch" a 512CC part, I'm thinking it would be $649-$699 MSRP, at a minimum.
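    Stacking up the estimates quoted in this post (die ~$250, other board costs ~$100 per Charlie, plus the channel's cut) gives a rough break-even figure. The `min_viable_msrp` helper and the 25% channel margin are hypothetical illustrations for the sake of the arithmetic, not known numbers:

    ```python
    # Back-of-envelope board economics using the estimates quoted in this post
    # (die ~$250, other board costs ~$100 per Charlie's article, plus a cut
    # for AIBs/distros). All inputs are speculation, not confirmed figures.

    def min_viable_msrp(die_cost, board_cost, channel_margin=0.25):
        """Smallest MSRP that still covers costs after the channel's cut."""
        build_cost = die_cost + board_cost
        return build_cost / (1 - channel_margin)

    msrp = min_viable_msrp(die_cost=250, board_cost=100)
    print(f"break-even MSRP: ${msrp:.0f}")  # ~$467, before any profit
    ```

    On those assumptions a ~$400 salvage part priced against the 5870 would indeed lose money, which is the post's point.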
    Last edited by LordEC911; 01-22-2010 at 01:20 PM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  24. #1799
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by saaya View Post
    you guys really think cutting dp perf on retail cards is a big deal?

    It is certainly bad news for people who like to fold on GPUs. I recall a developer talking about milkyway@home and how running it in DP is better and more logical than running it in SP.

    If DP were used in all folding apps, the efficiency and accuracy would only increase. I doubt many people will buy Teslas just for folding, and I know many people who use the consumer part to fold; those people are at a loss here.

    To start with, GF100's SP score is not that great anyway, and its DP seemed very future-proof and certainly inviting. For games it won't matter, but for folding it certainly will, and for a company that talks so much about "GPU-CPU is the future", this is a bit of a downer.

    The future is DP, and beyond that quad-precision floating point; there is no way around that fact for CPU or GPU-CPU processing.
    Coming Soon

  25. #1800
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    I come to this thread and all I can read is

    *whine whine whine*
    Are we there yet?
