Page 57 of 82
Results 1,401 to 1,425 of 2036

Thread: The GT300/Fermi Thread

  1. #1401
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Posts
    743
    Quote Originally Posted by BenchZowner View Post
    You guys need to stop daydreaming.
    First of all, you people have to realize that both nVIDIA's and AMD's current base architectures (both the 5xxx series and the GTX 3xx series are based on previous designs: what AMD started with the 2900 series and nVIDIA with the 8800GTX) have reached, or are reaching, their ceiling.
    They won't be able to add more units to the cards to put out a refresh.
    If at all, nVIDIA might be able to add another 64 "SPs" and AMD about 160 "shader units".

    And those numbers would be barely viable even if TSMC or any other manufacturer gets the job done right at 40nm or even lower.

    Realistically, both nVIDIA & AMD have to come out with totally new architectures for their next-generation cards, and that's not a simple thing, so no: unless the 6xxx series is just a refresh of the current GPU, it won't be out before mid-2011.

    By the way, somebody has to develop a vB add-on called "hide Fermi related threads"; I'm really tired of some people: some flaming, some cheering, some doubting and some crying like poor babies.
    Grow up maybe ?
    There's an AMD roadmap somewhere around the news section that lists their 28nm next-gen part projected at late 2010 / early 2011. I think it's safe to assume they've been working on this new architecture for the last few years. Just normal product-cycle stuff.
    So no, he isn't exactly dreaming. Optimistic for it to come sooner? Sure.

  2. #1402
    Xtreme Addict
    Join Date
    Apr 2006
    Posts
    2,462
    Quote Originally Posted by BenchZowner View Post
    By the way, somebody has to develop a vB add-on called "hide Fermi related threads"; I'm really tired of some people: some flaming, some cheering, some doubting and some crying like poor babies.
    Grow up maybe ?
    No offense, but you should stop caring. Every time you stop by in this thread (which you say doesn't interest you at all) you complain about flamewars, fanboys, etc., even though the people actually participating in this thread aren't complaining at all.

    If it's getting on your nerves then just stay out. What's the problem?
    Notice any grammar or spelling mistakes? Feel free to correct me! Thanks

  3. #1403
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    Quote Originally Posted by FischOderAal View Post
    If it's getting on your nerves then just stay out. What's the problem?
    It's a daunting task to avoid a single thread without some form of filtering.

  4. #1404
    Xtreme Member
    Join Date
    Jan 2009
    Location
    Huyamba
    Posts
    316
    Quote Originally Posted by FischOderAal View Post
    For you. There aren't many who would take a slower yet more expensive single-GPU card over a cheaper yet faster dual-GPU card.

    If money was no problem for me I'd go with the first option. SLI/CF just plain sucks.
    I didn't say the price doesn't matter to "me"; it's just that when you qualify the performance of the cards and compare them, you should compare like with like: 1 vs 1 or 2 vs 2 GPUs. Is my English so bad that I have to repeat this simple idea every time? Sheesh.
    i7 950@4.05Ghz HeatKiller 3.0
    EVGA E762 EK WB | 12Gb OCZ3X1600LV6GK
    Razer Tarantula |Razer Imperator | SB X-Fi PCIe
    480GTX Tri SLi EK WBs | HAF X | Corsair AX1200
    ____________________________________________
    Loop1: Double_MCP655(EK Dual Top) - MoRa3Pro_4x180 - HK3.0 - EKFB_E762
    Loop2: Koolance_MCP655(EK Top) - HWLabsSR1_360 - EK_FC480GTX(3x)

  5. #1405
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by Teemax View Post
    Or they could simply beeline for HD6870...
    If AMD plans the release of the HD6870 one year after the HD5870, then the HD6870 would be as much a competitor for Fermi as Fermi would be for the HD5870.
    A 6xxx moniker would be a bit much for a shrink, but a 5890 moniker would suit a shrunk 5870 quite well. Even if it's not a shrink and hogs around 250W at 1.1-1.2GHz, it's quite a deal given that it should be much cheaper than the GTX 380.

    Also, FUD reported this; it's something I was thinking would be going on at ATi.

    ATi has a smaller core, which means it can push it to higher speeds, whereas Nvidia has a much bigger core that can't be pushed to very high speeds without crossing the TDP limit, IMO.

    Quote Originally Posted by BenchZowner View Post
    Realistically, both nVIDIA & AMD have to come out with totally new architectures for their next-generation cards, and that's not a simple thing, so no: unless the 6xxx series is just a refresh of the current GPU, it won't be out before mid-2011.
    Not really. ATi could actually put the 5970 in one chip, widen the 256-bit bus to 384-bit, and use faster GDDR5. But the 6xxx is supposed to be a new arch., straight from ATi's mouth. As for Nvidia, it's more of an evolution than anything else; the difference between what the G200 and the GF100 can do is huge. The R600 was a very good design from the start, its expansion into the R700 quite dull, and the same is true of Evergreen.

    But this does not mean that Evergreen can't do all the things Fermi does, because it very well can; the only real problems are 64-bit operations in FMA, cache flexibility, and available bandwidth.
    Last edited by ajaidev; 01-18-2010 at 06:03 AM.
    Coming Soon

  6. #1406
    Xtreme Addict
    Join Date
    Jan 2008
    Location
    Puerto Rico
    Posts
    1,374
    Worst NDA ever if you ask me!!!
    ░█▀▀ ░█▀█ ░█ ░█▀▀ ░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░█▀▀ ░█▀▀ ░█ ░█ ░░░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░▀▀▀ ░▀ ░░░▀ ░▀▀▀ ░░▀ ░░░▀░▀ ░▀ ░▀▀▀ ░

  7. #1407
    Xtreme Addict
    Join Date
    May 2004
    Posts
    1,755
    Quote Originally Posted by FischOderAal View Post
    No offense, but you should stop caring. Every time you stop by in this thread (which you say doesn't interest you at all) you complain about flamewars, fanboys, etc., even though the people actually participating in this thread aren't complaining at all.

    If it's getting on your nerves then just stay out. What's the problem?
    QFT!!!
    Crosshair IV Formula
    Phenom II X4 955 @ 3.7G
    6950~>6970 @ 900/1300
    4 x 2G Ballistix 1333 CL6
    C300 64G
    Corsair TX 850W
    CM HAF 932

  8. #1408
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    184
    Quote Originally Posted by Blacky View Post
    Worst NDA ever if you ask me!!!
    Wait for the next GF100 NDA!

    You will know about it even longer...

    And then come the disappointing benchmarks... but there will still be people who want to buy it... but they can't... because the street date will be under NDA, and variable too...

    Seriously, this green carrot is getting quite stale indeed.

  9. #1409
    Xtreme Enthusiast
    Join Date
    Jun 2005
    Posts
    960
    Quote Originally Posted by W1zzard View Post
    If NVIDIA is smart (they probably are) they put some spares on their GPU, which are basically extra pieces of hardware that can replace the pieces where defects are in the silicon. For example, you could imagine having a 5th GPC cluster that can replace one with defects. If you do the proper math you can compute spare designs that statistically increase your per-die yield, even though such measures increase the die area.
    And you think that only Nvidia can resort to that kind of technique?
    If one of them is doing something like that, you can bet the other is too.
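W1zzard's redundancy point can be sketched with a toy Poisson yield model. Every number here is invented for illustration (real defect densities, die areas and cluster counts are not public), and the model assumes defects land uniformly, which real silicon does not guarantee:

```python
import math

def yield_poisson(die_area_cm2, defects_per_cm2):
    """Classic Poisson yield model: probability a die has zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

def yield_with_spare(die_area_cm2, defects_per_cm2, n_clusters, n_spares):
    """Yield with spare clusters: the die is good as long as at least
    n_clusters of the (n_clusters + n_spares) clusters are defect-free.
    Spares enlarge the die, but tolerating a dead cluster can still win."""
    cluster_area = die_area_cm2 / n_clusters           # spares are same-sized
    total = n_clusters + n_spares                      # enlarged die
    p_good = math.exp(-cluster_area * defects_per_cm2)
    return sum(math.comb(total, k) * p_good ** k * (1 - p_good) ** (total - k)
               for k in range(n_clusters, total + 1))

# hypothetical numbers: ~5 cm^2 die, 0.4 defects/cm^2, 4 GPCs + 1 spare
print(yield_poisson(5.0, 0.4))           # ~0.135 without redundancy
print(yield_with_spare(5.0, 0.4, 4, 1))  # ~0.348 with one spare cluster
```

Even though the spare adds 25% die area in this toy setup, the defect-tolerant die yields far better, which is the statistical effect W1zzard describes.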

  10. #1410
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by ajaidev View Post
    Not really ATi can actually put 5970 in one chip and increase the 256bit lane to 384bit and use faster GDDR5.
    That's going to be even harder than Fermi to manufacture...

    As for a "1.2GHz 24/7 aircooled 5870/5890"... simply put no way.
    Ain't gonna happen.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  11. #1411
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by saaya View Post
    but you saw it with the overhead projects, not the 3 monitors with bezel, right?
    Nope. They were showing a car racing game (can't remember which one) on three 24" Acer 120Hz monitors.

    wha wha whaaaa? Didn't Jensen just tell the BBC in an interview that they started mass production of GF100 cards? How can they mass-produce something if the spec isn't finalized?
    Just because the heatsink and PCB isn't finalized doesn't mean the chips can't be produced.

    As for "mass production", if you ask five people at NVIDIA about actual production, two will say it is in mass production, two will say it isn't and the final person will just shrug their shoulders.



    Quote Originally Posted by saaya View Post
    mmhhh, sounds to me like it's not really a proper benchmark standard, though... they used the MS DX11 toolkit; well yeah, just like EVERY DX11 benchmark and game uses the MS DX11 toolkit... that doesn't mean it's an objective, unoptimized benchmark... in that case they should have compared Fermi to the GTX 285 like they did in the other tests...
    Unfortunately, there are no "objective benchmarks". However, developers do use the benchmarks within toolkits to validate their code, so good performance in those is a must.


    Quote Originally Posted by saaya View Post
    oh and how about HAWX, did they bench HAWX or did nvidia only offer slides for the HAWX perf?
    No, HAWX was a simple slide which was supposed to illustrate how the new ROP layout and design decreases overhead at higher levels of AA.

    Quote Originally Posted by Solus Corvus View Post
    AT gets 14fps more on a 5870 @ 2560x1600 in FC2. With SKYMTL's numbers at 2560x1600 fermi has a 39% lead over 5870. Sub AT's numbers for 5870 and it's a 6% lead.

    What a mess. I can't wait to see this in an independent reviewer's hands so we can get real apples-to-apples tests in a variety of games. I'm curious to see some 8xAA numbers as well.
    FC2 is very CPU-limited and I'm pretty sure they used either a 3.8GHz or 4GHz i7, which in my tests could equate to a 15-18% improvement in the Ranch Small benchmark over a chip clocked at 3.2GHz.
    Last edited by SKYMTL; 01-18-2010 at 06:47 AM.
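SKYMTL's CPU-limit argument can be illustrated with a toy model in which each frame is gated by whichever of the CPU or GPU side takes longer; the per-frame times below are invented purely to show the shape of the effect:

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # toy model: frame time is set by the slower of the two sides
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# hypothetical: the CPU needs 12 ms/frame at 3.2 GHz, the GPU 10 ms/frame
base       = fps(12.0, 10.0)              # CPU-bound: ~83 fps
faster_gpu = fps(12.0, 7.0)               # a 30% faster GPU changes nothing
faster_cpu = fps(12.0 * 3.2 / 3.8, 10.0)  # 3.8 GHz CPU, assuming perfect scaling
print(base, faster_gpu, faster_cpu)
```

With perfect clock scaling, the 3.2 to 3.8 GHz bump lands just under +19%, the same ballpark as the 15-18% SKYMTL measured in Ranch Small, while a faster GPU buys nothing.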

  12. #1412
    Xtreme Legend
    Join Date
    Jan 2003
    Location
    Stuttgart, Germany
    Posts
    929
    Quote Originally Posted by Piotrsama View Post
    And you think than only Nvidia can resort to that kind of techniques?
    If one of them is doing something like that, you can bet the other is too.
    I know for a fact that ATI does it; I just mentioned it because there is more to yields than just die size.

  13. #1413
    Registered User
    Join Date
    Dec 2009
    Location
    France
    Posts
    3
    It shows some courage from Nvidia to make such a chip, since it's really strong on micro-polygons and IQ, which is a good evolution IMO, and since shading evolution won't be as demanding in future tech (I mean, less than the big step we made over the past 5 years). Offset mapping will slowly be replaced by real displacement, and Fermi will be really good at this, but I think the HD 5870 will beat it in quite a few games (mostly old engine tech). Again, a really risky move for Nvidia, but I like this approach a lot. As always, real innovation comes at a cost somewhere.

  14. #1414
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    184
    Quote Originally Posted by SKYMTL View Post
    As for "mass production", if you ask five people at NVIDIA about actual production, two will say it is in mass production, two will say it isn't and the final person will just shrug their shoulders.
    I can tell you for a fact that MP doesn't start for at least another month.

  15. #1415
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Tim View Post
    Any comments on Charlie's article? The info discussed there sounds quite bad to me.
    He is correct about a lot of stuff IMO.
    Sure, Fermi will deliver, but at what price?
    20% higher gaming performance with a 60% bigger die and power consumption? And the manufacturing costs compared to the 5970...
    This is GeForce 5800 2.0 if you ask me.
    I blame the professional market and Nvidia trying to compete with CPUs due to not having their own.
    Quote Originally Posted by Mk View Post
    Well, if this is the GTX 360 then I'm impressed, but if it's the GTX 380 then meh; it's fine, but I expected more with all the propaganda around it.
    Oh, come on. This was a PR session, so obviously it was a top card... They brought the best they could offer, even if they had a single card manufactured that could push this performance, lol. Not like they are going to release it tomorrow.
    Last edited by zalbard; 01-18-2010 at 07:03 AM.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  16. #1416
    Xtreme Addict
    Join Date
    Jan 2008
    Location
    Puerto Rico
    Posts
    1,374
    Quote Originally Posted by STEvil View Post
    Me.
    +1!
    ░█▀▀ ░█▀█ ░█ ░█▀▀ ░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░█▀▀ ░█▀▀ ░█ ░█ ░░░░█▀▀ ░█▀█ ░█ ░█ ░░░
    ░▀▀▀ ░▀ ░░░▀ ░▀▀▀ ░░▀ ░░░▀░▀ ░▀ ░▀▀▀ ░

  17. #1417
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by W1zzard View Post
    sweclockers: According to sources at CNET Networks among chassis and power supply manufacturers, at least the top model is a really thirsty graphics card. The data suggest a power consumption of around 250 W - 25 percent more than the GeForce GTX 285.

    sounds as good as confirmed by fudo

    I measured a reference-design 5870 at stock clocks at 212W in Furmark. So if you extrapolate a 30% performance advantage of GF100 over the 5870, you get around 280W. And ATI did their homework regarding power consumption.
    So GF100 will consume over 300W in Furmark, then? It's not like ATI is showing unfair power consumption numbers while NVIDIA is being truly accurate in this department.
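W1zzard's extrapolation is straightforward to reproduce; the only inputs are his own 212 W measurement, the rumoured 30% performance lead, and the assumption that power scales roughly in proportion to performance:

```python
measured_5870_w = 212       # W1zzard's Furmark reading, reference 5870
assumed_lead    = 0.30      # rumoured GF100 advantage over the 5870

# assume power scales roughly with performance
est_gf100_w = measured_5870_w * (1 + assumed_lead)
print(round(est_gf100_w))   # 276, in line with the ~280 W figure quoted
```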
    Quote Originally Posted by BeyondSciFi View Post
    GF100

    Gpixels/s - 22.4
    Gtexels/s - 44.8
    Mtriangles/s - 2800
    Bandwidth (GB/s) - 214.7
    RAM - 1536MB

    Radeon 5870

    Gpixels/s - 27.2
    Gtexels/s - 68.0
    Mtriangles/s - 850
    Bandwidth (GB/s) - 143.1
    RAM - 1024MB

    Summary: the GF100 should be slower than the Radeon 5870 in most games, except where tessellation and/or physics (on the GPU) is used, and except at high resolutions with AA, maybe, probably. However, the GF100 will be faster for the HPC market as it has 716.8 double-precision GFLOPS, while the Radeon 5870 has 544 double-precision GFLOPS. This is probably why Nvidia has been touting the GF100's "compute" performance instead of its gaming performance.
    Hope that's not true. I expect it to be a bit faster than the 5870 after all. Nice geometry performance, though... But that won't really be needed if it can't cope with today's textures (not to mention uber-high-res texture packs!)
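BeyondSciFi's figures can largely be reproduced from the unit counts and clocks they imply; the GF100 numbers below use the rumoured 700 MHz core / 1400 MHz hot clock and the implied 32 ROPs, so treat them as assumptions, not confirmed specs:

```python
def gpixels(rops, core_ghz):            # pixel fill rate, Gpixels/s
    return rops * core_ghz

def gtexels(tmus, core_ghz):            # texture fill rate, Gtexels/s
    return tmus * core_ghz

def dp_gflops(alus, shader_ghz, dp_flops_per_alu_clock):
    return alus * shader_ghz * dp_flops_per_alu_clock

# GF100 (implied: 32 ROPs, 64 TMUs, 512 cores; DP FMA at half the SP
# rate, i.e. 1 DP flop per core per hot clock)
gf100_pix = gpixels(32, 0.700)          # 22.4
gf100_tex = gtexels(64, 0.700)          # 44.8
gf100_dp  = dp_gflops(512, 1.400, 1.0)  # 716.8

# Radeon 5870 (850 MHz, 32 ROPs, 80 TMUs, 1600 ALUs; DP at 1/5 the SP
# rate, and SP is 2 flops/ALU/clock, so 0.4 DP flops per ALU per clock)
r5870_pix = gpixels(32, 0.850)           # 27.2
r5870_tex = gtexels(80, 0.850)           # 68.0
r5870_dp  = dp_gflops(1600, 0.850, 0.4)  # 544.0
```

The bandwidth figures don't fall out as cleanly, since they depend on memory clocks that were still rumour at the time.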
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  18. #1418
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Quote Originally Posted by NCspecV81 View Post
    orly?

    Hmmm, can I say "epic fail" to the author, or would that be too rude?

  19. #1419
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by SKYMTL View Post
    FC2 is very CPU limited and I'm pretty sure they use either a 3.8Ghz or 4Ghz i7 which in my tests could equate a 15-18% improvement in the Ranch Small benchmark over a chip clocked at 3.2Ghz.
    He showed it; it's an AMD quad-core chip @ 3.3GHz.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  20. #1420
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    AnandTech mentioned that they don't use the Ranch demo to bench FC2, because it's not exactly the same every time.
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  21. #1421
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    You need to pay attention...
    They left A.I. (artificial intelligence) enabled, so the benchmark becomes CPU-limited and differences between differently clocked CPUs, or even CPUs from different makers, start to show.
    Considering that across a set of games the 5870 needs an i7 clocked at 3.7GHz to be free of CPU limits, there you have it.

    I know I'm going to have a hell of a time if what I saw and my other info is wrong, but here it comes:
    The GTX 380 will be nearing the 5970 in most games, and will have a significant advantage over the 5870.
    Regarding the price... well, if you're a realist and live under no illusions, then it's easy to say that it will be priced similarly to the 5970, normally 50 bucks lower or so.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  22. #1422
    Xtreme Addict
    Join Date
    Jul 2002
    Location
    [M] - Belgium
    Posts
    1,744
    Quote Originally Posted by BenchZowner View Post
    By the way, somebody has to develop a vB add-on called "hide Fermi related threads"; I'm really tired of some people: some flaming, some cheering, some doubting and some crying like poor babies.
    Grow up maybe ?
    if you are serious... FF+GreaseMonkey+Edit this script: http://userscripts.org/scripts/show/59016


    Belgium's #1 Hardware Review Site and OC-Team!

  23. #1423
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    "thermonuclear meltdown!", is the only and the latest tag on this tag-crazy-thread. .


    This looks really good, and if Nvidia is not bluffing, this could make my waiting worth it.
    If I'm not mistaken, the current info indicates that we are looking at the same performance as the HD5970 in a single card.
    Last edited by Sam_oslo; 01-18-2010 at 07:58 AM. Reason: Typos

  24. #1424
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by BenchZowner View Post
    You need to pay attention...
    They left A.I. (artificial intelligence) enabled, so the benchmark becomes CPU-limited and differences between differently clocked CPUs, or even CPUs from different makers, start to show.
    Considering that across a set of games the 5870 needs an i7 clocked at 3.7GHz to be free of CPU limits, there you have it.

    I know I'm going to have a hell of a time if what I saw and my other info is wrong, but here it comes:
    The GTX 380 will be nearing the 5970 in most games, and will have a significant advantage over the 5870.
    Regarding the price... well, if you're a realist and live under no illusions, then it's easy to say that it will be priced similarly to the 5970, normally 50 bucks lower or so.

    OK, I missed the A.I. thing...


    I agree with you on the second point... like I've said, it's a lot like the last series of cards.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  25. #1425
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by BenchZowner View Post
    You need to pay attention...
    They left A.I. (artificial intelligence) enabled, so the benchmark becomes CPU-limited and differences between differently clocked CPUs, or even CPUs from different makers, start to show.
    Considering that across a set of games the 5870 needs an i7 clocked at 3.7GHz to be free of CPU limits, there you have it.
    FINALLY someone hit the nail on the head....

    I just wish I had said that myself.

    However, it raises the question: if the HD 5870 is CPU-bound in the tests, why isn't the GF100 as well?
    Last edited by SKYMTL; 01-18-2010 at 08:14 AM.

