
Thread: The official GT300/Fermi Thread

  1. #301
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Location
    Gold Coast, Australia
    Posts
    542
    Quote Originally Posted by Chumbucket843 View Post
    there are 4 papers on nvidia's website.
    pg 19 of "fermi looking beyond graphics"
    That doesn't sound very specific to me...
    Intel Core 2 Extreme QX9650
    Gigabyte ep45-ud3lr
    Sapphire HD6970
    Team Xtreem 2*1gb 1300
    1TB Western Digital

  2. #302
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    At some point, if not now, features intended to boost compute performance may compromise the chip’s competitive position as an affordable graphics processor.
    Whatever, as long as it's still faster than the 5870, it will be entrenched as the fastest gaming GPU and the fastest GPGPU.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  3. #303
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by 003 View Post
    Whatever, as long as it's still faster than the 5870, it will be entrenched as the fastest gaming GPU and the fastest GPGPU.
    ...until the HD 5890 arrives, that is, if ATI can work wonders. If Nvidia is slow and ATI is fast, ATI could hold the HD 5890 up its sleeve and pull it out just to make GF100 look ugly.

  4. #304
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Nvidia
    At some point, if not now, features intended to boost compute performance may compromise the chip’s competitive position as an affordable graphics processor. At that juncture, the architectures may have to diverge — especially if the professional market grows larger than the consumer market.
    That doesn't sound like they are doing that now. They are just stating that it may happen in the future but we will see.
    Edit- Thanks. I only knew about the one whitepaper.
    Last edited by LordEC911; 10-02-2009 at 04:19 PM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  5. #305
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by Calmatory View Post
    ...until HD5890 arrrives, that is if ATI can work wonders. If Nvidia is being slow, and ATI being fast.. ATI could hold the H5890 in it's sleeve and pull it out just to make GF100 look ugly.
    Somehow I don't think a 5890 is ATi's answer to Fermi; I'd think they're more hopeful that the 5870 X2 will handle that task rather than the 5890.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  6. #306
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by DilTech View Post
    Somehow I don't think a 5890 is ATi's answer to fermi, I'd think they're more hopeful for the 5870x2 to handle that task than the 5890.
    True, though it depends on how much ATI can squeeze out of the HD 5890. The HD 5870 X2 will definitely be the real answer that beats Nvidia's single-card offering, but business-wise it might not be that great a product due to high costs.

  7. #307
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    We'll know for sure where everything falls into place when Nvidia launches. We can guesstimate 5870 X2 performance by looking at CrossFire 5870 results, assuming ATi doesn't have to downclock at all for the 5870 X2, which at present we do not know.

  8. #308
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Calmatory View Post
    ...until HD5890 arrrives, that is if ATI can work wonders. If Nvidia is being slow, and ATI being fast.. ATI could hold the H5890 in it's sleeve and pull it out just to make GF100 look ugly.
    GT300's lead over the 5870 is not going to be so negligible that a 5890 could pull ahead. There will be a 5870 X2, but then again there will also be a dual-GPU GT300.

  9. #309
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    http://www.dailytech.com/article.aspx?newsid=16401

    ORNL to Use NVIDIA Fermi to Build Next Gen Super Computer

    NVIDIA announced its new Fermi architecture at its GPU Technology Conference recently. The new architecture was designed from the ground up to enable a new level of supercomputing using GPUs rather than CPUs. At the conference, Oak Ridge National Laboratory (ORNL) associate lab director for Computing and Computational Sciences, Jeff Nichols, announced that ORNL would be building a next generation supercomputer using the Fermi architecture.

    The new supercomputer is expected to be ten times faster than today's fastest supercomputer. Nichols said that Fermi would enable substantial scientific breakthroughs that would have been impossible without the technology.

    Looks like NV has customers already for Fermi.

  10. #310
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by 003 View Post
    GT300 is not going to be faster than the 5870 by such a negligible amount that a 5890 would pull ahead. There will be a 5870X2, but then again there will also be a dual GPU GT300.
    We don't even know how fast the GT300 is, much less how fast a 5890 will be, and you are already calling this one for NV? Lol...ok.

  11. #311
    Registered User
    Join Date
    Dec 2008
    Posts
    26
    http://www.ixbt.com/news/all/index.shtml?12/46/02 (translated). At the end of the article it says the card in Huang's hands was a fake.

  12. #312
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid, Spain
    Posts
    169
    I'd kill for just a number from any bench leakage now
    WORKSTATION || TJ10B-W | i7-3930K C2 | 4x8GB DDR3-2400 CL10 | 2xGTX TITAN 6GB SLI | P1000W || 30" 2560x1600@60hz
    HTPC || GD08B | i7-920 D0 | 3x4GB 2000 CL9 | HD5870 1GB | X25-M 80GB | X-750 || 75" 1920x1080@4x200hz
    NOTEBOOK || P170EM | i7-3820QM | 2x8GB DDR3 1600 CL9 | GTX680M 4GB | HyperX 3K 240GB || 17,3" 1920x1080@60hz
    ULTRABOOK || W130EW | i7-3620QM | 2x8GB DDR3 1600 CL9 | HD4000 | HyperX 3K 240GB || 13,3" 1366x768@60hz

  13. #313
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    449
    Quote Originally Posted by 003 View Post
    GT300 is not going to be faster than the 5870 by such a negligible amount that a 5890 would pull ahead. There will be a 5870X2, but then again there will also be a dual GPU GT300.
    If Nvidia is already planning a dual-G300 GPU solution, this should indicate that a single G300 will not completely obliterate the HD 5870 in real-world gaming. I'm guessing 15-20% faster on average, with highs of 30% in some games.

    Also, 40 nm and a 550 mm² die size?
    A G300 X2 dual-die solution will either have to be dual-PCB or a single PCB longer than even the new 5870 X2. The GTX 295 v2 (single PCB) was hard enough despite the die shrink to 55 nm (576 mm² down to ~452 mm²). How Nvidia is going to put two 550 mm² dies together into a single-card solution is not something I want to think about on a technical level.
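    For a rough sense of why a 550 mm² die is painful: a sketch only, using the rumored 550 mm² figure from above, the known ~334 mm² RV870 die, and the classic gross-die formula, which counts die candidates and ignores yield entirely.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-die estimate: wafer area over die area,
    minus a correction term for partial dies at the wafer edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(550))  # ~100 die candidates per 300 mm wafer
print(gross_dies_per_wafer(334))  # ~175 for an RV870-sized die
```

    Before a single defect is counted, the smaller die gets roughly 75% more candidates per wafer, which is the cost gap people keep pointing at.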

    Quote Originally Posted by DilTech View Post
    Somehow I don't think a 5890 is ATi's answer to fermi, I'd think they're more hopeful for the 5870x2 to handle that task than the 5890.
    The 5870 reference PCB is pretty much designed with a higher-power-draw (higher-clocked) chip in mind.
    Last edited by LiquidReactor; 10-02-2009 at 07:02 PM.
    --lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
    -- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
    -- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
    - GTX 480 ( 875/1750/928)
    - HAF 932 - Antec TPQ 1200 -- Crucial C300 128Gb boot --
    Primary Monitor - Samsung T260

  14. #314
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by LiquidReactor View Post
    If Nvidia is already on planning on making a dual G300 die gpu solution this should indicate that a single g300 will not completely obliterate hd 5870 in real world gaming. I'm guessing average 15-20% with highs of 30% faster in some games.

    Also 40nM and 550mm^2 die size?
    G300 x2 dual die solution will either have to be dual pcb or a single pcb thats longer then even the new 5870x2. GTX295v2 (single pcb) was hard enough despite the die shrink going to 55nm (576mm^2 >>> ~452mm^2). How nvidia is going to put two 550mm^2 dies together to form a single gpu solution is not something I want to think about on a technical level.
    No. Nvidia doesn't want their GX2 card out late like the 295 was; they want the fastest graphics card so the 5870 X2 doesn't steal their thunder. The die is big because there are a lot of double-precision units, and I wouldn't laugh at 3 billion transistors on a single chip; that's not an easy thing to do. The reason the 295 was a dual-PCB card is the memory bus size, and that probably won't happen this time because the bus is smaller.

  15. #315
    Xtreme Member
    Join Date
    Jan 2006
    Posts
    209
    I think we are missing the point here, just as most people missed the point of the GTX 280 vs. Radeon 4870 battle. We shouldn't care too much about the configuration of GPU cores, as this is beside the point. But more on that later; first, let us do a quick numerical recap.

    The GTX 280 has 933 GFLOPS of overall computing power in single precision.

    The Radeon 4870 has 1200 GFLOPS of overall computing power in single precision.

    So, going by GFLOPS alone, the Radeon 4870 should be the faster card in pretty much all gaming situations save the ones specially coded for Nvidia's architecture (and barring driver errors). As we have seen from the dozens of reviews of these cards so far, in the vast majority of games, including synthetic gaming benchmarks, the situation is reversed, with the GTX 280 being the most consistent winner. But how is this possible? The more or less obvious answer is that gaming performance is not solely based on raw GFLOPS output.

    So if GFLOPS is not the number we should be looking at, then what? I'm sure at least some of my colleagues here would point to actual game benchmarks. This is perfectly fine if, and only if, you want to compare GPU power in JUST that specific situation, NOT to take those results as OVERALL levels of performance. The reason is simple: there are many different game engines out there. Given this multiplicity, even an average over, say, the 10 or 20 most popular current games would still not be a totally accurate representation of the differing levels of performance between the GPUs. As some of us might know, in statistics the average of a set is an artificial number which may not represent any member of that set; there is too much individual variation to take a simple average over an arbitrarily chosen group of games and call the result absolute.

    Thankfully, there is a way of comparing GPU output that is not arbitrary or subjective: comparing another group of numbers, namely the texture and pixel fillrates. Here is how the GTX 280 and Radeon 4870 stack up:

    The GTX 280 has a texture fillrate of 48.1 GT/s and a pixel fillrate of 19.2 GP/s.

    The Radeon 4870 has a texture fillrate of 30.0 GT/s and a pixel fillrate of 12.0 GP/s.

    As we can see, the GTX 280 has a significant output advantage over the Radeon 4870. This advantage seems to manifest itself as higher gaming performance in most game engines, as reflected in various benchmarks. Note that how a certain output is produced is beside the point: the number of shaders, TMUs and ROPs and the speeds they run at are JUST the means of producing that output, the texture and pixel fillrates. Think of it in terms of internal combustion engines: the number of cylinders, the valves, displacement, RPM and so on are just means of getting the desired results, i.e. torque and horsepower. And to continue on the relevant note of next-generation GPUs, let's compare the flagships of the GT300 (probably the GTX 380) and R800 (now known as the Radeon 5870) lines.

    The Radeon 5870 has a texture fillrate of 68.0 GT/s and a pixel fillrate of 27.2 GP/s.

    The GTX 380 has a texture fillrate of 83.2 GT/s and a pixel fillrate of 31.2 GP/s.
    (If the rumored specs from Wikipedia are to be believed: http://en.wikipedia.org/wiki/Compari...rce_300_Series)

    So, it seems the GTX 380 may be faster than the Radeon 5870 (with mature drivers, if not earlier). The GTX 380 (using the rumored specs) has 22% more texture fillrate and about 15% more pixel fillrate. Given that the difference is not extreme, I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks, but not necessarily all.
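    The fillrate figures above are just unit counts multiplied by core clock; a quick sketch using the known GTX 280 and HD 5870 unit counts and clocks reproduces them:

```python
def fillrates(tmus, rops, core_clock_mhz):
    """Texture fillrate (GT/s) and pixel fillrate (GP/s):
    unit count times core clock."""
    return tmus * core_clock_mhz / 1000, rops * core_clock_mhz / 1000

# GTX 280: 80 TMUs, 32 ROPs at a 602 MHz core clock
print(fillrates(80, 32, 602))  # (48.16, 19.264) -> the 48.1 / 19.2 quoted above
# HD 5870: 80 TMUs, 32 ROPs at 850 MHz
print(fillrates(80, 32, 850))  # (68.0, 27.2)
```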
    Last edited by BeyondSciFi; 10-03-2009 at 06:39 AM.

    Accept nothing, challenge everything. ~ Anonymous

  16. #316
    Xtreme Member
    Join Date
    Jun 2008
    Location
    New Jersey
    Posts
    208
    Quote Originally Posted by BeyondSciFi View Post
    I think we are missing the point here, just as most people missed the point of the GTX 280 vs. Radeon 4870 battle. *snip*

    ...So, it seems the GTX 380 may be faster than the Radeon 5870 (with mature drivers, if not earlier). The GTX 380 (using the rumored specs) has 22% more texture fillrate and 14% more pixel fillrate. Given that the difference in performance is not too extreme, I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks but not necessarily all.
    I would have to agree; good job with the speculation and research. Looking at the rumored specs and comparing the percentage increases from both series (ATI 4000 and GTX 200), the GTX 380 should have a pretty handy lead (assuming few architectural changes). Unfortunately, the architecture has changed, so that assumption will likely be inaccurate.

  17. #317
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    " I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks but not necessarily all." .... that would suck for nvidia, they NEED To pull out by quite a bit if they trade blows even in 1 or 2 games.. then its in target for 5890... and assuming ati (having smaller dies) will be easier/cheaper to make, there going to be better price/perf again

    tbh i dont think its nvidia's fault.. as much as tmsc's... im pretty sure if tmsc didnt heave yield problems nvidia would be better equipped right now. but same could be said for ATI? but meh
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  18. #318
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    SF, CA
    Posts
    1,294
    Quote Originally Posted by Jamesrt2004 View Post
    tbh i dont think its nvidia's fault.. as much as tmsc's... im pretty sure if tmsc didnt heave yield problems nvidia would be better equipped right now. but same could be said for ATI? but meh
    At this point it's clear that GlobalFoundries was the right move for AMD, and it's almost proven as a general rule that market-leading semiconductor companies should control their own processes.
    [PURE] AWESOME

  19. #319
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by russian boy View Post
    http://www.ixbt.com/news/all/index.shtml?12/46/02 Translated At the end of this article it says that card in Huang hands was fake
    Yes, it is a fake:

    http://www.fudzilla.com/content/view/15798/1/

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  20. #320
    Xtreme Enthusiast
    Join Date
    Jan 2007
    Posts
    579
    There wouldn't be a dual GT300 unless Nvidia knew the 5870 X2 will be faster than the single GT300.

  21. #321
    Xtreme Addict
    Join Date
    Dec 2008
    Location
    Sweden, Linköping
    Posts
    2,034
    Quote Originally Posted by Jamesrt2004 View Post
    " I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks but not necessarily all." .... that would suck for nvidia, they NEED To pull out by quite a bit if they trade blows even in 1 or 2 games.. then its in target for 5890... and assuming ati (having smaller dies) will be easier/cheaper to make, there going to be better price/perf again

    tbh i dont think its nvidia's fault.. as much as tmsc's... im pretty sure if tmsc didnt heave yield problems nvidia would be better equipped right now. but same could be said for ATI? but meh
    I've thought about AMD having smaller dies for their GPUs, but I don't think it necessarily means better yields...

    While Nvidia has the uber-huge die, they have lower clock frequencies; looking at AMD we see the opposite, a small die and really high frequencies. I might actually go as far as saying the yield difference between AMD's and Nvidia's top models is negligible?
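    For what it's worth, the textbook first-order (Poisson) yield model says die area dominates defect-limited yield; clock binning, which is the point above, is a separate axis the model ignores. The defect density below is made up purely for illustration, not real TSMC 40 nm data:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """First-order Poisson yield model: Y = exp(-A * D0),
    with die area A in cm^2 and defect density D0 in defects/cm^2."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

# Illustrative D0 of 0.5 defects/cm^2 (assumed for the sketch)
print(round(poisson_yield(550, 0.5), 3))  # 0.064 -> ~6% defect-free dies
print(round(poisson_yield(334, 0.5), 3))  # 0.188 -> ~19%
```

    At the same defect density the big die yields roughly a third as many perfect dies, though redundancy and partial-die salvage (GTX 260 style) soften this in practice.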
    SweClockers.com

    CPU: Phenom II X4 955BE
    Clock: 4200MHz 1.4375v
    Memory: Dominator GT 2x2GB 1600MHz 6-6-6-20 1.65v
    Motherboard: ASUS Crosshair IV Formula
    GPU: HD 5770

  22. #322
    Banned
    Join Date
    Feb 2009
    Posts
    165
    Quote Originally Posted by malik22 View Post
    there wouldn be dual gt300 unless nvidia know the 5870x2 will be faster then the single gt300.
    I think we pretty much know the 5870 X2 will be faster just by specs alone.

    Judging by the revealed specs and BeyondSciFi's post, we can assume this is going to be a repeat of the previous generation's performance (GTX 260 = 4870; GTX 280 = 20-30% faster than the 4870).

    Yes, there are architectural changes in GT300; however, from what's been revealed, most of them are for non-gaming applications. The gaming side of it is almost like the 5800 series, double everything (though in the GTX 380's case, not everything is doubled).

  23. #323
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Vit^pr0n View Post
    Though in the GTX380's case, not everything is doubled
    But don't forget: RV870 has had its ROPs and TMUs doubled, to 32 and 80 respectively.

    GT200 already has 32 ROPs and 80 TMUs. GT300 will have 48 ROPs and 128 TMUs, an SP count increased by 2.13x, and improved SP efficiency in the process.
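    Putting those unit counts side by side (the GT300 figures are still rumored; the RV770 and GT200 numbers are known):

```python
specs = {
    "GT200": {"sps": 240,  "tmus": 80,  "rops": 32},
    "GT300": {"sps": 512,  "tmus": 128, "rops": 48},  # rumored
    "RV770": {"sps": 800,  "tmus": 40,  "rops": 16},
    "RV870": {"sps": 1600, "tmus": 80,  "rops": 32},
}

def scaling(old, new):
    """Per-unit scaling factor from one generation to the next."""
    return {k: round(specs[new][k] / specs[old][k], 2) for k in specs[old]}

print(scaling("GT200", "GT300"))  # {'sps': 2.13, 'tmus': 1.6, 'rops': 1.5}
print(scaling("RV770", "RV870"))  # {'sps': 2.0, 'tmus': 2.0, 'rops': 2.0}
```

    ATI doubled everything; the rumored GT300 scales its units unevenly, which is why raw "double everything" comparisons between the two don't line up.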

  24. #324
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by malik22 View Post
    there wouldn be dual gt300 unless nvidia know the 5870x2 will be faster then the single gt300.
    There won't be a dual GF100 anyway due to power usage; maybe a dual GF100b, but that's a long way off.

  25. #325
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Helloworld_98 View Post
    there won't be a dual GF100 anyway due to power usage, maybe dual GF100b but that's a long way off.
    No, Nvidia has already stated there will be. It will most likely be dual-PCB, and there's nothing wrong with that; the 7950 GX2 and the original GTX 295 were both dual-PCB as well.

    If power is an issue (which it shouldn't be), there is always the possibility of three power connectors rather than two, or of slightly scaling down the chip, as with the GTX 295.
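    On the connector arithmetic: the PCIe spec budgets 75 W from the slot, 75 W per 6-pin plug and 150 W per 8-pin plug, so a quick sketch shows where the usual ceilings come from:

```python
# PCIe power budgets in watts, per the card electromechanical spec
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

def board_power_ceiling(aux_connectors):
    """In-spec board power: slot budget plus each auxiliary connector."""
    return SLOT + sum(aux_connectors)

print(board_power_ceiling([SIX_PIN, EIGHT_PIN]))             # 300 W: GTX 295-style layout
print(board_power_ceiling([EIGHT_PIN, EIGHT_PIN, SIX_PIN]))  # 450 W with a third connector
```

    A dual-GPU card only fits the common 6-pin + 8-pin layout if each scaled-down chip stays under roughly 150 W, which is exactly why the GTX 295 used cut-down GT200s.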
