
Thread: The Fermi Thread - Part 3

  1. #2676
    Xtremely Bad Overclocker
    Join Date
    Jan 2005
    Location
    East Blue
    Posts
    3,596
    Does anybody already have watercooled impressions of these cards (avoiding the bad f-word - beware of dave )? It's my last hope that there is a way to have fun with these cards - if watercooling works well and you can overclock it a bit, that might be a last chance. For all-day use it was out for me when I saw the power consumption - I'd prefer to stick with my GTX 285 system, which has 275W peak consumption in total (entire system, sound and monitor while folding). I would easily double that with .... -
    | '12 IvyBridge - "ticks different"... | AwardFabrik IvyBridge round I by SoF | AwardFabrik IvyBridge round II by angoholic & stummerwinter
    | '11 The SandyBridge madness... | AwardFabrik / Team LDK OC-Season 2011/2012 Opening Event
    | '10 Gulftown LaunchDay OC round up @ASUS RIIE | 3DM05 2x GPU WR LIVE @Cebit 2010 @ASUS MIIIE | SandyBridge arrived @ASUS P8P67

    | '09 Foxconn Avenger | E8600 | Foxconn A79A-S | Phenom II 940 BE | LaunchDay Phenom II OC round up
    | '08 7.438s 1m LN2 | AMD 1m WR LN2 | 2nd AOCM | Phenom II teasing
    | '07 100% E2140 | 106.5% E2160 | 100% E4500 | 103% E4400 | 5508 MHZ E6850 | 7250 MHZ P4 641 126.5% by SoF and AwardFabrik Crew all on Gigabyte DS3P c? and LN2...
    | '06 3800+ X2 Manchester 0531TPEW noHS 3201MHZ c? | 3200+ Venice noHS 3279MHZ c? | Opteron 148 0536CABYE 3405MHZ c? all on Gigabyte K8NXP-SLI compressorcooled

    | '05 3500+[NC], 3000+[W], 2x 3200+[W], 3500+[NC], 3200+[V] 0516GPDW

    Quote Originally Posted by saaya
    sof pulled a fermi on all of us !!!

  2. #2677
    Xtreme Addict
    Join Date
    Aug 2002
    Posts
    1,202
    I see a lot of unhappy folk, mostly ATi boys/nVidia haters. I've got the card and I can tell you it doesn't get that noisy unless you run FurMark (90C), but under normal load, i.e. while playing games, the card usually hovers between 70-75C and the fan is definitely a lot quieter than the 4870X2/5970 or even the 4890.

    My reference card can do 825/2000 easily on stock cooling and stock vGPU, without any extra fan blowing over it in my Cosmos S.

    Default 3DMark06 run at 825/2000.


  3. #2678
    Xtreme Enthusiast
    Join Date
    Mar 2010
    Location
    Istanbul
    Posts
    606
    Quote Originally Posted by QuadDamage View Post
    I see a lot of unhappy folk, mostly ATi boys/nVidia haters. I've got the card and I can tell you it doesn't get that noisy unless you run FurMark (90C), but under normal load, i.e. while playing games, the card usually hovers between 70-75C and the fan is definitely a lot quieter than the 4870X2/5970 or even the 4890.

    My reference card can do 825/2000 easily on stock cooling and stock vGPU, without any extra fan blowing over it in my Cosmos S.

    Default 3DMark06 run at 825/2000.

    Well said. Can you try running higher memory clocks (I heard it has 0.4 ns chips, the same as the 5870s) while the GPU clock stays at stock speed?

  4. #2679
    Xtreme Addict
    Join Date
    Aug 2002
    Posts
    1,202
    The card crashes at 875MHz almost instantly in '06; at 850MHz it crashes in the second test, but it should do 835/840MHz on the core. I haven't pushed the memory yet. I'll do that now.

  5. #2680
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Potosi, Missouri
    Posts
    2,296
    A shame that for benchmarking it is really no faster than Crossfired 4870s.


    Quote Originally Posted by QuadDamage View Post
    I see a lot of unhappy folk, mostly ATi boys/nVidia haters. I've got the card and I can tell you it doesn't get that noisy unless you run FurMark (90C), but under normal load, i.e. while playing games, the card usually hovers between 70-75C and the fan is definitely a lot quieter than the 4870X2/5970 or even the 4890.

    My reference card can do 825/2000 easily on stock cooling and stock vGPU, without any extra fan blowing over it in my Cosmos S.

    Default 3DMark06 run at 825/2000.


  6. #2681
    Xtreme Member
    Join Date
    Nov 2006
    Location
    Brazil
    Posts
    257
    Quote Originally Posted by QuadDamage View Post
    I see a lot of unhappy folk, mostly ATi boys/nVidia haters. I've got the card and I can tell you it doesn't get that noisy unless you run FurMark (90C), but under normal load, i.e. while playing games, the card usually hovers between 70-75C and the fan is definitely a lot quieter than the 4870X2/5970 or even the 4890.

    My reference card can do 825/2000 easily on stock cooling and stock vGPU, without any extra fan blowing over it in my Cosmos S.

    Default 3DMark06 run at 825/2000.

    Using those clocks, please run Vantage Extreme or Crysis Warhead with 4x AA at a high resolution.

    If you could also measure power consumption and temperatures, that would be nice.

  7. #2682
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Germany
    Posts
    1,592
    Quote Originally Posted by QuadDamage View Post
    The card crashes at 875MHz almost instantly in '06; at 850MHz it crashes in the second test, but it should do 835/840MHz on the core. I haven't pushed the memory yet. I'll do that now.
    What brand did you get?

  8. #2683
    Xtreme Addict
    Join Date
    Aug 2002
    Posts
    1,202
    nVidia reference board.

    EDIT : Here's the stock, untouched GTX480 BIOS. NiBiTor doesn't seem to work, but maybe someone can figure out a way to mod it.

    http://www.megaupload.com/?d=1SIHTCTP

    edit2:

    Vantage default at 825/2000 with the highest recorded temps.

    Last edited by QuadDamage; 03-28-2010 at 05:56 AM.

  9. #2684
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Macadamia View Post
    IMO Fermi's scalability potential is upwards, not downwards.
    Should be decent both ways. And we won't see an example of scalability upwards for a looooong time...
    Quote Originally Posted by annihilat0r View Post
    I thought the best review was Anand's.
    Same. No offence to anybody else.
    Quote Originally Posted by Blacky View Post
    Fermi reminds me of something: the rebirth of the FX 5800
    Hahaha! I've never seen that, hilarious!
    Last edited by zalbard; 03-28-2010 at 05:42 AM.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  10. #2685
    Xtreme Member
    Join Date
    Nov 2006
    Location
    Brazil
    Posts
    257
    Quote Originally Posted by QuadDamage View Post
    nVidia reference board.

    EDIT : Here's the stock, untouched GTX480 BIOS. NiBiTor doesn't seem to work, but maybe someone can figure out a way to mod it.

    http://www.megaupload.com/?d=1SIHTCTP

    edit2:

    Vantage default at 825/2000 with the highest recorded temps.

    Nice! Cheers!

    Another thing, can you post a screenshot of device manager showing which IRQ number your GTX 480 is using, like the one below:



    I'm curious whether NVIDIA has finally programmed the driver to use message-signaled interrupts (MSI) instead of pin-based interrupts.
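    (A rough way to check that without a screenshot, at least on a Linux box: /proc/interrupts labels each interrupt line as PCI-MSI or IO-APIC, so grepping it for the nvidia driver shows which mode is in use. On Windows, a negative IRQ number in Device Manager is the usual sign of MSI. The snippet below is only a sketch of that check, not something posted in the thread.)

    Code:
    # Rough check (Linux): is the nvidia driver's interrupt registered as
    # PCI-MSI, or as a pin-based IO-APIC line, according to /proc/interrupts?
    def nvidia_interrupt_lines(path="/proc/interrupts"):
        with open(path) as f:
            return [line.rstrip() for line in f if "nvidia" in line.lower()]

    if __name__ == "__main__":
        lines = nvidia_interrupt_lines()
        if not lines:
            print("No interrupt line mentions the nvidia driver.")
        for line in lines:
            kind = "MSI" if "MSI" in line else "pin-based (IO-APIC)"
            print(f"{kind}: {line}")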
    Last edited by japamd; 03-28-2010 at 06:09 AM.

  11. #2686
    Xtreme Member
    Join Date
    Apr 2008
    Posts
    239
    Grillforce GTX480 FTW!

  12. #2687
    Xtreme Addict
    Join Date
    Apr 2007
    Location
    canada
    Posts
    1,886
    Epic post... keep those Fermi jokes coming, I love them to death.

  13. #2688
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by jaredpace View Post


    [IMG]http://i42.tinypic.com/15ez287.jpg[/IMG]
    Awesome...

    I'm gonna love mine! Looks tasty and well-done!! (ooh bad pun!)

  14. #2689
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631


    Coming Soon

  15. #2690
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207

    Oh Man!!

    Quote Originally Posted by Behemot View Post
    This is nice manipulation of the facts, you know? If you say Fermi is completely new, then you must say Evergreen is a completely new architecture, too. AMD did the major changes with the RV7x0 generation (memory hub and other stuff), NVIDIA with GF10x (cache and other stuff).

    But the truth is, the shaders and other parts which have a direct impact on 3D performance have stayed the same since R600/G80. So this talk about "hard to make drivers especially for GF100" is total nonsense.

    If they want to make a special driver, nobody is holding them back. It is easy enough to name the DLL libraries differently and copy in the different ones if GF100 is detected, isn't it?
    Evergreen is completely new? You can't be serious; even AMD says their completely new generation of cards comes after R8xx. I don't think anyone here will agree with you that the changes from R7xx to R8xx are anywhere near as great as the changes from G200 to Fermi.

    Evergreen is more of a testament to the scalability of the R6xx architecture. They use similar ways to obtain performance, which is why AMD dropped new-driver support for anything pre-R600 last year. The fact that the recent improvements in the 10.3 drivers lifted R7xx performance as well shows this.

    Fermi's shaders are far different from G80 tech. It's their first move into DirectX 11 from what was mostly a DirectX 10 design. The jump from DirectX 10.1 to 11 is a lot less steep than the jump from 10 to 11.

    Quote Originally Posted by LordEC911 View Post
    So you are saying Nvidia had working GF100 silicon in May? I'm a Charlie supporter when his sources/rumors seem to support info I heard from a different source. Nothing more or less. If you want to look back, I actually disagreed with Charlie quite a few times over the last year or so.

    You also missed what I am trying to say... Evergreen is a new architecture, in the sense of drivers, the same as Fermi is. Did AMD/ATi have working silicon before their September launch? More than likely. Did Nvidia have working silicon before the end of 2009? More than likely; I never said any different. The point is, to say that AMD/ATi should have already optimized the performance of Evergreen before its launch, or shortly thereafter, is wrong. It will take months to correct and optimize, sort of like how it took 6+ months for G80 to get stable drivers in certain games (though not the same).

    I think the SLI driver performance shows how much time Nvidia really had to prepare GF100. The performance scaling is quite good and somewhat surprising, though that might be only my opinion.

    So Nvidia dropping support for pre-G200 is of no consequence to you? We will see...
    Man you didn't get my point and your original point did not address my first post at all, hence my explanation.

    I will make it as simple as possible, because we are honestly agreeing on the same thing. You are saying that it can take a lot of work to get optimizations and performance out of a new card, and that it is not easy. Right? R8xx has substantial changes that might take months to take advantage of, hence the recent driver improvements which bring up performance.

    I am saying the same thing. It takes time to bring out performance from new hardware.

    So I will make this as clear and as streamlined as possible.

    R8xx is a new card with some architectural changes to make it DirectX 11 capable. The most important part of DirectX 11, a tessellation engine, was already part of their previous architecture. This is not a new architecture, however, something AMD has already admitted.

    But the other changes needed for DirectX 11 still require work, hence the added driver effort needed to let the card perform at optimum levels, and hence the six months of work that have paid off to bring performance to its current level.

    Let's look at NV's situation.

    GF100 is completely new; working silicon was made in late 2009, and it has vast changes over the G80/G200-based architecture. This is officially NV's new architecture.

    Making drivers that perform well with this new architecture requires working silicon, so NV has only had 4 months to work on this project.

    The starting point on the way to that final goal (optimized drivers) is a lot farther back, because the current drivers they have were written for a previous architecture that was a lot different. As a result, it will take more time and effort compared to the AMD situation.

    This leads to my point towards ajaidev:
    it will take longer than 4 months to get good drivers out of Fermi if it takes 6 months to get optimized drivers out of Evergreen.

    Especially when you take the following points into consideration.
    Fermi is a totally new architecture; Evergreen is not.
    AMD likely had working silicon longer than NV did for their latest and greatest.
    Driver optimizations aimed at Fermi are likely to be more detrimental to pre-Fermi hardware, which NV can't ignore because it is the vast majority of its current base.

    Additionally, to clarify my objectives and allegiances: I do not plan on getting a Fermi-based card at the present time. To justify the power consumption, they need at least 15 percent more performance than what we currently see. This might come with later drivers, but I do not make purchases based on potential future driver performance. However, the likelihood of these improvements is high given the newness of Fermi and the time available to prepare the current drivers.

    I did not get a 5xxx card because a 40-50 percent improvement over the previous generation is not enough to justify upgrading cards just over a year old, especially since I don't game that much. Additionally, I am not a graphics snob with a superiority complex over consoles. With my current solution I would need to purchase dual 5870s, as even a 5970 really doesn't add much to the performance equation (although likely more consistency), which doesn't make sense considering the cost and how much I game. Honestly, nobody games enough to justify 900 dollars (even more, because I watercool) on graphics cards every year; good games with great graphics just don't appear often enough.

    Fermi with, say, an extra 20% performance on top of its current lead would mean a card that is 80 percent faster than the previous generation (roughly a 50% lead multiplied by a further 20%). Something like that might justify an upgrade, but at the present time, no, not even close, especially with the power consumption.
    Last edited by tajoh111; 03-28-2010 at 08:57 AM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  16. #2691
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Sn0wm@n View Post
    Anyone else find it odd that R600 got beaten down by G80... yet a couple of years down the line it's giving a fair fight to Nvidia's next-gen arch?
    Arch vs. arch? Not a valid comparison. Check out the GTX 480 running tessellation 3x faster; ATi got beaten at their own demo: http://www.hardware.fr/articles/787-...e-gtx-480.html

    The loss of power efficiency comes from process yields. They can't bin chips for the 480 at the clocks they would want and still launch at a reasonable date or price.

  17. #2692
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Computer history is whimsical and key pivotal moments seem to happen in August.
    2002: 9700pro came out. $399. Same 100fps in Quake3 as GF4. Great AA performance. Nothing special. DX9 wasn't released yet. XP was about to ship.
    Two years later, the elegance of the 9700 Pro bore fruit in Far Cry. ArtX, creators of the GameCube's GPU, delivered the design that evolved into the X800, the X1800s and the Xbox 360's GPU.

    But the most pivotal time in computer history I can imagine in MY lifetime - 1995.
    - Win95 was launched catapulting Microsoft from millions into billions.
    - IE and Netscape Navigator.. the web had begun.
    - Playstation! No more blocky scrollers. 3D! Gaming opened to those beyond 10yr olds.
    - Voodoo and Quake were being polished off to launch in 1996.
    - and Pentium Pro was launched!

    Wait, "whats that?" you ask. Surely it can't be as big a deal as Win95!?
    It was power hungry. Huge die made on 500/350nm with novel offdie cache. Same 200Mhz max clock as Pentium.
    And Win95/apps sometimes ran slower than on Pentium.

    Pentium Pro
    The first x86 server processor. Intel's first out-of-order x86 architecture. A RISC core running CISC, with a pipelined FPU - the starting idea for SSE. Intel's 4th 32-bit processor, but the first actually optimized for 32-bit performance. And it was the seed that evolved into famous Intel cash cows like the Pentium II, Pentium III, Pentium M and, 11 years later, Core 2.

    IMHO, Fermi is NOT the great leap G80 was. Unlike the 9700 Pro and Pentium Pro, Fermi is already MUCH faster from day 1 and will only get faster. And DX11 games show amazing performance. All the unified cache, C++, GPGPU etc. was a bold leap forward by nVidia. They could have done what ATI did with the 5870, just doubled up G200, and saved themselves all the headaches.

    In the near future, DX11 games and GPGPU applications will reveal and demonstrate more of Fermi's design advantages. And of course it will be the basis for future designs. I wouldn't be surprised if, 5 years from now, Unreal 5 or Doom 6 uses OpenCL/C++ and requires the Fermi architecture, just like Vista/Win7 require DX9 (9700 Pro class) for Aero.

    Although it gets an A++ for architecture design, I still won't buy the product (GTX 480). I just don't have the $$$ and haven't even finished playing through my pile of DX10 games. But by the end of summer the kinks and availability should be fixed, and who knows, maybe there will even be "200W low power" (as ironic as that sounds) versions. Maybe I'll make the Win7 plunge and pick one up. The 5870 is better for now, but Fermi is definitely more future-proof.
    Last edited by ***Deimos***; 03-28-2010 at 09:20 AM.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  18. #2693
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Kuwait
    Posts
    616
    This thread should now be called the "bash Nvidia thread"; there is nothing useful here any more. Fermi is not news any more, and everything about the GTX 470 & GTX 480 should be discussed in the Nvidia forum.
    I hope the mods close this thread.

  19. #2694
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by ***Deimos*** View Post
    The 5870 is better for now, but Fermi is definitely more future-proof.
    Agreed, the Fermi architecture is very good, but Nvidia's released cards, the GTX 480/470, not so much.

    Nvidia needs a shrink like GTX 280 -> GTX 285, but also needs the full 512 SPs. A GTX 485 with 512 SPs would be much better than anything ATi has with a single GPU. But ATi is supposed to have a refresh soon and will also have a next-gen part coming. Nvidia took 6-7 months for the GTX 280 to GTX 285 conversion; I hope it does not take that long this time.

    Fermi as an architecture is really strong, but Nvidia did not execute it well. I hope Nvidia goes with GF
    Coming Soon

  20. #2695
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    I want to put an end to the "dead horse beating".

    R600 vs Fermi
    both: hot, new DX, late
    diff: Fermi is faster, not broken, amazing DX11 features/performance for future games, good MSRP (markup another story).

    fx vs Fermi
    both: hot, new DX
    diff: Fermi is faster, not broken, has no questionable "optimizations" (yet) and doesn't sound like a turbine

    GeForce vs Fermi
    both: hot, new DX, foundation for new technology lineup.

    Basically,
    ALL new DX cards are always hotter. That's the trend. Get over it.
    Even RV870 was hotter than RV770. It's not somehow a new issue only Fermi has.

    Somehow, in rosy, glamorous, faded memories, many people forget the 9700 Pro was ridiculed for requiring an extra power connector, for ATI's classic Win2000 driver issues, and for performance at best on par with the GF4. Of course, a year later, with the issues fixed and DX9 launched, everybody was buying one.

    Quote Originally Posted by ajaidev View Post
    Agreed, the Fermi architecture is very good, but Nvidia's released cards, the GTX 480/470, not so much.

    Nvidia needs a shrink like GTX 280 -> GTX 285, but also needs the full 512 SPs. A GTX 485 with 512 SPs would be much better than anything ATi has with a single GPU. But ATi is supposed to have a refresh soon and will also have a next-gen part coming. Nvidia took 6-7 months for the GTX 280 to GTX 285 conversion; I hope it does not take that long this time.

    Fermi as an architecture is really strong, but Nvidia did not execute it well. I hope Nvidia goes with GF
    A die shrink will come too late. 512 SPs are not important; it's like a 5% difference.

    There is 1, and only 1, critical "to-do".
    DEVELOPERS.

    If Fermi can get +20% in BattleForge, an "AMD game", imagine the performance if it were TWIMTBP!
    Developer support is more critical than ever. All the consoles are DX9. There's little motivation to do "hard work" to make DX11 games.
    It doesn't matter if it's 480 SPs or 512 SPs if nVidia isn't there to hand-hold developers and show them 1-2-3 how to use it. If it never gets used, there's no benefit to having it.
    Last edited by ***Deimos***; 03-28-2010 at 09:49 AM.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  21. #2696
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by ***Deimos*** View Post


    A die shrink will come too late. 512 SPs are not important; it's like a 5% difference.

    There is 1, and only 1, critical "to-do".
    DEVELOPERS.

    If Fermi can get +20% in BattleForge, an "AMD game", imagine the performance if it were TWIMTBP!
    Developer support is more critical than ever. All the consoles are DX9. There's little motivation to do "hard work" to make DX11 games.
    It doesn't matter if it's 480 SPs or 512 SPs if nVidia isn't there to hand-hold developers and show them 1-2-3 how to use it. If it never gets used, there's no benefit to having it.
    More like 6.25% (32 of the 512 SPs are disabled), and BattleForge was not bad even on the GTX 285. I would imagine TWIMTBP games would be really good: "tessellation only with extreme settings".

    Developer support is needed, yes, but DX11 is said to be simpler to work with than DX9, and hopefully new consoles will come soon.

    EDIT: The fact that Nvidia's best card has only 480 of its 512 SPs enabled makes it a cut-down card, aka a broken card.
    Coming Soon

  22. #2697
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    449
    Quote Originally Posted by Macadamia View Post
    IMO Fermi's scalability potential is upwards, not downwards.

    We'll see soon, but considering the current ROPs are pretty inefficient (as you can see here Fermi has less fillrate than GTX285), don't expect the GTS 450 to handle 2560, and even in 1920 you might see bad omens.
    Are the Fermi numbers in those tests the result of bad drivers, or are they architecture-related?
    --lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
    -- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
    -- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
    - GTX 480 ( 875/1750/928)
    - HAF 932 - Antec TPQ 1200 -- Crucial C300 128Gb boot --
    Primary Monitor - Samsung T260

  23. #2698
    Xtreme Member
    Join Date
    Jul 2005
    Posts
    429
    Enough of all the childish picture posts. Fermi is still news and is still waiting for its hard launch on April 12th, so the thread is still useful.

    Check out these overclocking results - 1165 core clock:

    http://www.legitreviews.com/news/7697/
    PC1 EVGA Nvidia 790i Ultra | E8400 Retail @ 4.05GHz 1.35v | 8GB Mushkin Ascent @ DDR3-1680 | 2xBFG GTX 280 SLI @ stock | 30" Dell 3008WFP @ 2560x1600 XHD | XFI Fatality | 3x256GB Corsair(Samsung) SSD Raid0 & 1TB Samsung Backup | LG DVD / CDRW | NEC DVD DL 16x | CoolerMaster Stacker 830 8x120mm high 110 CFM per fan| 1000w Corsair SLI certified | Scythe Infinity (dremel mod to avoid caps on board)

    PC2 (Wife) ASUS P5WDH Bios 1101 | E6400 Retail @ 3.2 | 4GB Corsair 6400C4 | 1xBFG GTX 260 @ stock | 24" Dell 2405FPW @ 1920x1200 XHD | XFI Xtrememusic | 2x150G Raptor Raid0 & 1TB WD Backup | Pioneer DVD | CoolerMaster Stacker 830 | 850w PCP&C | Stock HSF

  24. #2699
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Lansing, MI / London / Stinkaypore
    Posts
    1,788
    Quote Originally Posted by LiquidReactor View Post
    Are the Fermi numbers in those tests result of bad drivers or are they architecture related?
    I don't think you'd screw up something as fundamental as fillrate with release drivers.

    What drivers do optimize is probably shader code and texture formats, perhaps a bit of how games render here and there. Feature tests, however, should be extremely close to their practical values from day 1.
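    (For context, a back-of-the-envelope sketch of the theoretical number that synthetic fillrate test is chasing: pixel fillrate is just ROPs x ROP clock. The ROP counts and clocks below are the public reference specs; the measured hardware.fr figures are not reproduced here. If the measured result lands well below the theoretical one - as the quote above says GF100's does, even dipping under the GTX 285 - the limit sits upstream of the ROPs rather than in the driver.)

    Code:
    # Theoretical pixel fillrate = ROPs x ROP (core) clock.
    # Reference specs only; measured synthetic results are not reproduced here.
    cards = {
        "GTX 285": {"rops": 32, "core_mhz": 648},
        "GTX 480": {"rops": 48, "core_mhz": 700},
    }

    for name, c in cards.items():
        gpix = c["rops"] * c["core_mhz"] / 1000.0  # Gpixels/s
        print(f"{name}: {c['rops']} ROPs x {c['core_mhz']} MHz = {gpix:.1f} GP/s theoretical")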
    Quote Originally Posted by radaja View Post
    so are they launching BD soon or a comic book?

  25. #2700
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by ajaidev View Post
    More like 6.25% (32 of the 512 SPs are disabled), and BattleForge was not bad even on the GTX 285. I would imagine TWIMTBP games would be really good: "tessellation only with extreme settings".

    Developer support is needed, yes, but DX11 is said to be simpler to work with than DX9, and hopefully new consoles will come soon.

    EDIT: The fact that Nvidia's best card has only 480 of its 512 SPs enabled makes it a cut-down card, aka a broken card.
    Well, if you consider the 480 broken, then an unknown number of 5870s and 4870s are broken too. Nvidia uses coarse-grained redundancy, so they turn off some SPs if they don't work. ATi adds redundant circuits to its SPs so they can keep working if other circuits don't. Each approach has its pros and cons.
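    (A side note on that redundancy point: a toy binomial yield model makes the trade-off concrete. GF100 has 16 SMs of 32 SPs each, and the GTX 480 ships with 15 enabled. The per-SM defect probability below is purely illustrative, not real yield data - the point is only how much salvage recovers.)

    Code:
    # Toy yield model: fraction of dies sellable if all 16 SMs must work,
    # versus allowing one SM to be fused off (GTX 480-style, 480 of 512 SPs).
    # The per-SM defect probability is made up for illustration only.
    from math import comb

    N_SM = 16
    p_bad = 0.10  # assumed chance that any given SM is defective

    def yield_at_least(k_good, n, p_defect):
        """P(at least k_good of n units are defect-free), independent defects."""
        p_good = 1.0 - p_defect
        return sum(comb(n, i) * p_good**i * p_defect**(n - i)
                   for i in range(k_good, n + 1))

    print(f"all 16 SMs good : {yield_at_least(16, N_SM, p_bad):.1%} of dies")
    print(f">= 15 SMs good  : {yield_at_least(15, N_SM, p_bad):.1%} of dies")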
