
Thread: The GT300/Fermi Thread - Part 2!

  1. #1926
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by saaya View Post
    Look at what? Those are two different ways to illustrate a GPU; of course it looks very different.
    True, but the way information flows through GF100 is quite different from GT200.

    Also, at this point in time everything will be "evolutionary" versus "revolutionary" since the unified shader-based architecture will be with us for some time.

  2. #1927
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    Quote Originally Posted by Sam_oslo View Post
    Of course everything between heaven and earth is some kind of evolution or tweak of what came before, but the "Dedicated L1 Load/Store cache" is closer to a revolutionary step this round. It gets Fermi much closer to becoming a supercomputer, because it is the part that supports generic C/C++ programs (very much like an x86 CPU would).
    Isn't that what ATI has had in their GPUs for the last two generations??

  3. #1928
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by ajaidev View Post
    Oh, one more thing: I think both the GTX 480 and GTX 470 use an analog VRM setup, and that is a very curious thing to do. The reason I say curious is that when the 5870 launched with a digital VRM, I asked what advantages it would have over an analog system, and I was told that a digital VRM is good at handling lower voltages, while an analog VRM handles higher voltages with better stability.
    No. Both a digital VRM and a traditional step-down buck converter can output any voltage from 0 to 12 V. For both digital and analog designs, component selection (i.e. caps, inductors, diode/MOSFET rectifiers) determines max current, transient response (i.e. ripple current), etc.

    Caps are tall and get in the way of heatsinks, and AMD's digital VRM is better in that respect. Multi-phase solutions are easier to link up, and the switching can be managed more intelligently, e.g. reducing voltage/clocks/current if the card overheats.

    It's kind of like old-school jumper/BIOS overclocking vs. using setFSB in Windows; the latter is obviously easier and more convenient.
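
    For anyone curious about the claim that a buck converter can produce any output below its input, here is a minimal back-of-the-envelope sketch of the ideal (lossless) relationships. The 12 V input and the component values are assumptions picked for illustration, not figures from the post or from any specific card.

        // buck_sketch.cu - ideal buck converter relationships:
        //   Vout = D * Vin,  inductor ripple dI = (Vin - Vout) * D / (f * L)
        #include <cstdio>

        int main() {
            const double vin  = 12.0;     // input rail [V] (assumed)
            const double vout = 1.0;      // target GPU core voltage [V] (assumed)
            const double fsw  = 300e3;    // switching frequency [Hz] (assumed)
            const double L    = 0.5e-6;   // inductance per phase [H] (assumed)

            const double duty   = vout / vin;                      // ideal duty cycle
            const double ripple = (vin - vout) * duty / (fsw * L); // peak-to-peak ripple current [A]

            printf("duty cycle : %.1f %%\n", duty * 100.0);
            printf("ripple     : %.2f A peak-to-peak per phase\n", ripple);
            return 0;
        }

    The point of the sketch: the output voltage is set by the duty cycle, not by the parts, while the inductor and switching frequency set how much ripple current the caps have to absorb, which is where component selection comes in.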

    Quote Originally Posted by Manicdan View Post
    i think AMD has made enough profit and will be happy to drop 5870 to 300$ the week of nvidias release.
    I've been wrong before, but I don't think so. TSMC 40nm is still far from "mainstream" and quite expensive. The only reason for AMD to drop to $300 would be if Fermi launched at $300, which is vanishingly unlikely.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  4. #1929
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by damha View Post
    Isn't that what ATI has had in their GPUs for the last two generations??
    ATi has huge potential for GPGPU usage too, but they need to provide a "working" API that gives programmers access to the GPU in a common and easy way. It should support generic coding in C/C++ too, and that's where CUDA helps Fermi get ahead of ATi.

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  5. #1930
    Xtreme Member
    Join Date
    Jun 2008
    Location
    British Columbia, Canada
    Posts
    227
    Does anyone think a dual-GPU Fermi is possible at all (two GTX 470 chips at lowered clock speeds, like the 5970 approach) without requiring a nuclear power plant and generating heat equivalent to the surface of the sun?
    Antec 900
    Corsair TX750
    Gigabyte EP45 UD3P
    Q9550 E0 500x8 4.0 GHZ 1.360v
    ECO A.L.C Cooler with Gentle Typhoon PushPull
    Kingston HyperX T1 5-5-5-18 1:1
    XFX Radeon 6950 @ 880/1300 (Shader unlocked)
    WD Caviar Black 2 x 640GB - Short Stroked 120GB RAID0 128KB Stripe - 540GB RAID1

  6. #1931
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Dark-Energy View Post
    Does anyone think a dual-GPU Fermi is possible at all (two GTX 470 chips at lowered clock speeds, like the 5970 approach) without requiring a nuclear power plant and generating heat equivalent to the surface of the sun?
    Definitely.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  7. #1932
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by damha View Post
    Isn't that what ATI has had in their GPUs for the last two generations??
    No. Fermi is the first mass-market GPU architecture to have:

    #1: Parallel geometry setup
    #2: Generalized, coherent read/write caching

    Both are huge deals because of the engineering effort required, and they make a lot of things easier to do. Of course, none of that means squat if all you care about is the fps that comes out the other end.
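
    To make #2 a bit more concrete, here is a minimal, generic CUDA sketch (nothing Fermi-specific, and the sizes and values are arbitrary): a histogram kernel in which every thread does a read-modify-write to a data-dependent global address, which is exactly the scattered access pattern a generalized, coherent read/write cache is meant to serve.

        // histogram.cu - scattered read-modify-write into global memory
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void histogram(const unsigned char* data, int n, unsigned int* bins) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                // Each thread updates a data-dependent address; with a cached,
                // coherent memory hierarchy these atomics can be resolved close
                // to the cores instead of round-tripping to DRAM every time.
                atomicAdd(&bins[data[i]], 1u);
            }
        }

        int main() {
            const int n = 1 << 20;
            unsigned char* d_data;  unsigned int* d_bins;
            cudaMalloc((void**)&d_data, n);
            cudaMalloc((void**)&d_bins, 256 * sizeof(unsigned int));
            cudaMemset(d_data, 7, n);                          // dummy input: every byte is 7
            cudaMemset(d_bins, 0, 256 * sizeof(unsigned int));

            histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);

            unsigned int bins[256];
            cudaMemcpy(bins, d_bins, sizeof(bins), cudaMemcpyDeviceToHost);
            printf("bin[7] = %u (expected %d)\n", bins[7], n);

            cudaFree(d_data);  cudaFree(d_bins);
            return 0;
        }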

  8. #1933
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    Quote Originally Posted by Sam_oslo View Post
    ATi has huge potential for GPGPU usage too, but they need to provide a "working" API that gives programmers access to the GPU in a common and easy way. It should support generic coding in C/C++ too, and that's where CUDA helps Fermi get ahead of ATi.
    My recommendation to ATI would be to adopt/adapt the CUDA api from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.

    Unless ATI is planning an API release secretly. I really hope they aren't, waste of resources.

    @triniboy: I'll check it out. This gives me something to research
    Last edited by damha; 03-05-2010 at 11:44 AM.

  9. #1934
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by damha View Post
    My recommendation to ATI would be to adopt/adapt the CUDA api from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.
    They just need to help some devs with OpenCL, and advertise it some IMO. The potential is there.

  10. #1935
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by damha View Post
    My recommendation to ATI would be to adopt/adapt the CUDA api from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.

    Unless ATI is planning an API release secretly. I really hope they aren't, waste of resources.
    I believe ATi could adopt CUDA easily enough if they wanted to.

    But I agree with you, we need a common API for accessing the GPU (just like AMD and Intel CPUs, which can run a common set of instructions and programs).
    Somebody has to provide a common platform to take this to the next step, where everybody can have a personal supercomputer, but these guys (nVidia and ATi) are too busy fighting each other, and Intel would love to see both of them dead and defeated in this area, because a great GPGPU platform could threaten Intel's dominance in the very expensive supercomputer market.


  11. #1936
    Xtreme Addict
    Join Date
    Nov 2003
    Location
    NYC
    Posts
    1,592
    Quote Originally Posted by ***Deimos*** View Post
    I've been wrong before, but I don't think so. TSMC 40nm is still far from "mainstream" and quite expensive. The only reason for AMD to drop to $300 would be if Fermi launched at $300, which is vanishingly unlikely.
    There are two other reasons to drop the price of the 5870 to $300:

    - If the 480GTX comes in close to the 5870's current price/performance, to deny it any significant market share. If the 480GTX comes in at 5-10% higher performance than the 5870, would the average upgrader still choose it if the 5870 were 25% cheaper (probably not)?

    - If AMD/ATI counters with a 5875 (or whatever a 5870 rev. 2 ends up being called) at the same price point as the current card (to retain the "performance crown"), the "old" 5870 wouldn't sell at all at its current price.

    I'm inclined to say it's very likely AMD/ATI would cut the price; I'd probably bank on it being closer to $50, though (I would be quite happy if it were $100).
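
    As a purely illustrative aside (every number below is made up, not a benchmark result or a real price from any review), the trade-off in the first bullet reads like this in perf-per-dollar terms:

        // price_perf.cu - toy perf-per-dollar comparison with invented numbers
        #include <cstdio>

        int main() {
            // Hypothetical figures only.
            const double perf_5870 = 100.0, price_5870 = 300.0;  // 5870 after a cut
            const double perf_480  = 108.0, price_480  = 400.0;  // "5-10% faster", dearer

            printf("5870 : %.3f perf/$\n", perf_5870 / price_5870);  // ~0.333
            printf("480  : %.3f perf/$\n", perf_480  / price_480);   // ~0.270
            return 0;
        }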

  12. #1937
    Banned
    Join Date
    Jan 2003
    Location
    EU
    Posts
    318
    ATI definitely has price flexibility right now. Don't forget that ATI prices have risen above launch MSRP. Their cards are cheaper to produce, they've had them on the market for months, and they've had better yields from the start (6 months ago).
    Thing is, if they don't feel threatened by Fermi, they probably won't cut :/. MSRP for the 5850 at launch was $259; it stands at $300+ now.

  13. #1938
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Quote Originally Posted by trinibwoy View Post
    No. Fermi is the first mass-market GPU architecture to have:

    #1: Parallel geometry setup
    #2: Generalized, coherent read/write caching

    Both are huge deals because of the engineering effort required, and they make a lot of things easier to do. Of course, none of that means squat if all you care about is the fps that comes out the other end.
    Yep, exactly. These are very big deals in GPU evolution (if not revolution) right now. All GPUs have a HUGE amount of GFLOPS (compared to a CPU), but how do you control/program the beast to do something more useful than just gaming?

    That dedicated L1 cache plays a big role in making it much easier to program/control the beast. It makes a unified read/write cache possible, which helps with program correctness and is a key feature for supporting generic C/C++ programs.
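
    As a rough illustration of what "generic C/C++ programs" means in practice (a sketch only; the struct, the list, and the values are invented, and nothing here is Fermi-specific API), here is plain C-style pointer chasing inside a CUDA kernel, the kind of arbitrary, data-dependent load/store code that a cached read/write path is meant to make practical:

        // pointer_chase.cu - ordinary C-style pointer traversal on the device
        #include <cstdio>
        #include <cuda_runtime.h>

        struct Node {
            int   value;
            Node* next;   // device pointer to the next node
        };

        // Each thread walks the list starting at heads[i] and sums the values.
        __global__ void sum_list(Node** heads, int* sums, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            int total = 0;
            for (Node* p = heads[i]; p != nullptr; p = p->next)  // data-dependent loads
                total += p->value;
            sums[i] = total;                                     // generic global store
        }

        int main() {
            // Build one tiny 3-node list on the host, then copy it to the device.
            const int len = 3;
            Node* d_nodes;
            cudaMalloc((void**)&d_nodes, len * sizeof(Node));

            Node h_nodes[len];
            for (int k = 0; k < len; ++k) {
                h_nodes[k].value = k + 1;                                    // 1, 2, 3
                h_nodes[k].next  = (k + 1 < len) ? d_nodes + k + 1 : nullptr;
            }
            cudaMemcpy(d_nodes, h_nodes, sizeof(h_nodes), cudaMemcpyHostToDevice);

            Node** d_heads;  int* d_sum;
            cudaMalloc((void**)&d_heads, sizeof(Node*));
            cudaMalloc((void**)&d_sum, sizeof(int));
            Node* head = d_nodes;
            cudaMemcpy(d_heads, &head, sizeof(Node*), cudaMemcpyHostToDevice);

            sum_list<<<1, 1>>>(d_heads, d_sum, 1);

            int sum = 0;
            cudaMemcpy(&sum, d_sum, sizeof(int), cudaMemcpyDeviceToHost);
            printf("sum = %d (expected 6)\n", sum);

            cudaFree(d_nodes);  cudaFree(d_heads);  cudaFree(d_sum);
            return 0;
        }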


  14. #1939
    Xtreme Addict
    Join Date
    Jul 2005
    Posts
    1,646
    Quote Originally Posted by damha View Post
    My recommendation to ATI would be to adopt/adapt the CUDA api from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.

    Unless ATI is planning an API release secretly. I really hope they aren't, waste of resources.

    @triniboy: I'll check it out. This gives me something to research
    Horrible idea; CUDA is the next Glide. It will be replaced by something that isn't controlled by a single hardware vendor.

  15. #1940
    Xtreme Enthusiast
    Join Date
    Apr 2006
    Posts
    939
    Quote Originally Posted by Levish View Post
    There are two other reasons to drop the price of the 5870 to $300:

    - If the 480GTX comes in close to the 5870's current price/performance, to deny it any significant market share. If the 480GTX comes in at 5-10% higher performance than the 5870, would the average upgrader still choose it if the 5870 were 25% cheaper (probably not)?

    - If AMD/ATI counters with a 5875 (or whatever a 5870 rev. 2 ends up being called) at the same price point as the current card (to retain the "performance crown"), the "old" 5870 wouldn't sell at all at its current price.

    I'm inclined to say it's very likely AMD/ATI would cut the price; I'd probably bank on it being closer to $50, though (I would be quite happy if it were $100).
    AMD won't cut the price unless nvidia can meet demand; if there are only 8,000 480s, the ticket price won't mean anything. AMD is already releasing a card priced at $1,000, and it will probably sell.

  16. #1941
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
    Fermi is a complete arch redesign, like G80 & R350: the focus is on cache, compute, & GPGPU programmability. Cypress is RV770 with the ALUs & ROPs doubled and the scheduler & setup redesigned to use the new resources effectively.
    Bring... bring the amber lamps.

  17. #1942
    Xtreme Member
    Join Date
    Aug 2004
    Location
    Bel Air, Maryland
    Posts
    143
    Is it just me, or do the roof tiles in the GTX470 shot look odd? I just ran the bench and they looked much sharper and more defined on my 5870.
    985/1250 gave me 29.8fps, by the way.
    Intel i7-2700k@ 4.7ghz (46x102)
    Asus P8Z68 Deluxe GEN3
    G.Skill 2x4gb RipjawX 2133 11-11-11-30
    GTX 680 1220/7000
    Corsair TX750W
    Razer Lachesis w/ Razer Pro|Pad
    1x160gb Seagate HDD, 2x1tb Seagate HDD
    LG 22" 226WTQ & BenQ G2400WD
    Windows 7 Ultimate x64 SP1

  18. #1943
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176
    Quote Originally Posted by Soultaker52 View Post
    Is it just me, or do the roof tiles in the GTX470 shot look odd? I just ran the bench and they looked much sharper and more defined on my 5870.
    985/1250 gave me 29.8fps, by the way.
    Yes, they look nerfed.

  19. #1944
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    Quote Originally Posted by Soultaker52 View Post
    Is it just me, or do the roof tiles in the GTX470 shot look odd? I just ran the bench and they looked much sharper and more defined on my 5870.
    985/1250 gave me 29.8fps, by the way.
    More than likely it's running the "quality" texture setting, not "high quality". That's not all that surprising, as MOST review sites leave that setting unchanged, resulting in worse IQ on nv cards but higher performance.

  20. #1945
    Xtreme Member
    Join Date
    Aug 2004
    Location
    Bel Air, Maryland
    Posts
    143
    Quote Originally Posted by Jowy Atreides View Post
    Yes, they look nerfed.
    Actually, I just went through it with the free-roaming camera to make sure, and the tiles look about the same. I guess in that area they just look like that.

  21. #1946
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by jaredpace View Post
    Fermi is a complete arch redesign, like G80 & R350: the focus is on cache, compute, & GPGPU programmability. Cypress is RV770 with the ALUs & ROPs doubled and the scheduler & setup redesigned to use the new resources effectively.
    I wouldn't call it a complete overhaul, but there are some big changes.

  22. #1947
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    It's not a G71->G80 jump, but it's not a G92->GT200 one either

  23. #1948
    Banned
    Join Date
    Jan 2010
    Posts
    101
    Not sure if this has been posted before. Card is supposedly the GTX470


  24. #1949
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Yeah, it's been posted

  25. #1950
    Xtreme Member
    Join Date
    Mar 2007
    Location
    Pilipinas
    Posts
    445
    It's been posted, but I just checked anandtech's reviews and the 5870 gets 38.5 fps (on average, I presume) in Warhead @ 19x12, 4xAA, Enthusiast... granted, anandtech's setup has a faster i7 (+130MHz?), but I don't think that's enough for a ~25% increase in frames. I was trying to find a review with 8xAA but found none atm.

    edit:
    Of course it depends on which part of the game was run, but I was thinking they used the built-in benchmarking tool.
    Last edited by insurgent; 03-05-2010 at 03:54 PM.
