
Thread: The official GT300/Fermi Thread

  1. #526
    Xtreme Member
    Join Date
    Nov 2005
    Location
    Cape Town - South Africa
    Posts
    261
    Bro, you do understand that PhysX can and does run on the CPU...! It didn't require an Nvidia GPU... until Nvidia bought Ageia and decided to use idle GPU resources to speed up PhysX, and then decided to market THAT ability as revolutionary.
    Okay, Intel is working on Larrabee which, as I understand it, will be more of a GPGPU than just a GPU. Now why would a leading CPU maker get into the GPGPU market if they already have a working 6-core, 12-thread processor which will be available at retail in a couple of months? Please keep in mind, I'm only asking to learn more.

    So, how often do you have idle GPU time when playing a modern game...? Show me a PhysX simulation that has 3500 objects swirling around a tornado on a single Nvidia card...
    It won't happen... not even on two Nvidia GPUs... not even on THREE Nvidia GPUs. Go ahead, find a PhysX demo with any real number of physical objects being moved around, especially ones that move as fluently as those in the Velocity engine demo.
    Not in a game environment, but it looks pretty impressive, given that the GPU is processing both the graphics and the physics.

    http://www.youtube.com/watch?v=r17UOMZJbGs

    http://www.youtube.com/watch?v=RuZQp...eature=related

    Speaking of CPU physics: what does the CPU test in Vantage actually test? The second CPU test. Now, if the CPU is so much better than the GPU, why does the CPU score jump considerably when you enable the PhysX driver while running an Nvidia card? Yes, a Gulftown does pretty well with its 6 cores and 12 threads, but why does it need such a high clock speed to do what a GTX 295 can do at a 650 MHz core clock?
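    To make that concrete, here is a minimal CUDA-style sketch (my own illustration, not actual PhysX or Velocity engine code) of the kind of per-object update such a demo performs: one thread per object, so 3500 swirling objects become 3500 independent threads, which is exactly the shape of work a GPU chews through while a CPU has to grind over it on a handful of cores.

    // Hypothetical sketch: one CUDA thread per object, applying a simple
    // "tornado" swirl force each timestep.
    #include <cuda_runtime.h>

    struct Particle { float3 pos; float3 vel; };

    __global__ void swirlStep(Particle* p, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Tangential acceleration around the Y axis, plus a small updraft.
        float3 r = p[i].pos;
        float dist = sqrtf(r.x * r.x + r.z * r.z) + 1e-3f;
        float3 acc = make_float3(-r.z / dist, 0.5f, r.x / dist);

        p[i].vel.x += acc.x * dt;  p[i].vel.y += acc.y * dt;  p[i].vel.z += acc.z * dt;
        p[i].pos.x += p[i].vel.x * dt;  p[i].pos.y += p[i].vel.y * dt;  p[i].pos.z += p[i].vel.z * dt;
    }

    // Host side: 3500 objects fit in 14 blocks of 256 threads.
    // swirlStep<<<(3500 + 255) / 256, 256>>>(d_particles, 3500, 0.016f);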
    Last edited by VoodooProphetII; 10-06-2009 at 12:58 AM.

  2. #527
    Xtreme Addict
    Join Date
    Oct 2008
    Location
    The Curragh.
    Posts
    1,294
    If those really are the prices, I think I'll be waiting until the next gen before I upgrade.
    Especially since DX11 is only now being adopted.

  3. #528
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by annihilat0r View Post
    I think GT300 will be around $500 and 5870x2 around $600
    ATI themselves have said the X2 = $499, so basically even if GT300 is faster it'll be beaten on price/perf. Then also,

    in terms of:

    5870X2 > GTX 380
    5850X2 ~ GTX 380
    5890 ------------- no competition; I suspect it'll go in at the same price point the 5870 is at now and literally be the best price/perf card, so I'm holding out for it this time
    5870 ~ GTX 360
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  4. #529
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    Quote Originally Posted by annihilat0r View Post
    I think GT300 will be around $500 and 5870x2 around $600

  5. #530
    Xtreme Member
    Join Date
    Dec 2007
    Location
    Karachi, Pakistan
    Posts
    389
    $500 for the HD5870X2 and $450 for the HD5850X2. So when they release the HD5890, will they adjust the price of the HD5870 to $350 and price the HD5890 at $400?

  6. #531
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by ubuntu83 View Post
    $500 for the HD5870X2 and $450 for the HD5850X2. So when they release the HD5890, will they adjust the price of the HD5870 to $350 and price the HD5890 at $400?
    You forget the price is $379. The 5890 will go in at the same $379, and I'd expect the 5870 to drop to around $320; expect a similar price cut for the 5850 at that time also.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  7. #532
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Wow, if 5870x2 is $500 then GT300 will have to be priced in the $400-500 range, and I don't think that'll be easy for Nvidia.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  8. #533
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    I'm still expecting two 5870s to outperform the X2, so ATI doesn't need as strong a lead with the 5870X2 as they otherwise would if Nvidia could manage a dual card out of the gate (which all indications suggest is beyond unlikely, if not impossible, and will remain so for the foreseeable future, e.g. until a die shrink). I personally expect the high-end card to be $499 USD MSRP at launch. If it can merely trail the 5870X2 in most games that have decent multi-GPU scaling, then it will get my attention, as it is sure to face-roll it in games without good profiles / scaling engines.
    Last edited by Chickenfeed; 10-06-2009 at 04:32 AM.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  9. #534
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by Chickenfeed View Post
    I'm still expecting two 5870s to outperform the X2, so ATI doesn't need as strong a lead with the 5870X2 as they otherwise would if Nvidia could manage a dual card out of the gate (which all indications suggest is beyond unlikely, if not impossible, and will remain so for the foreseeable future, e.g. until a die shrink). I personally expect the high-end card to be $499 USD MSRP at launch. If it can merely trail the 5870X2 in most games that have decent multi-GPU scaling, then it will get my attention, as it is sure to face-roll it in games without good profiles / scaling engines.
    It will beat the X2, but the X2 card will be cheaper than two separate cards by quite a margin.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  10. #535
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by VoodooProphetII View Post
    Speaking of CPU physics: what does the CPU test in Vantage actually test? The second CPU test. Now, if the CPU is so much better than the GPU, why does the CPU score jump considerably when you enable the PhysX driver while running an Nvidia card? Yes, a Gulftown does pretty well with its 6 cores and 12 threads, but why does it need such a high clock speed to do what a GTX 295 can do at a 650 MHz core clock?
    Because GPUs, by their design, are VERY FAST at dealing with simple operations, and they are very parallel. If the code can be run easily in parallel, it will be very fast. Usually that code is very simple, though. One GPU "shader core" is simpler than an FPU core in a CPU.

    Problems arise when the GPU has to start dealing with memory mapping, stack management, hardware interrupts, ISA compatibility from 16 to 64 bits, all the legacy baggage, huge caches, and very agile branch predictors and prefetchers. Add all of that to a GPU and voilà, you've got a CPU.

    ...so why aren't we using GPUs for general-purpose central processing if they're so awesome?

    And since people like to compare GPU FLOPS and CPU FLOPS... someone write a Queens benchmark for the GPU and then we'll talk. It's a very branchy benchmark, and GPUs basically don't like that kind of code.
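    For reference, this is roughly what such a Queens benchmark looks like (my own sketch, not an existing benchmark): a recursive backtracking search where every step takes a data-dependent branch. On a GPU, the 32 threads of a warp would each want to follow different branches and end up serialising, which is why this kind of code maps so poorly onto shader cores.

    // Hypothetical host-side C++ (as it would sit in a .cu file): bitmask
    // N-Queens backtracking. The branch inside the loop depends on the board
    // state, so parallel lanes diverge constantly.
    #include <cstdio>

    static int solveQueens(int n, int row, unsigned cols, unsigned d1, unsigned d2)
    {
        if (row == n) return 1;                 // a queen placed on every row
        int count = 0;
        for (int c = 0; c < n; ++c) {
            unsigned bit = 1u << c;
            // Data-dependent branch: whether we recurse depends on the search so far.
            if ((cols & bit) || (d1 & (1u << (row + c))) || (d2 & (1u << (row - c + n - 1))))
                continue;
            count += solveQueens(n, row + 1, cols | bit,
                                 d1 | (1u << (row + c)), d2 | (1u << (row - c + n - 1)));
        }
        return count;
    }

    int main()
    {
        printf("8-queens solutions: %d\n", solveQueens(8, 0, 0, 0, 0));  // prints 92
        return 0;
    }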
    Last edited by Calmatory; 10-06-2009 at 06:21 AM.

  11. #536
    Xtreme Member
    Join Date
    Nov 2005
    Location
    Cape Town - South Africa
    Posts
    261
    Quote Originally Posted by Calmatory View Post
    Because GPUs, by their design, are VERY FAST at dealing with simple operations, and they are very parallel. If the code can be run easily in parallel, it will be very fast. Usually that code is very simple, though. One GPU "shader core" is simpler than an FPU core in a CPU.

    Problems arise when the GPU has to start dealing with memory mapping, stack management, hardware interrupts, ISA compatibility from 16 to 64 bits, all the legacy baggage, huge caches, and very agile branch predictors and prefetchers. Add all of that to a GPU and voilà, you've got a CPU.

    ...so why aren't we using GPUs for general-purpose central processing if they're so awesome?

    And since people like to compare GPU FLOPS and CPU FLOPS... someone write a Queens benchmark for the GPU and then we'll talk.
    Thanks Calmatory. I just asked out of interest. Thanks again.

  12. #537
    Registered User
    Join Date
    Aug 2009
    Posts
    1
    Quote Originally Posted by VoodooProphetII View Post
    Okay, Intel is working on Larrabee which, as I understand it, will be more of a GPGPU than just a GPU. Now why would a leading CPU maker get into the GPGPU market if they already have a working 6-core, 12-thread processor which will be available at retail in a couple of months? Please keep in mind, I'm only asking to learn more.
    Because at certain tasks - e.g. certain mathematical operations - a GPU will blow a CPU into the weeds. On a CPU, those operations are expressed in software, combining generic instructions to achieve the desired effect.

    On a GPU, those operations are implemented directly in silicon. In very basic terms, you simply hand over the data you want to process and set the GPU off.

    It's the same reason 3D rendering on a CPU is so slow - it is very complex and has to pull together LOADS of instructions to make it happen. On a GPU, the Direct3D and OpenGL operations are implemented in the silicon.

    So there is still a market for CPUs. Most of a game's logic will run on a CPU, as will the OS and applications such as MS Word. GPUs cannot do things like that.

    As much as they are being put to more general-purpose tasks, it is only because the 3D mathematics they handle happens to suit certain other workloads that we are seeing this hoo-hah over GPGPU.
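    A minimal sketch of that contrast (my own example, not tied to any particular product): the same multiply-add over a large array, written once as an ordinary CPU loop and once as a CUDA kernel where the data is simply handed over and every element gets its own thread.

    #include <cuda_runtime.h>

    // CPU version: one generic core walks the array element by element.
    void saxpy_cpu(int n, float a, const float* x, float* y)
    {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // GPU version: "give it the data and set it off" - thousands of threads,
    // each responsible for a single element.
    __global__ void saxpy_gpu(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Launch sketch (device pointers d_x, d_y assumed already allocated and filled):
    // saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);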

  13. #538
    Xtreme Addict
    Join Date
    Mar 2007
    Location
    United Kingdom
    Posts
    1,597
    It would be interesting to see when (or if) ATI release the 2GB version of the HD5870, as IMHO that is the only reason people who are not convinced the GT300 is coming soon are not purchasing (or pre-ordering) HD5870s. I for one see 1GB as pointless now; I am currently on a 1GB GTX 280 and already hit VRAM-limited situations. The GT300 with 2+ GB of GDDR5 sounds like a breath of fresh air. I just hope it lives up to expectations; if not, the new Radeon cards do sound fantastic.
    There is no denying, though, that on paper and from what we have heard about the computing aspects, the GT300 is sounding like another "Voodoo2 or GeForce 8 GTX" moment.
    John
    Stop looking at the walls, look out the window

  14. #539
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Article on the handicapped versions of Fermi: they will have decreased DP performance and GDDR3.

    http://www.xbitlabs.com/news/video/d...ics_Cards.html

  15. #540
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Any idea when they (the crippled Fermi parts) will be out? Because without them Nvidia can't compete with ATI in the mid-performance range while staying profitable. I can't imagine a GTX 260 or 275 will do much good against the new Junipers without a serious price cut.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  16. #541
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by Chumbucket843 View Post
    Article on the handicapped versions of Fermi: they will have decreased DP performance and GDDR3.

    http://www.xbitlabs.com/news/video/d...ics_Cards.html
    This is interesting. I wonder how it will clock.

  17. #542
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by annihilat0r View Post
    Wow, if 5870x2 is $500 then GT300 will have to be priced in the $400-500 range, and I don't think that'll be easy for Nvidia.
    Yep, that's what we've been saying...

  18. #543
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by annihilat0r View Post
    Wow, if 5870x2 is $500 then GT300 will have to be priced in the $400-500 range, and I don't think that'll be easy for Nvidia.
    How are high margins bad? They will obviously have a crippled version that will compete with the 5870, competitively priced of course. If you want the fastest graphics on the planet, you must realise there are diminishing returns on how much money you spend relative to the speed you get.

  19. #544
    Xtreme Member
    Join Date
    May 2009
    Posts
    334
    Sounds interesting; I really think Nvidia is on to something here.

    http://www.fudzilla.com/content/view/15832/34/
    Project TJ07 WeeMaan edition in progress...

    i7 920 @ 4,4ghz, P6T Deluxe, Corsair Dominator 6Gb, HD5870, OCZ Vertex 60gb + Samsung F1 1tb

    Heatkiller copper, EK 5870, Thermochill 120.4, DDC 3.2

  20. #545
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    All these features we're seeing with CUDA - I think they really need to start making some IGPs that do everything. It would be a great market (use my cheap, weak northbridge GPU to run physics, virus scans, video decoding/encoding, video playback, folding, and other random things that CPUs don't do so well).

  21. #546
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by Manicdan View Post
    All these features we're seeing with CUDA - I think they really need to start making some IGPs that do everything. It would be a great market (use my cheap, weak northbridge GPU to run physics, virus scans, video decoding/encoding, video playback, folding, and other random things that CPUs don't do so well).
    You mean like, say, Ion?
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  22. #547
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Miltown, Wisconsin
    Posts
    353
    Since we're seeing CPU-limited scenarios on the 5870, who thinks the GT300 may be seriously CPU-bottlenecked? It doesn't matter how fast the GPU is if the CPU can't keep up; this could kill any performance gains from the architecture. It's almost to the point that CPUs need a dedicated co-processor.

  23. #548
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Cybercat View Post
    You mean like, say, Ion?
    I was thinking something like 5x stronger though, and not Nvidia BS like disabling PhysX when an ATI card is present. Let their products become a standard, not an option.

  24. #549
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    The more I think about it, the more it makes sense that Hemlock is actually a 5850X2.

    The 5890 is not close to confirmed at this point, and yields haven't been the best at 40nm. The launch price of the 4870X2 was $550, and the price of a 4870 1GB was $280-300, so doubling the single-card price gets you somewhere near that $550 point. Look at the price of the 5850: it's $259, and the price of Hemlock is under $500. If the launch price of Hemlock is somewhere around $479-499, wouldn't it make more sense to use 5850s, going by past pricing?

    It makes no sense for the 5870 to be $379 while the doubled version of it is only a little over a hundred bucks more, since it is a more expensive chip to make than the 4870 (larger die, bigger PCB, worse yields), yet its dual version would cost relatively less.

    Additionally, a dual 5850 would be roughly 40 percent faster than a single 5870 and would fall in line with the rest of AMD's lineup quite nicely. Also, the gains from running 5870s in CF over 5850s in CF are marginal most of the time, maxing out at about 20 percent in rare instances. AMD could make a 5850X2 and launch it as Hemlock.
    Last edited by tajoh111; 10-06-2009 at 09:37 PM.

  25. #550
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Posts
    800
    Quote Originally Posted by tajoh111 View Post
    The more I think about it, the more it makes sense that Hemlock is actually a 5850X2.

    The 5890 is not close to confirmed at this point, and yields haven't been the best at 40nm. The launch price of the 4870X2 was $550, and the price of a 4870 1GB was $280-300, so doubling the single-card price gets you somewhere near that $550 point. Look at the price of the 5850: it's $259, and the price of Hemlock is under $500. If the launch price of Hemlock is somewhere around $479-499, wouldn't it make more sense to use 5850s, going by past pricing?

    It makes no sense for the 5870 to be $379 while the doubled version of it is only a little over a hundred bucks more, since it is a more expensive chip to make than the 4870 (larger die, bigger PCB, worse yields), yet its dual version would cost relatively less.

    Additionally, a dual 5850 would be roughly 40 percent faster than a single 5870 and would fall in line with the rest of AMD's lineup quite nicely. Also, the gains from running 5870s in CF over 5850s in CF are marginal most of the time, maxing out at about 20 percent in rare instances. AMD could make a 5850X2 and launch it as Hemlock.
    5870x2 exists and you know that.

    http://www.xtremesystems.org/forums/...d.php?t=235302

    I think the 5850X2 is there to counter Nvidia when GT300 launches, just in case the 5870X2 is too expensive.

    EDIT:
    Quote Originally Posted by Chumbucket843 View Post
    Article on the handicapped versions of Fermi: they will have decreased DP performance and GDDR3.

    http://www.xbitlabs.com/news/video/d...ics_Cards.html
    Are they trying to let AMD win the consumer GPGPU market?
    Last edited by blindbox; 10-06-2009 at 10:52 PM. Reason: Add a few more lines
