
Thread: The GT300/Fermi Thread - Part 2!

  1. #1726
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Piotrsama View Post
    A price cut is a price cut; nobody will complain or care what the reason was.
    Yeah, that's why nobody complained when NVIDIA cut GT200 prices by about $100 when RV770 came out. Especially those guys who bought them for $550; they even laughed about it!
    Are we there yet?

  2. #1727
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by damha View Post
    I want to play games, but you have CUDA. Games, CUDA, games, CUDA, games, CUDA. How are they related again?
    http://www.prnewswire.com/news-relea...-85999182.html

    Working closely with engineers at NVIDIA, developer Avalanche Studios has incorporated support for NVIDIA CUDA technology which helps to deliver a higher level of visual fidelity within the game's environments. CUDA-enhanced features in Just Cause 2 include incredible in-game effects, with rivers, lakes and oceans beautifully rendered with realistic rising swells, flowing waves and ripples, while advanced photographic Bokeh lens techniques add an additional cinematic quality to the look and feel of the game. Just Cause 2 has also been optimised to take full advantage of NVIDIA 3D Vision technology on compatible hardware, creating an incredibly immersive 3D experience.
    Looks like Just Cause 2 will use CUDA directly for water simulation and post-processing.
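    For anyone wondering what "using CUDA directly" even looks like, here's a toy sketch of my own (this is not Avalanche's code, and every name and constant in it is made up): a kernel that fills a water height field with a couple of summed sine waves, which is the rough idea behind GPU-simulated swells and ripples.

```cuda
// Hypothetical sketch of GPU water simulation: displace a height field with
// two summed sine waves. Illustrative only; grid size, wave parameters and
// launch configuration are invented, not taken from any shipping game.
#include <cuda_runtime.h>
#include <math.h>

__global__ void waterHeightKernel(float* height, int width, int depth, float t)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int z = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || z >= depth) return;

    // Two directional waves with different wavelengths and speeds.
    float h = 0.35f * sinf(0.12f * x + 1.6f * t)
            + 0.15f * sinf(0.07f * (x + z) + 2.3f * t);
    height[z * width + x] = h;   // one float per grid vertex
}

int main()
{
    const int W = 256, D = 256;
    float* dHeight = nullptr;
    cudaMalloc(&dHeight, W * D * sizeof(float));

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (D + block.y - 1) / block.y);

    // One simulation step at t = 1.0 s; a game would run this every frame
    // and feed the buffer to the renderer as a displacement map.
    waterHeightKernel<<<grid, block>>>(dHeight, W, D, 1.0f);
    cudaDeviceSynchronize();

    cudaFree(dHeight);
    return 0;
}
```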

  3. #1728
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Are we there yet?

  4. #1729
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Quote Originally Posted by mapel110 View Post
    Unigine got famous because of NVIDIA. Seems they can't find anything else to make the 480 look good, so they keep benching the same demo until the 480 reaches perfection.

  5. #1730
    Xtreme Enthusiast
    Join Date
    Nov 2008
    Location
    Sweden
    Posts
    621
    @SkItZo
    Sorry about that. It was quite amusing when I got that message in Server 2003, though
    Main Rig: Phenom II X6 1055T 95W @3562 (285x12.5) MHz, Corsair XMS2 DDR2 (2x2GB), Gigabyte HD7970 OC (1000 MHz) 3GB, ASUS M3A78-EM,
    Corsair F60 60 GB SSD + various HDDs, Corsair HX650 (3.3V/20A, 5V/20A, 12V/54A), Antec P180 Mini


    Notebook: HP ProBook 6465b w/ A6-3410MX and 8GB DDR3 1600

  6. #1731
    Xtreme Addict
    Join Date
    Apr 2006
    Posts
    2,462
    Quote Originally Posted by trinibwoy View Post
    Looks like Just Cause 2 will use CUDA directly for water simulation and post processing.
    I never even heard of Just Cause 1

    Proprietary standards just need to die, quickly.

    @the 3DMark screen: That's hilarious. I never knew there were people who used dual cores (except in notebooks).

    Quote Originally Posted by eric66 View Post
    Unigine got famous because of NVIDIA. Seems they can't find anything else to make the 480 look good, so they keep benching the same demo until the 480 reaches perfection.
    When will that be? After a shrink to 32 nm? That's nothing unheard of in the industry (R600).

    God, I should go to sleep. It's five o'clock in the morning in Germany. *yawn*
    Notice any grammar or spelling mistakes? Feel free to correct me! Thanks

  7. #1732
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Anybody here notice that Super Mario's head must be made of adamantium to keep bashing those bricks? At least you'd think he'd use some aspirin instead of all the shrooms *cough* steroids. And what's up with the polka-dot sky-high place? My tingling spidey sense is guessing LSD was involved.

    Can somebody here make a Skynet/Fermi poster? ۩♪♫☺☻♀♂█▓▒▒░

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  8. #1733
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by FischOderAal View Post
    I never even heard of Just Cause 1
    Oh, then it must certainly be an irrelevant POS if it slipped under your radar.

    Proprietary standards just need to die, quickly.
    Yep, if there are open alternatives. Which there aren't. There are millions of CUDA-capable card owners out there who will benefit from these additions and won't care much that it's proprietary.

  9. #1734
    Xtreme Addict
    Join Date
    Jul 2005
    Posts
    1,646
    Quote Originally Posted by ***Deimos*** View Post
    Anybody here notice that Super Mario's head must be made of adamantium to keep bashing those bricks? At least you'd think he'd use some aspirin instead of all the shrooms *cough* steroids. And what's up with the polka-dot sky-high place? My tingling spidey sense is guessing LSD was involved.

    Can somebody here make a Skynet/Fermi poster? ۩♪♫☺☻♀♂█▓▒▒░
    Mario used his fist to break blocks though.

  10. #1735
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by trinibwoy View Post
    Oh, then it must certainly be an irrelevant POS if it slipped under your radar.



    Yep, if there are open alternatives. Which there aren't. There are millions of CUDA-capable card owners out there who will benefit from these additions and won't care much that it's proprietary.
    If something is being advertised for NVIDIA or anything related to them (i.e. CUDA, PhysX, Fermi), it should die. This would normally sound sarcastic, but this is AMDz... I mean XtremeSystems.

    The purpose of this thread was to contain the trolling around the forums and to reduce the number of Fermi threads. But now all the Fermi trolling is concentrated here into the so-called "trolling party", and in addition, anything related to AMD at all seems to get posted in the news section: every single 5970 being released (5970 Black Edition, 5970 Sapphire edition, 5970 Ares edition, 5970 haxxors edition), AMD contests (MSI 58xx, total prize value of 800 dollars across three prizes), someone posting about a currency that just happened to be abbreviated AMD, interviews with AMD people (Catalyst Maker, twice), blogs, drivers, etc.

    It just feels like an AMD forum, which normally isn't bad. But this forum already has an AMD section, and most of this stuff could go in there, or in Tech Talk, or in Xtreme Competition (which is also for contests). I started feeling sorry for NVIDIA a long time ago with all the trolling against them. Sure, they rename a lot, but they are not doing anything terribly underhanded like what Intel did against AMD.

    Any negative rumor against NV seems to get posted instantly, and it's hard not to feel sorry for them, e.g. the Charlie articles. Somehow even positive news, like their recent financial results or them giving away a game, gets trolled.

    NV could cure cancer and it would somehow be trolled.
    Last edited by tajoh111; 03-03-2010 at 11:53 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  11. #1736
    Xtreme Addict
    Join Date
    May 2003
    Location
    Peoples Republic of Kalifornia
    Posts
    1,541
    Anyone notice that nVidia now has "Geforce 300 Series" drivers posted up on their site? (if this has been posted already, please ignore)

    "If the representatives of the people betray their constituents, there is then no resource left but in the exertion of that original right of self-defense which is paramount to all positive forms of government"
    -- Alexander Hamilton

  12. #1737
    Registered User
    Join Date
    Feb 2010
    Location
    NVIDIA HQ
    Posts
    76
    NVIDIA has stated numerous times that they are willing to license PhysX to anyone, including AMD/ATI. I'm not sure how "easily" ATI could implement it though.

    How are games and CUDA related? PhysX is an API with a HAL that supports CUDA. In addition to C for CUDA, you get C++ and OpenCL support. Devs are slow to adopt these technologies, but it is happening. Everything else being equal*, are you going to go with the card that has support for these extras, or with the card that doesn't? I don't see these features as a deal-breaker, but they are certainly nice to have.

    Batman: AA whetted my appetite for PhysX, but the Supersonic Sled demo is what really sold it to me. It's a feature that can add a ton of realism to games, and it frees a developer from having to write a physics engine for their title; less work for devs = more likely to see it put into a game.

    * According to the perf numbers I've seen, things aren't equal. The point of DX11 is a massive improvement in geometric realism through tessellation, and AMD's architecture doesn't hold a candle to NVIDIA's in terms of geometry performance. The numbers I've seen are 2.03x to 6.47x faster than the 5870 in geometry-heavy workloads (that's a fairly abstract measure; there's more than just geometry perf to getting a game running). I'm eager to see how a big DX11 title performs (sorry ATI, DiRT 2 just wasn't cutting it for me). The beauty of tessellation is that devs can build their geometry for hardware performance that doesn't yet exist and smoothly scale it down to what's out now, without a load of extra work (just move the slider). It will be a little while before we see a game built around DX11 tessellation rather than just having it slapped in.
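    To illustrate the "just move the slider" point: the real computation lives in a DX11 hull shader, but the scaling logic itself is trivial. Here's a made-up, CUDA-compilable C++ sketch of the idea; every name and constant is mine, not from any actual game or driver.

```cuda
// Hedged illustration of distance-based tessellation scaled by a user slider.
// In a real DX11 title this runs per patch in the hull shader; here it is just
// plain host code so the scaling behaviour is easy to see.
#include <cstdio>
#include <algorithm>
#include <initializer_list>

// qualitySlider: 0.0 (lowest) .. 1.0 (highest), exposed as a game setting.
// Returns how many times each patch edge is subdivided.
float tessFactorForPatch(float distanceToCamera, float qualitySlider)
{
    const float maxFactor    = 64.0f;   // DX11 per-edge hardware limit
    const float falloffStart = 10.0f;   // metres; full detail closer than this
    const float falloffEnd   = 200.0f;  // metres; minimum detail beyond this

    // Linear falloff with distance, then scaled by the user-facing slider.
    float t = (falloffEnd - distanceToCamera) / (falloffEnd - falloffStart);
    t = std::min(std::max(t, 0.0f), 1.0f);
    return 1.0f + t * qualitySlider * (maxFactor - 1.0f);
}

int main()
{
    // Same asset, three hardware tiers: only the slider changes.
    for (float slider : {0.25f, 0.5f, 1.0f})
        printf("slider %.2f -> factor at 30 m: %.1f\n",
               slider, tessFactorForPatch(30.0f, slider));
    return 0;
}
```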

    Anyone wanna loan me some ATI cards to bench against on the 26th? :P


    Amorphous

    Quote Originally Posted by damha View Post
    Good points. Big assumption here that it will beat it by 20%. In the average case you might be looking at a much smaller difference, if at all.



    If NVIDIA is getting PhysX through to you, it is not getting through to me. Sure, it's something to say you have that others don't, but think about it. ATI could just as easily enable PhysX on their GPUs, but NVIDIA won't allow it. ATI won't do it now because they want to see it go down; maybe if it picks up later they will license it. It will take a few years for something to emerge as the dominant "physics" API. If there is a lesson we can learn from history, it is that the first to buy raffle tickets aren't always the ones to win the raffle.

    CUDA. Ok you have CUDA. I want to play games, but you have CUDA. Games, CUDA, games, CUDA, games, CUDA. How are they related again? Let's just say I hope they have that "15-20%" advantage because they will definitely need it.

    I admit, I am not an "average" computer user. I don't see things the way regular users do, and I cannot recall the last time I struggled with CCC. I love all the features in CCC, and I love the way it's laid out. I also use NVIDIA drivers at work; I don't upgrade them as often, and I use them a bit differently there. At work my primary concern is not gaming but stability, ease of use and multi-display support. I like the features they have for multi-display, but they are not well thought out. NVIDIA drivers give more control over display settings (color/contrast/brightness/gamma), but no control over video playback and no deinterlacing options, at least with the release I have now.

    I don't use dual GPU, so I cannot comment on that, but to be fair I will say there's no clear advantage to NVIDIA or ATI in the driver department, from my perspective and my personal usage.
    NVIDIA Forums Administrator

  13. #1738
    Registered User
    Join Date
    Feb 2010
    Location
    NVIDIA HQ
    Posts
    76
    NVIDIA is the biggest backer of OpenCL, which can certainly be used to run PhysX-like physics processing.

    It takes a LONG time to develop and get out a set of open standards. NVIDIA didn't want to wait to get a good set of physics tools out to devs and into games. With the development of CUDA, NVIDIA acquired Ageia and ported Ageia PhysX into CUDA (in only a few days, I'm told).

    At the same time, NVIDIA chairs Khronos, the group behind, most notably, OpenGL, OpenCL and other open standards. In short, NVIDIA doesn't care whether your physics is PhysX- or OpenCL-based; either is good for them, since both help sell more graphics cards. The 100M+ NVIDIA GPUs that support CUDA also support OpenCL.
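    Just to make "PhysX-like physics processing" concrete, here's a throwaway sketch of the kind of kernel either API ends up running: explicit Euler integration of particles under gravity. This is my own illustration written as CUDA; the equivalent OpenCL kernel body would be nearly identical, only the host-side API differs. All names and numbers are invented.

```cuda
// Illustrative particle physics step on the GPU: gravity, integration, and a
// crude ground bounce. Not PhysX code; just the shape of the workload.
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y += -9.81f * dt;        // gravity
    p[i].pos.x += p[i].vel.x * dt;    // advance position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f) {          // bounce off a ground plane at y = 0
        p[i].pos.y = 0.0f;
        p[i].vel.y = -0.5f * p[i].vel.y;
    }
}

int main()
{
    const int n = 1 << 16;
    Particle* d = nullptr;
    cudaMalloc(&d, n * sizeof(Particle));
    cudaMemset(d, 0, n * sizeof(Particle));   // all particles start at rest

    integrate<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 60.0f);  // one 60 Hz step
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}
```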

    Quote Originally Posted by trinibwoy View Post
    Yep, if there are open alternatives. Which there aren't. There are millions of CUDA-capable card owners out there who will benefit from these additions and won't care much that it's proprietary.
    The current GeForce 300 GPUs you'll see around don't use the Fermi architecture that the GF100-based GTX 470 and GTX 480 will. Mid-range Fermi-based parts are likely to follow a few months after the March 26th launch of the GTX 470 and 480.

    Quote Originally Posted by Andrew LB View Post
    Anyone notice that nVidia now has "Geforce 300 Series" drivers posted up on their site? (if this has been posted already, please ignore)
    NVIDIA Forums Administrator

  14. #1739
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Here are more rumors and talk about the mystical Firmy from Fudzilla:
    http://www.fudzilla.com/content/view/17922/1/
    http://www.fudzilla.com/content/view/17921/1/

    To summarize:
    - GTX 480: around 300 W (HD 5970 level); GTX 470: 220 W.
    - GTX 480 "should be faster than HD 5870 in some cases". GTX 470 (I hope it's not the GTX 480) will be 20-30% faster than GTX 285.
    - Partners still don't have the final design yet; currently they only know the dimensions and the cooling design.

    BTW, "launch" date is still March 26th (3 weeks), I hope.

  15. #1740
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    449
    OK... Fermi is definitely smaller than even the G200-B3 chip (GTX 285, 55 nm, etc.).

    The GTX 285 IHS is 1.75 in flat, or 4.45 cm.



    --lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
    -- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
    -- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
    - GTX 480 ( 875/1750/928)
    - HAF 932 - Antec TPQ 1200 -- Crucial C300 128Gb boot --
    Primary Monitor - Samsung T260

  16. #1741
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by LiquidReactor View Post
    OK... Fermi is definitely smaller than even the G200-B3 chip (GTX 285, 55 nm, etc.).

    The GTX 285 IHS is 1.75 in flat, or 4.45 cm.
    Correction: the IHS is smaller, likely due to the smaller memory bus.

    Unless you open up the IHS or x-ray it, there's no telling how big the die is.

  17. #1742
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Amorphous View Post
    NVIDIA has stated numerous times that they are willing to license PhysX to anyone, including AMD/ATI. I'm not sure how "easily" ATI could implement it though.

    <snip>
    Amorphous, the biggest problem is not how easy it would be for ATI to implement PhysX, but the fact of supporting, and helping to standardize, a competitor's proprietary API, be it PhysX, CUDA, or whatever. I suppose you realize that, don't you? Of course NVIDIA would love AMD/ATI to embrace PhysX/CUDA (the software API), because being compatible only with NVIDIA hardware is one of the reasons its use is not widely supported by mainstream software developers (who have to weigh compatibility against the features they invest resources into). But I think you understand very well that if AMD are anything other than simply dumb, they have to do everything in their power to prevent those proprietary APIs from becoming widespread, and supporting them by making their hardware compatible is not the best way to do that.

    NVIDIA will always have the option to do what they want with their proprietary APIs and to take competitive advantage from them. Look at the case of EAX and Creative Labs. AMD won't support something like that unless they have absolutely no other choice.

    And with the current situation of hardware-vendor-specific APIs, I think CUDA/PhysX don't have a real place in the mainstream market (and for PhysX that means anywhere). Of course, the non-mainstream HPC market (scientific research, statistical analysis, data mining, some engineering applications, and other kinds of data crunching and complex-systems simulation) is a completely different thing, and on that ground I think CUDA (and, generalizing, NVIDIA) is becoming THE big player for the moment (let's see how much of an advantage they can build before Intel is able to compete in that market, and to what degree they manage it). Hardware compatibility matters for a mainstream product from the point of view of the cost versus results of implementing a given feature, and being vendor-specific doesn't help there, all the more so when you consider that there are broader solutions (OpenCL, DirectCompute) which in the short/medium term will be used by equivalent middleware (Havok and Bullet, for example, in the case of real-time physics libraries).

    That said, I'm not really convinced about the usefulness of GPGPU computing for videogames on PCs. For example, you mentioned B:AA. In that game, the physics effects that use CUDA can be done perfectly well on a current quad-core CPU (maybe on one with fewer cores, too) with a multithreaded physics library. Most of them can be faked, without much of a visible difference, to run on even less hardware. To take real advantage of the computing power of a GPU you would have to aim for effects orders of magnitude more complex, and that would be too hard a hit on the GPU computation budget reserved for graphics. I've been taking a look at the rendering pipeline in CryEngine 3, for example, and there's simply no place where you could fit any non-graphics workload on the GPU. And that's after discarding a lot of visual effects that simply don't fit either, and taking the rougher but more performant approaches for some others. On the other hand, you hardly have anything intensive left for the CPU apart from physics and AI, and for those it is enough (except, I insist, if you aim at much more complex computations that would make a huge dent in the GPU budget left for graphics).

    Maybe the situation is different on video consoles, though, or maybe it will change in the near (or far) future. Maybe I'm simply wrong; this is nothing more than a very light, superficial analysis of mine. Anyway, if GPGPU computing is going to become widespread in videogames, it will probably happen when there's a software solution available that is not dependent on a given hardware vendor (except maybe for titles exclusive to a console built on that vendor's hardware, of course).

  18. #1743
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  19. #1744
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    449
    Quote Originally Posted by Teemax View Post
    Correction: the IHS is smaller, likely due to the smaller memory bus.

    Unless you open up the IHS or x-ray it, there's no telling how big the die is.
    see

    http://www.xtremesystems.org/forums/...postcount=1620

    Basically, the die size has to be smaller than the 55 nm G200-B3's.
    --lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
    -- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
    -- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
    - GTX 480 ( 875/1750/928)
    - HAF 932 - Antec TPQ 1200 -- Crucial C300 128Gb boot --
    Primary Monitor - Samsung T260

  20. #1745
    illidan
    Join Date
    Mar 2005
    Posts
    1,485
    naked 470 and 480




  21. #1746
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by LiquidReactor View Post
    see

    http://www.xtremesystems.org/forums/...postcount=1620

    Basically, the die size has to be smaller than the 55 nm G200-B3's.
    If the die is larger, then there will already be more contact with the IHS, so there's no need for a larger IHS. Besides, the size of the IHS is governed by the package size, not the die size.

  22. #1747
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by LiquidReactor View Post
    see

    http://www.xtremesystems.org/forums/...postcount=1620

    Basically, the die size has to be smaller than the 55 nm G200-B3's.
    Thank you for quoting yourself without any explanation

    As trinibwoy said, the IHS size reflects the size of the GPU package and does not necessarily reflect the die size.

    GT200 needs a big package because of its 512-bit memory bus. GF100 can likely get away with a smaller package given its 25% narrower (384-bit) bus.

  23. #1748
    Xtreme Member
    Join Date
    Oct 2004
    Posts
    171
    heise.de (in German) has new information about the GTX 470:

    shader@1255 MHz
    RAM@1600 MHz

    Vantage-X 7511 (compare GTX285/HD5850/HD5870: 6002/6430/8730)
    Vantage-P 17156 (compare HD5850/HD5870: 14300/17303)

    Unigine benchmark
    4xAA [fps]: 29/22/27 (GTX470/HD5850/HD5870)
    8xAA [fps]: 20/19/23 (GTX470/HD5850/HD5870)

    http://www.heise.de/newsticker/meldu...lt-946411.html

  24. #1749
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Just to clear things up about CUDA + Just Cause 2.

    OpenCL and DX Compute Shaders are accelerated THROUGH the CUDA architecture standards. As such, the effects in Just Cause 2 will likely be accelerated on AMD hardware through their Stream architecture as well, if they use OpenCL or Compute Shaders.

    Everyone has to remember that CUDA is an all-encompassing term for the architecture that accelerates all types of APIs on NVIDIA GPUs (OpenCL, PhysX, folding, Compute Shaders, ray tracing, Adobe Photoshop GPU acceleration, Flash acceleration, etc.). It was proprietary, but it is now acting as a kind of umbrella term for everything GPU-accelerated, much like Stream on AMD products.

  25. #1750
    Banned
    Join Date
    Jul 2009
    Posts
    510
    What's with the pictures with white-outs and black-outs? That makes no sense whatsoever. Remember the leaked HD5 series pics? Were any of them whited out? Nope. So why the secrecy? Maybe because this isn't what it will actually be? I just don't get it.
