
Thread: The official GT300/Fermi Thread

  1. #151
    Xtreme Enthusiast
    Join Date
    Mar 2009
    Location
    Toronto ON
    Posts
    566
    Quote Originally Posted by DilTech View Post
    If hardware isn't required, then you know as well as I do there's a good chance the developers won't code the games with ATi's tesselator in mind, right?

    We'll know soon enough.
    There is no "ATI tessellator". It's DX11 tessellation; ATI had tessellation before, but had to change it to match DX11.

    Unless there is going to be some instruction in the TWIMTBP games like "remove tessellation when an ATI card is detected", there should not be any problem.

    Very strange that some people would actually approve of such a restriction, if it could even work, given that AMD is the first to supply developers with DX11 hardware and assistance.
    Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL Push-Pull Liquid Cooling
    ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
    Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
    Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
    Corsair HX1000W PSU - Pioneer Blu-ray Burner 6X BD-R
    Westinghouse LVM-37w3, 37inch 1080p - Windows 7 64-bit Pro
    Sennheiser RS 180 - Cooler Master Cosmos S Case

  2. #152
    Xtreme Addict
    Join Date
    Nov 2005
    Posts
    1,084
    Nvidia Fermi - Arriving in Q1 2010

    Following Nvidia CEO Jen-Hsun Huang's keynote speech, details about Nvidia's next-gen architecture, Fermi, are finally available, putting to rest months of speculation.

    We reported most of the key specifications previously, but now most of our gaps have been filled.

    The first thing worth pointing out is that Nvidia sees clear potential in High Performance Computing and GPU stream computing - perhaps even more than in gaming - and believes there is multi-billion dollar potential in the HPC industry, which is currently dominated by much more expensive and less powerful CPUs. As a result, Fermi is the closest a GPU has ever come to resembling a CPU, complete with greater programmability, a multi-level cache hierarchy and significantly improved double precision performance. As such, today's event and whitepaper concentrate more on stream computing, with little mention of gaming.

    That said, GF100 is still a GPU - and a monster at that. Packing 3 billion transistors at 40nm, GF100 sports 512 shader cores (or CUDA cores) spread over 16 shader clusters (Streaming Multiprocessors, as Nvidia calls them). Each of these SMs contains 64KB of L1 cache, with a unified 768KB L2 cache serving all 512 cores. There are 48 ROPs, and a 384-bit memory interface mated to GDDR5 RAM. On the gaming side of things, DirectX 11 is of course supported, though tessellation appears to be software-driven through the CUDA cores. Clock targets are expected to be around 650 / 1700 / 4800 (core/shader/memory). It remains to be seen how close to these targets Nvidia can get.
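
    As a rough sanity check on those numbers, here is a back-of-the-envelope calculation assuming the rumored 650/1700/4800 targets and counting a fused multiply-add as two FLOPs per clock; none of this is confirmed by Nvidia:

    # Speculative theoretical peaks for GF100, based on the rumored figures above.
    cuda_cores = 512
    shader_clock_ghz = 1.7        # rumored "hot clock" for the CUDA cores
    mem_rate_gtps = 4.8           # rumored effective GDDR5 data rate, GT/s
    bus_width_bits = 384

    # One FMA per core per clock counts as 2 floating-point operations.
    sp_gflops = cuda_cores * 2 * shader_clock_ghz      # ops/clock * GHz -> ~1741 GFLOPS
    # 384-bit bus at 4.8 GT/s, divided by 8 bits per byte.
    bandwidth_gbs = bus_width_bits * mem_rate_gtps / 8  # ~230.4 GB/s

    print(f"Peak single precision: ~{sp_gflops:.0f} GFLOPS")
    print(f"Peak memory bandwidth: ~{bandwidth_gbs:.1f} GB/s")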

    Of course, at 3 billion transistors, GF100 will be massive and hot. Assuming a transistor density similar to Cypress on the same process (and RV770 had a higher density than GT200), we are approaching 500 mm². In terms of DirectX/OpenGL gaming applications, we expect GF100 to end up comfortably faster than the HD 5870, something Nvidia confirms (though they refuse to show benchmarks at this point). However, it is unknown how GF100 will perform compared to Hemlock.
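
    The density math behind that figure looks roughly like this; a sketch that assumes Cypress's widely reported ~2.15 billion transistors in ~334 mm², so the real GF100 die would likely be larger still given Nvidia's historically lower packing density:

    # Estimating GF100 die area by scaling from Cypress density on the same 40nm process.
    cypress_transistors = 2.15e9     # reported Cypress transistor count
    cypress_area_mm2 = 334           # reported Cypress die size
    gf100_transistors = 3.0e9        # Nvidia's stated count for GF100

    density = cypress_transistors / cypress_area_mm2   # ~6.4M transistors per mm^2
    gf100_area_mm2 = gf100_transistors / density       # ~466 mm^2 at Cypress density

    print(f"Estimated GF100 die size: ~{gf100_area_mm2:.0f} mm^2")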

    Products based on the Fermi architecture will only be available in retail stores in Q1 2010 - which is a rather long time away. This lengthy delay and yields/costs could be two major problems for Nvidia. While there is no doubt Fermi/GF100 is shaping up to be a strong architecture/GPU, it will be costlier to produce than Cypress. We have already heard horror stories about 40nm yields, which, if true, is something Nvidia will surely fix before the product hits retail. However, that does take time, and Nvidia's next-gen is thus 3-6 months away. By then, AMD will have an entire range of next-gen products, most of them matured, and will perhaps be well on the way to die shrinks, especially for Cypress, which might end up being half a year old by that point. There is no information about pricing either, although we can expect the monster that is GF100 to end up quite expensive. More economical versions of Fermi are unknown at this point too, which might mean the mainstream Juniper goes unchallenged for many months.

    If you are in the market for a GPU today - we don't see any point in holding out for Nvidia's GF100. However, if you are satisfied with your current GPU or looking forward to much improved stream computing - Fermi/GF100 might just be what you are after.

    In the meantime, we can expect price cuts and entry-level 40nm products from Nvidia. With all that has transpired today (being HD 5850 release day as well), there's one conclusion - ATI Radeon HD 5850 does seem like the GPU to get. If you are on a tighter budget, Juniper might have something for you soon.
    http://vr-zone.com/articles/nvidia-f....html?doc=7786

    Oops, sorry. Already posted above :p
    Quote Originally Posted by Shintai View Post
    And AMD is only a CPU manufactor due to stolen technology and making clones.

  3. #153
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Not sure if this has been posted already: AnandTech: Nvidia's Fermi...

    Looks like Nvidia is trying to gain a foothold in the CPU market, somewhat conceding on the gaming end of the business.

  4. #154
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Texas
    Posts
    1,663
    I hope to goodness that we can use one 5870 for gaming and one GT300 for GPGPU on a Lucid Hydra-enabled motherboard under Windows 7 next year. GT300 looks like it will be insane for GPGPU and OK for gaming. I hope Nvidia drops that driver BS they implemented when an ATI card is detected and lets us use our systems freely.
    Core i7 2600K@4.6Ghz| 16GB G.Skill@2133Mhz 9-11-10-28-38 1.65v| ASUS P8Z77-V PRO | Corsair 750i PSU | ASUS GTX 980 OC | Xonar DSX | Samsung 840 Pro 128GB |A bunch of HDDs and terabytes | Oculus Rift w/ touch | ASUS 24" 144Hz G-sync monitor

    Quote Originally Posted by phelan1777 View Post
    Hail fellow warrior albeit a surat Mercenary. I Hail to you from the Clans, Ghost Bear that is (Yes freebirth we still do and shall always view mercenaries with great disdain!) I have long been an honorable warrior of the mighty Warden Clan Ghost Bear the honorable Bekker surname. I salute your tenacity to show your freebirth sibkin their ignorance!

  5. #155
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by Mechromancer View Post
    I hope to goodness that we can use 1 5870 for gaming and one GT300 for GPGPU on a Lucid Hydra enabled motherboard under Windows 7 next year. GT300 looks like it will be insane for GPGPU and OK for gaming.

    ....I hope Nvidia drops that driver BS they implemented when an ATI card is detected and lets us use our systems freely.
    How will Nvidia's software detect it?

  6. #156
    Xtreme Enthusiast
    Join Date
    Oct 2006
    Location
    Quebec, Canada
    Posts
    589
    Quote Originally Posted by Xoulz View Post
    How will nvidia's software detect..?
    Easy


    If ATICatalyst = 1 Then
        Disable PhysX and all features

    It's not wizardry for them to detect a piece of software and disable features if it's still present. The problem they could face is if someone switches from ATI to Nvidia and doesn't uninstall CCC correctly... then all his nice features will be disabled on the Nvidia card, which kills Nvidia's plan.
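
    Conceptually it really is that simple. A minimal sketch of the kind of check involved, in Python; the registry path is an assumption for illustration only, not what Nvidia's driver actually looks for:

    # Hypothetical detection of an installed ATI Catalyst stack via the Windows registry.
    import winreg

    def catalyst_present() -> bool:
        try:
            # Assumed key path; a Catalyst install typically leaves keys like this behind.
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ATI Technologies")
            winreg.CloseKey(key)
            return True
        except OSError:
            return False

    # What the post describes: simply refuse to expose PhysX when the check succeeds,
    # which also misfires if CCC was never uninstalled properly.
    physx_enabled = not catalyst_present()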
    i7 2600K @ 4.6GHz/Maximus IV Extreme
    2x 4GB Corsair Vengeance 1866
    HD5870 1GB PCS+/OCZ Vertex 120GB +
    WD Caviar Black 1TB
    Corsair HX850/HAF 932/Acer GD235HZ
    Auzentech X-Fi Forte/Sennheiser PC-350 + Corsair SP2500

  7. #157
    Xtreme Addict
    Join Date
    Aug 2005
    Location
    Germany
    Posts
    2,247
    Quote Originally Posted by Mad1723 View Post
    Easy


    If
    ATICatalyst=1
    Then
    Disable Physx and all features

    It's not wizardry for them to detect a software and disable features if it's still present. The problem they could face is if someone switches from ATI to Nvidia and didn't uninstall CCC correctly.... then all his nice features will be disabled on the Nvidia card, which kills Nvidia's plan
    I think he means using an Nvidia card and an ATI card together on a Lucid motherboard.
    1. Asus P5Q-E / Intel Core 2 Quad Q9550 @~3612 MHz (8,5x425) / 2x2GB OCZ Platinum XTC (PC2-8000U, CL5) / EVGA GeForce GTX 570 / Crucial M4 128GB, WD Caviar Blue 640GB, WD Caviar SE16 320GB, WD Caviar SE 160GB / be quiet! Dark Power Pro P7 550W / Thermaltake Tsunami VA3000BWA / LG L227WT / Teufel Concept E Magnum 5.1 // SysProfile


    2. Asus A8N-SLI / AMD Athlon 64 4000+ @~2640 MHz (12x220) / 1024 MB Corsair CMX TwinX 3200C2, 2.5-3-3-6 1T / Club3D GeForce 7800GT @463/1120 MHz / Crucial M4 64GB, Hitachi Deskstar 40GB / be quiet! Blackline P5 470W

  8. #158
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    I know one thing: before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing the pros and cons of each high-end card. ATI's launch of the 5870 excited me, but I hope GT300 will excite me even more. 3-6 months is a while though.
    Last edited by Tim; 10-01-2009 at 05:34 AM.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  9. #159
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Quote Originally Posted by Tim View Post
    I know one thing, before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing pros and cons of each high end card. ATI's launch of the 5870 excited me, but I hope GT300 will exite me even more. 3-6 months is a while though.
    +1

    I don't understand why some people are so happy that Fermi won't be available for several months. All it means is that you won't be able to buy your AMD cards at a lower price.

    Whatever suits you guys.

  10. #160
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    From the Anandtech article:
    Double precision floating point (FP64) performance is improved tremendously. Peak 64-bit FP execution rate is now 1/2 of 32-bit FP, it used to be 1/8 (AMD's is 1/5). Wow.
    Does this mean LRB is going to have a tough time keeping up in DP floating point? (If I remember correctly, that's where LRB was much stronger than traditional GPUs.)
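
    Put into rough numbers, a sketch assuming the rumored 512 cores at ~1.7 GHz for GF100, and the HD 5870's published 2.72 TFLOPS single-precision peak:

    # Peak double-precision throughput under the quoted FP64:FP32 ratios (GFLOPS).
    gf100_sp = 512 * 2 * 1.7       # ~1741 GFLOPS SP, assuming the rumored clocks
    hd5870_sp = 2720               # published HD 5870 SP peak

    print(f"GF100 DP at 1/2 rate:   ~{gf100_sp / 2:.0f} GFLOPS")   # ~870
    print(f"Old 1/8 rate would be:  ~{gf100_sp / 8:.0f} GFLOPS")   # ~218
    print(f"HD 5870 DP at 1/5 rate: ~{hd5870_sp / 5:.0f} GFLOPS")  # ~544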

  11. #161
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    So, now I'm starting to form an idea about these Fermi chips. The new architecture brings some architectural changes for GPGPU over GT200, but graphics-rendering-wise it seems there's not much apart from DX11 support and more processing units.

    I like the GPGPU angle, and they're doing some very interesting things. I think that right now NVIDIA is one generation ahead of AMD in GPU computing.

    The other side of the coin is graphics rendering performance. More or less the same architecture, 512 CUDA cores (let's start calling them by their new name), 48 ROPs, ~225GB/s memory bandwidth... it seems the GTX285->GTX380 improvement will be roughly proportional to HD4890->HD5870, or, what is more or less the same, GTX280->GTX380 ~= HD4870->HD5870. Maybe slightly better for the GeForce parts.

    If so, the situation is going to be more or less the same as last generation, with the aggravating factor for NVIDIA that being 3-4 months late compared to the Radeon parts will have them competing against a product at a more advanced stage of its life cycle (AMD will already have made its initial income on the product, so they will be able to price their cards more aggressively). So NVIDIA will probably find an even more hostile and aggressive pricing environment than last time (even though last time was infernal for them).

    This obvious focus on the GPGPU side of things is going to pay off in the long term, in my opinion, but it's going to give them a good amount of headaches in the short term, I think.

    Quote Originally Posted by RaZz! View Post
    i think he means when using nvidia and ati graphics cards with a lucid motherboard.
    What's the difference? Even if Lucid Hydra operates above the graphics driver level, intercepting API calls and balancing them among the installed cards, those cards still need their drivers to work. Video cards are not going to work by some magic path even with Hydra.
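
    To make the point concrete, here is a deliberately simplified sketch of what a Hydra-style dispatcher does; the class and method names are invented for illustration and are not Lucid's actual API:

    # Conceptual model: intercepted API calls are balanced across GPUs, but each
    # GPU's own vendor driver still does the real work underneath.
    from dataclasses import dataclass

    @dataclass
    class GpuDevice:
        name: str
        driver: str                 # e.g. "Catalyst" or "ForceWare" - still required
        queued_work: int = 0

        def submit(self, cost: int) -> None:
            # The vendor driver ultimately translates the call for its own hardware.
            self.queued_work += cost

    @dataclass
    class HydraDispatcher:
        devices: list

        def intercept(self, draw_call_cost: int) -> str:
            # Route this batch of work to the least-loaded card.
            target = min(self.devices, key=lambda d: d.queued_work)
            target.submit(draw_call_cost)
            return target.name

    dispatcher = HydraDispatcher([GpuDevice("HD 5870", "Catalyst"),
                                  GpuDevice("GT300", "ForceWare")])
    dispatcher.intercept(draw_call_cost=100)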

  12. #162
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    Quote Originally Posted by tajoh111 View Post
    It doesn't take much assuming to see that CPU cost more to develop than GPU and NV spent a whole lot of money in 2008 for a GPU company.

    Similarly you don't know how much they spent on cuda or ion for research and development and yet you put it in your argument.
    I will give you a history of our conversation and you will see who posted things he doesn't even know, but "he's assuming" because "it doesn't take a genius to know":

    I said :
    With a launch in Q1, that leaves a lot of time for AMD to make a move.
    You supposed :
    Knowing AMD, they might just completely forfeit the high end and fight at $250 and below for the next year if this card turns out to be 50-60% faster than the 5870, like they did against Intel lately, or the 88xx generation to an extent. If the GTX 380 is able to somehow beat R800, they might just abandon it altogether, as I doubt it would sell well at all; the 3870X2 bombed even though it was still beating the 8800 Ultra. This had lots to do with NV just being a stronger brand due to marketing.
    Your assumption in this post is that GT300 is G80-like, so history will repeat itself. You started mixing up AMD's strategy and ATI's strategy, and you started praising Nvidia ("stronger brand due to marketing").

    I said :
    AMD launched its card before that; what AMD is doing now is extrapolating GT300 performance and cost.
    Performance? GTX285 SLI is like 30% faster than the 5870 on average. The GTX380 may be more like 50% to 60% faster than the 5870 on average. Maybe even less.
    Cost? 40% more transistors than RV870 and 384 bits instead of 256 bits. $600? More?
    DilTech speaks about a GTX395, but in Nvidia's history multi-GPU cards have launched very late (more than 6 months later on average).
    Basically AMD has 3 months to sell DX11 cards with the help of Windows 7.
    I posted only facts. And my assumptions are basically those in the Anandtech article about Fermi. Like DilTech or... you, I made a guess about GT300 performance and used history to point out that a GTX395 model may come late.
    AMD having 3 months to sell cards is a given fact, no?

    You said :
    AMD R and D budget is tiny compared to Intel and NV(especially Intel), you can over estimate your rivals performance by 1000% and it will do nothing if you don't have the r and d budget to get something going to match that estimate.

    With so many losing quarters in the past(except a couple quarters lately), I can imagine AMD graphic division was working on a shoestring budget, especially when AMD itself is so in the hole. Thankfully the research ATI put into r600 before the AMD and ATI merger paid off to some extent with r7xx and possibly to an extent r8xx as it turned out r6xx turned out to be a very scalable architecture. However research for the next big thing I can imagine being lacking for AMD and if this thing performs 50-60% faster than rv870, then AMD will need to come out with something new and not just a bigger chip with more shaders as returns have started to decrease with more shaders.

    It will take either a big chip from AMD(which seems to be against their design philosophy) or a new architecture. I think a new architecture is not coming any time soon because of budget issues.

    What AMD did with the R8xx is stretch the limits of the design(which NV did with g80->g200) that began with r600, it's all you can do when your company doesn't have the money to design a new architecture.
    Basically you spoke a lot just to say that AMD has no money, so they have no R&D budget, so they can't design a new architecture.

    You added, responding to LordEC911:
    [..]AMD won't be coming out with anything spectacular anytime soon because of the shoestring budget they have been working with because of so many bad quarters.[...]NV has been a much more profitable company overall and has probably been working on something pretty complex for the last 4 years as todays news confirms.
    AMD has no money so they can do nothing, Nvidia has money so etc...

    I posted this :
    OK, I searched like you did and I found real numbers!
    -2006 AMD R&D: $1.205 billion
    -2006 ATI R&D: $458 million
    (with $167 million spent in Q1'06+Q2'06 and $291 million in Q3'06+Q4'06)
    -So 2006 AMD+ATI: $1.663 billion
    -2006 Nvidia R&D: $554 million

    -2007 AMD+ATI R&D: $1.847 billion
    -2007 Nvidia R&D: $692 million

    -2008 AMD+ATI R&D: $1.848 billion
    -2008 Nvidia R&D: $856 million
    Real numbers showing that AMD spends as much on R&D as Nvidia, maybe more. So your main argument that AMD has no R&D money goes in the trash.

    You took a defensive stance and became "Mr Assumption":
    If we look at those numbers, AMD between 2006 and 2007 spent 11% more, and between 2007 and 2008 they didn't increase spending at all. Compare this to NV, who spent 25 percent more between 2006 and 2007, and 23.7% more after that.

    Not to mention AMD likely spent a lot of money getting to 55nm and 40nm first, plus all the money they spent on GDDR5 and GDDR4 research. NV waited for all this to happen so they didn't have to spend as much on research to get there.

    I can imagine, since AMD was running the show for the most part, that a lot more money was spent on the CPU side than the GPU side, especially considering how far behind they were during the Conroe years; looking at simple economics, getting that side back toward profitability was a lot more important than getting the GPU side going.
    You showed off your math skills and used them to try to show that 25% of $700M is bigger than 11% of $1.65B...
    You tried to explain AMD's and Nvidia's spending with your "Assumption-O-Maker".

    I don't deny that I made assumptions, but I used facts to make them.
    You have posted nearly zero facts since the beginning of this discussion!
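
    For anyone following along, the absolute increases behind those percentages (using the figures quoted above, in millions of USD) work out like this:

    # Absolute vs. relative R&D growth, from the numbers quoted in this post.
    amd_ati = {2006: 1663, 2007: 1847, 2008: 1848}
    nvidia  = {2006: 554,  2007: 692,  2008: 856}

    for company, spend in (("AMD+ATI", amd_ati), ("Nvidia", nvidia)):
        for year in (2007, 2008):
            delta = spend[year] - spend[year - 1]
            pct = 100 * delta / spend[year - 1]
            print(f"{company} {year - 1}->{year}: +${delta}M ({pct:.1f}%)")
    # AMD+ATI 2006->2007: +$184M (11.1%)   Nvidia 2006->2007: +$138M (24.9%)
    # AMD+ATI 2007->2008: +$1M   (0.1%)    Nvidia 2007->2008: +$164M (23.7%)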

  13. #163
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by Tim View Post
    I know one thing, before I buy ANY new card, ATI or nVidia, I'm sitting back and waiting till GT300 is launched. Then I will buy, with my head, and a little bit with my heart. I want my next purchase to last as long as my G80, so I will be doing lots of thinking and weighing pros and cons of each high end card. ATI's launch of the 5870 excited me, but I hope GT300 will exite me even more. 3-6 months is a while though.
    Bingo, someone else gets it... Even when the GTX-380 comes out I'm going to have a hard time convincing myself to upgrade, and I possibly won't until we see how well AvP performs on said parts. I still say the 8800GTX was the longest-running video card, period, and you can count the number of games it can't max out on your fingers.

    The fact that even Crytek is going to consoles now is a very bad sign.

    Quote Originally Posted by Farinorco View Post
    So, now I'm starting to make an idea about this Fermi chips. This new architecture brings some architectural changes about GPGPU over GT200, but graphic rendering wise it seems there's not much apart from DX11 support and more processing units.

    I like the GPGPU thing. And they're doing some very interesting things. I think that right now NVIDIA is one generation ahead AMD regarding GPU computing.

    The other side of the coin is graphic rendering performance. More or less same architecture, 512 CP (let's start naming them by their new name ), 48 ROP, ~225GB/s memory bandwidth... it seems that more or less the improvement GTX285->GTX380 will be proportional to HD4890->HD5870, or what is more or less the same, GTX280->GTX380 ~= HD4870->HD5870. Maybe a slightly higher to GeForce parts.

    If it's so, the situation is going to be more or less the same than last generation, with the aggravating for NVIDIA that the 3-4 months late comparing to Radeon parts, are going to make them compete against a product in a more advanced stage of its life cycle (so AMD will have got their initial income with the product, they will can price more aggresively their cards). So probably NVIDIA will find an even more hostile and aggresive pricing environment than last time (even when last time was infernal for them).

    This obvious focus in GPGPU side of things it's going to pay off on the long term, in my opinion, but it's going to give them a good amount of headaches on the short term, I think.



    What's the difference? Even if the Lucid Hydra operates over the graphics driver levels, intercepting API calls and balancing the calls amongst the installed cards, those cards will need their drivers to work. Videocards are not going to work by magic paths even with Hydra.
    It's not that the focus is on GPGPU; it's that the only info they're giving right now is GPGPU info, because they don't want to ruin their partners' business by showing off graphics performance and outright killing the sales of their current video cards.

    Also, about the jump being similar to HD4870->HD5870, there are more than a few differences. This one is an entirely new architecture, and NVidia said they were not happy with their shader efficiency in G80 and GTX-280 (which says something, because on paper they beat the 4870 with 30% of the shaders). If they found a way to make them even more efficient than they were with the GTX-280, then a full 2x performance jump should come easily.

    I will say the GPGPU focus may just pay off though, especially with native C++ support. If they can get some 3D companies on board to accelerate rendering with these, I can see companies like Pixar running Tesla farms.

    Finally, the bad news for Intel: this thing should be a beast for ray tracing, as that's essentially still GPGPU work.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  14. #164
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Well, why stop at GT300? lol. Estimates put GT300 3-6 months away. I'll suggest that another 3-6 months after that, there will be something else well worth waiting for. So really, you might as well wait another 6-12 months, unless what you really want to say is "I'd rather own an NV card". If so, just grow some balls and say it.

    Last edited by flippin_waffles; 10-01-2009 at 09:14 AM.

  15. #165
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    Wait a minute, if Nvidia can do something complex like tessellation via CUDA, what the hell is going to stop GT300 from supporting every future API via CUDA?
    Last edited by grimREEFER; 10-01-2009 at 09:19 AM.
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  16. #166
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    flippin, perhaps people don't mind the wait because, even though the 5870 is presently the fastest single GPU, there are no games besides Crysis that the other cards don't already maul, and Crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra-high resolution with maxed-out AA and the realism mods, but right now that's a mere pipe dream.

    In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.
    Last edited by DilTech; 10-01-2009 at 09:20 AM.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  17. #167
    Xtreme Enthusiast
    Join Date
    Oct 2006
    Location
    Quebec, Canada
    Posts
    589
    Quote Originally Posted by grimREEFER View Post
    what the hell is gonna stop gt300 from supporting every future api via cuda?
    The fact that shaders change considerably each time a new API comes out, that there are new requirements for precision and calculation capabilities, new compression algorithms, bigger textures... lots of stuff changes, and it wouldn't be efficient to keep the hardware as is; the performance hit of having programmable shaders doing specialized fixed-function work would probably be pretty high.

    Then again, it could be possible; I'm not a specialist in any of this, I could be wrong.
    i7 2600K @ 4.6GHz/Maximus IV Extreme
    2x 4GB Corsair Vengeance 1866
    HD5870 1GB PCS+/OCZ Vertex 120GB +
    WD Caviar Black 1TB
    Corsair HX850/HAF 932/Acer GD235HZ
    Auzentech X-Fi Forte/Sennheiser PC-350 + Corsair SP2500

  18. #168
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by grimREEFER View Post
    wait a minute, if nvidia can do something complex like tesselation via cuda, what the hell is gonna stop gt300 from supporting every future api via cuda?
    Not via CUDA, but via shaders. I haven't taken a really in-depth look at DX11, but as far as I have seen, I think there are 2 new types of shaders that let you program the tessellation, Domain and Hull Shaders (in addition to the previous Pixel, Vertex and Geometry Shaders). That's not something specific to NVIDIA, it's the way DX11 is defined.

    Anyway, via CUDA (i.e., via GPGPU, be it CUDA, OpenCL, ATI Stream or whatever you want) you can effectively program an entire rendering process from scratch, using whatever approach you want and modeling the rendering pipeline to your convenience.

    The downside? You would be programming everything to run on general compute processors. The reason we still use Direct3D/OpenGL with their mostly fixed pipeline is that the hardware implements part of the tasks with units specific to them (there are units to map 2D textures onto the vertices of the 3D meshes, to apply filters, to project the 3D data onto a 2D bitmap through the frustum of the camera, and so on). All this work is (logically) done much faster by hardware whose specific mission is to do it (TMUs and ROPs, basically).

    But yeah, I think the future of 3D graphics is completely programmable pipelines, and the specific hardware units for particular tasks will disappear. When there's enough power to allow it, of course. That's the general direction with computers: the more power, the more we tend toward flexibility.
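
    For reference, the Direct3D 11 pipeline order being described, listed out so it's clear where the two new tessellation shader stages sit (the tessellator itself stays fixed-function between them):

    # Direct3D 11 rendering pipeline stages, in order.
    D3D11_PIPELINE = [
        ("Input Assembler", "fixed-function"),
        ("Vertex Shader",   "programmable"),
        ("Hull Shader",     "programmable, new in DX11"),
        ("Tessellator",     "fixed-function, new in DX11"),
        ("Domain Shader",   "programmable, new in DX11"),
        ("Geometry Shader", "programmable, introduced in DX10"),
        ("Rasterizer",      "fixed-function"),
        ("Pixel Shader",    "programmable"),
        ("Output Merger",   "fixed-function"),
    ]

    for stage, kind in D3D11_PIPELINE:
        print(f"{stage:16} - {kind}")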

  19. #169
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Quote Originally Posted by DilTech View Post
    flippin, perhaps people don't mind the wait thanks to the fact that even though the 5870 is presently the fastest single gpu there's no games besides crysis the other cards don't maul and crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra high resolution with maxed out AA and the realism mods, but right now that's a mere pipe dream and isn't possible.

    In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.
    That's a strange argument, DilTech. What you seem to be suggesting is that there are currently no games the 5870 can't handle besides Crysis, so why bother upgrading until GT300, at which point it'll be worth it because you think GT300 will be able to maul Crysis like a redheaded stepchild.

    Interestingly, what NV has shown in its recent presentation raises doubts as to whether they even have working silicon, so how would anyone outside NV's inner circle have any idea how it will perform? Add to that, there is much speculation that GT300 is designed more for GPGPU than for 3D rendering.

    And besides, if the only thing you are interested in upgrading your graphics for is Crysis, then there is always 2x or 3x 5850, which will likely come in cheaper than GT300.

    And there is also Eyefinity, which IMO is a much more compelling reason to upgrade; if you really want the ultimate immersion in your gaming, there is no alternative here. Have a look at [H]'s video review. Now that is something worth getting excited about, and it requires the horsepower of the 5800 series.

    http://www.hardocp.com/article/2009/...hnology_review

    And then there is DX11 with compute shaders, tessellation, etc., and the fact that all of the top developers are glowing about the possibilities it brings. The next-generation 3D engines are being written specifically for DX11 hardware. Hell, NV is still struggling to get DX10.1 out the door, all while bribing certain developers to disable that feature until it has finally been able to implement it in its own hardware. Funny, that. Three cheers for TWIMTBP!!

    Anyway, I'd say there are many more reasons to upgrade now than there were at the G80 launch.
    Last edited by flippin_waffles; 10-01-2009 at 09:49 AM.

  20. #170
    Xtreme Member
    Join Date
    Apr 2006
    Location
    los angeles
    Posts
    387
    Quote Originally Posted by DilTech View Post
    flippin, perhaps people don't mind the wait thanks to the fact that even though the 5870 is presently the fastest single gpu there's no games besides crysis the other cards don't maul and crysis can't be run at high resolution with AA on the ATi cards without choking anyway. Some of us, myself included, would like to play through it again at ultra high resolution with maxed out AA and the realism mods, but right now that's a mere pipe dream and isn't possible.

    In other words, most people don't see a need TO upgrade. I'm not interested in any upcoming PC games until AvP anyway, and the GTX-380 will definitely be out before that shows up.

    I'm still on my 8800GT and I play all of the games I like just fine... but my other "games", Folding and SETI, would like a GT300 very much.
    Seti@Home Optimized Apps
    Heat
    Quote Originally Posted by aNoN_ View Post
    pretty low score, why not higher? kingpin gets 40k in 3dmark05 and 33k in 06 and 32k in vantage performance...

  21. #171
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    flippin, Eyefinity, while seemingly interesting, doesn't matter to me as I run a 1080p plasma TV. I have no intention of buying 2 more for Eyefinity, and even if I did it wouldn't work, because Eyefinity requires an active DisplayPort for 3 monitors to work. Besides, I refuse to play with monitor borders interfering with my view.

    Also, when did I say there are no games the 5870 can't handle... Most games an old 8800GTX can max out just fine, same with a 4850... That's why I said there's presently not much need to upgrade. DX11 presently only makes a difference in one title, the lame "BattleForge" battle-card monster game (avg review score of 7.3), and by the time the big one (AvP) comes out, which will be February, both brands will have their cards on the table.

    The only games the previous gen can't max out fine, for the most part, are Stalker: Clear Sky, Crysis, and ArmA 2. ArmA 2 at my resolution drops below 20 fps without even fully maxing everything out, Clear Sky averages ~30 fps maxed out with 4xAA at my resolution on a 5870 (i.e. too low), and Crysis still isn't playable at my resolution with AA. Basically, the same things last gen can't do the 5870 can't do either, so what's the reason to buy a 5870? On top of that, there's ATI's problem with DX10 and HDTVs.

    Now, compare that to the G80 launch, where there were plenty of titles the previous gen couldn't max out at high resolution, and the 8800GTX had no problem being 2x+ faster than everything but the 7950GX2, which it was STILL faster than by a good margin. This was when Oblivion was still a big title (and the G80 was at times 3x faster than the previous gen in that title). Now, compare that to the 5870 launch, where there's not much it can do that the last gen couldn't....

    See why I said waiting isn't an issue for most?
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  22. #172
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    My opinion is that any card since DX10 is going to last until we see the next consoles released. Sure, you can go stronger to get more eye candy, but every game will look damn fine at 1680x1050 with a little AF and possibly some AA until a new console arrives. A good-selling game has to work on old hardware, otherwise it wouldn't sell, and the biggest incentive is usually consoles.

    So there is no NEED to upgrade for another 3 years, but we all want to, and given each person's price limits and performance expectations, we will all be buying different things at different times.

  23. #173
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45

    nvidia for the lose

    I used to be quite curious about Nvidia's new products, but now that they have disabled PhysX on systems with an AMD GPU and a dedicated Nvidia PhysX GPU, I will be boycotting them in whatever way I can.

    I think this latest gripe shows exactly where nvidia's priorities are. When they can make money by screwing over their own customers, they will.
    It's quite shameless really, and it's only not a very big deal because at this stage PhysX itself is not a very big deal.

    I hope the PhysX franchise fails in every single way.

  24. #174
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    DilTech, no soup for you. First, I understand that you'd be tickled pink to convince as many people as possible to wait for NV's silicon to finally be ready, whenever that may be (judging from what Charlie has to say, 6 months isn't a guarantee either, and yeah, his track record on NV is an order of magnitude more accurate than anything NV has said). That argument you are using is the oldest in the book, and it's maybe time to update your way of thinking. The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there were for either G80 or GT200. True, GT200 flopped because the ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. The 5800 gets you DX11, Eyefinity, and the best-performing card on the market.
    And as for gaming on a 1080p plasma, how is that a PC again? Where is the immersion in that? You might as well be running a console! lol. Yeah, Eyefinity is where the immersion is at, and that is reason enough to pick up a 5800 series card. It'll probably last a good 3 years without the need to upgrade.

    The only advantage NV has, and its only answer to Eyefinity, is that there is no way to come even close to conveying that immense level of immersion through a video over the internet. Marketing it will be tough, but the real enthusiasts will know how cool this is.
    So while I don't doubt that you have no intention of putting a 5800 series card in your console, I think most will definitely have reason NOT to wait.
    Last edited by flippin_waffles; 10-01-2009 at 10:39 AM.

  25. #175
    Xtreme Addict
    Join Date
    Jul 2009
    Posts
    1,023
    Quote Originally Posted by flippin_waffles View Post
    The fact of the matter is, there are MUCH more compelling reasons to upgrade now than there was for either G80, or GT200. True, GT200 did flop because ATi 4800 hit the sweet spot and was a worthwhile upgrade for a minimal investment. 5800 gets you dx11, Eyefinity, and the best performing card on the market.
    And as for gaming on a 1080p plasma, how is that a PC again? where is the immersion in that? you might as well be running a console! lol Yeah, Eyefinity is where immersion is at, and that is reason enough to pick up a 5800 series card. Probably last a good 3 years without the need to upgrade.
    I don't think Eyefinity is really a major plus for it, since you need a DP monitor for it, and to make it worthwhile with 3 monitors of the same model it's going to cost you £1200+, and then you also have to pay another £310 for a second card for CF to make sure you get good performance.

    Also, gaming on a 1080p plasma is probably better than on an LCD monitor, since you get a bigger screen, better contrast, and you don't really get pixelation.
