
Thread: The official GT300/Fermi Thread

  1. #626
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    752
    As has been stated, it's not exclusively for gaming. It's for bigger projects.

    Deal with it.

    If it is also a very good competitor in gaming... that's just a bonus from Nvidia's perspective.

  2. #627
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,356
    Quote Originally Posted by orangekiwii View Post
    As has been stated, it's not exclusively for gaming. It's for bigger projects.

    Deal with it.

    If it is also a very good competitor in gaming... that's just a bonus from Nvidia's perspective.
    "It's not that our card isn't that great, it's just built for other things!"

  3. #628
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Sly Fox View Post
    "It's not that our card isn't that great, it's just built for other things!"
    If it plays games the way they are meant to be played </irony>, who cares if it was designed with GPGPU in mind from the ground up?

    I mean, if they can really apply GPGPU power to anti-virus solutions, it's a very big market they are entering...
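    Purely as a sketch of the idea (hypothetical, not anything Nvidia has announced): signature scanning is embarrassingly parallel, so each GPU thread could test one starting offset of the scanned buffer against a byte signature. In CUDA, that's roughly:

    Code:
    // Hypothetical CUDA sketch of GPU-accelerated AV signature scanning.
    // Each thread checks whether a fixed byte signature occurs at its own
    // starting offset in the buffer. All names here are made up.
    __global__ void scan_signature(const unsigned char *data, int n,
                                   const unsigned char *sig, int sig_len,
                                   int *hit_offset)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > n - sig_len) return;            // not enough bytes left here
        for (int j = 0; j < sig_len; ++j)
            if (data[i + j] != sig[j]) return;  // mismatch: offset is clean
        *hit_offset = i;  // report a match (any matching thread may write)
    }

    A real AV engine has to match thousands of signatures at once (Aho-Corasick and friends), but the per-offset parallelism is exactly the part a GPU is built for.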
    Are we there yet?

  4. #629
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by Chumbucket843 View Post
    Come on. Everyone knows this will beat the 5870. Nvidia confirmed this.
    Haha.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  5. #630
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Chumbucket843 View Post
    CUDA is a computing architecture. The APIs for CAL and CUDA are both just C with extensions, so they are really both proprietary. The API must have a compiler for both architectures. CUDA is not something you just port to another GPU; there would really be no point, because they are the same thing.
    I'm obviously talking about the CUDA API, not about the CUDA architecture. The CUDA architecture is something internal, as it's the ISA, and access is granted through other intermediate layers, so there's nothing to standardize there. It's their proprietary API that they want to see used; nobody uses "architectures" directly...
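    To illustrate the "C with extensions" point, here is about the most minimal CUDA program there is. Everything in it is plain C except the __global__ qualifier, the built-in thread indices, the <<<blocks, threads>>> launch syntax and the cuda* runtime calls:

    Code:
    // Minimal CUDA example; compile with: nvcc saxpy.cu
    __global__ void saxpy(int n, float a, const float *x, float *y)  // __global__ marks GPU code
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // built-in thread indices
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;
        cudaMalloc((void **)&x, n * sizeof(float));  // device allocations
        cudaMalloc((void **)&y, n * sizeof(float));
        // (filling x and y via cudaMemcpy omitted for brevity)
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch syntax extension
        cudaThreadSynchronize();  // wait for the kernel to finish
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

    Those extensions (and the runtime behind them) are what another vendor would have to support to run the same source, which is the porting question being argued about.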

    Quote Originally Posted by orangekiwii View Post
    As has been stated, its not exclusively for gaming. Its for bigger projects.

    Deal with it.

    If it is also a very good competitor in gaming...thats just a bonus from nvidias perspective.
    That's exactly the problem, IMO (and it extends to the previous gen too).

    NVIDIA is trying to reach a new (emerging) market, what they call the HPC market, with their architecture. The problem is that this new market isn't there yet, so they can't split their R&D and chip manufacturing into two different architectures and/or product lines; they have to make 3D rendering chips (for the current market) that are also good for the HPC market.

    That eats transistors, architecture development effort, and so on. So the resulting chip isn't specialized for the 3D rendering market, and has difficulty competing with products that are (in terms of performance and features per cost).

    I'm starting to think that's the main reason for the weak performance-to-size ratio of the past generation, and that we are going to see a repeat in this one.

  6. #631
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Farinorco View Post
    I'm obviously talking about the CUDA API, not about the CUDA architecture. The CUDA architecture is something internal, as it's the ISA, and access is granted through other intermediate layers, so there's nothing to standardize there. It's their proprietary API that they want to see used; nobody uses "architectures" directly...



    That's exactly the problem, IMO (and it extends to the previous gen too).

    NVIDIA is trying to reach a new (emerging) market, what they call the HPC market, with their architecture. The problem is that this new market isn't there yet, so they can't split their R&D and chip manufacturing into two different architectures and/or product lines; they have to make 3D rendering chips (for the current market) that are also good for the HPC market.

    That eats transistors, architecture development effort, and so on. So the resulting chip isn't specialized for the 3D rendering market, and has difficulty competing with products that are (in terms of performance and features per cost).

    I'm starting to think that's the main reason for the weak performance-to-size ratio of the past generation, and that we are going to see a repeat in this one.
    That could be a way to describe what's going on with nV lately: using a hybrid solution to explore both markets (gaming and GPGPU) with one product.

    If they establish themselves in both markets, I bet the GPGPU market will have its own line of chips and the gaming market will have its own too. Maybe we are far from that, maybe one or two generations away... It depends on many factors, but the most relevant of all, IMO, is Fermi's success.
    Are we there yet?

  7. #632
    Xtreme Member
    Join Date
    Dec 2008
    Location
    Sweden
    Posts
    450
    Quote Originally Posted by Luka_Aveiro View Post
    I mean, if they can really apply GPGPU power to anti-virus solutions, it's a very big market they are entering...
    Yeah, I would really consider Nvidia if my AV was 50% faster. Or not!

    Edit: What I mean is that the market for selling GPUs to run AV faster is really small, not to say insignificant!
    Last edited by marten_larsson; 10-08-2009 at 01:42 PM.

  8. #633
    Xtreme Member
    Join Date
    Apr 2008
    Posts
    239
    You'd have to be using a crappy, resource-hogging AV program to need GPGPU acceleration anyway; I don't see the point in it.

  9. #634
    Registered User
    Join Date
    Sep 2007
    Posts
    15
    Everyone knows that Nvidia's Fermi will beat ATI's Evergreen, but the important part is how much it will cost.
    OS : Mac OS X Snow Leopard 10.6.1 (10B504) [Vanilla Kernel/Chameleon-RC3/DSDT/10.6 Retail/Windows 7 RTM 64-bit]
    Motherboard : EVGA 141-BL-E760-A1 Classified | Processor : Intel® Core i7 950 Nehalem 3.06GHz [OC'4.2GHz] [Liquid Cooling]
    Video Card : 2x EVGA GeForce GTX 285 2GB [SLI] [QE/CI] | Memory : CORSAIR DOMINATOR 12GB [6 x 2GB] DDR3 1600Mhz CL7
    Hard Drive : SAMSUNG Spinpoint F1 [4x 1TB] [4TB] | SAMSUNG Spinpoint F1 [2x 500GB] [1TB] | DVD Drive : Dual Pioneer DVR-216DBK
    WiFi : LINKSYS WMP300N [Wireless-N] | Case/PSU : Silverstone TJ07 MurderMod/CORSAIR 1000HX 1000W | RC : HighPoint-RocketRAID 2300
    Audio : RME HDSPe AIO | MIDI Controller : M-AUDIO Axiom 61 | Dual-Display : Viewsonic VX2260WM

  10. #635
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by Luka_Aveiro View Post
    If it plays games the way they are meant to be played </irony>, who cares if it was designed with GPGPU in mind from the ground up?

    I mean, if they can really apply GPGPU power to anti-virus solutions, it's a very big market they are entering...
    The gaming market is what got them to where they are. I'm glad that they are trying to enter new markets; GPGPU has so many possibilities. But what they can't do is enter those new markets at the expense of the market that really supports them: gaming.

    If they have poor price/performance or power/performance for games, then those of us who use video cards primarily for gaming may pass on it. GPGPU needs more killer apps developed on standards; then it will become more of a decision-making factor.
    Last edited by Solus Corvus; 10-08-2009 at 02:31 PM.

  11. #636
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Solus Corvus View Post
    The gaming market is what got them to where they are. I'm glad that they are trying to enter new markets; GPGPU has so many possibilities. But what they can't do is enter those new markets at the expense of the market that really supports them: gaming.

    If they have poor price/performance or power/performance for games, then those of us who use video cards primarily for gaming may pass on it. GPGPU needs more killer apps developed on standards; then it will become more of a decision-making factor.
    I'd say they can... as long as they price their products according to what they offer the consumer. Think about last generation, for example: once the HD 4000 series launched, GTX 200 series prices were adjusted accordingly.

    Of course, given the higher cost of the GTX 200, that translated into financial losses for them, but that could be seen as an investment in the new market they are aiming at. If they have the funds to absorb this investment...

    Then, if everything goes well and the new market grows as expected, they can split the two product lines, normalizing the situation, and try to make the investment in the new market profitable.

    Of course, if they have gone into this too soon, or have overestimated the profitability of this market, or underestimated the losses they are going to face, they could find themselves in a complicated financial position... maybe they're taking the risk because of Intel's arrival in the market with Larrabee, I don't know...

    I'm not an expert on this matter, but I think I see some logic in all this (which may be completely unrealistic, of course). Good or bad decision, I don't know... that's too much for me.

  12. #637
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    I think it's reasonable to say: if Fermi fails, so does Nvidia; by how much depends on how big a miss it is. If it succeeds, it's business as usual, at least from a gaming and enthusiast perspective.

  13. #638
    Banned
    Join Date
    Feb 2009
    Posts
    165
    Ever since the specs were released, this thread has been nothing but people repeating the same thing over and over for the past week or two.

    IMO it should be locked until more concrete info is leaked/released, i.e. closed for a month or two, because that's how long it will be until we get some actual news on GT300.

  14. #639
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    Even with the specs, we can't really tell how these cards will perform until they are run through a bunch of benchmarks, because they are too different from existing cards.

    Historically, though, newer graphics cards tend to be better than the cards already on the market. It's somewhat rare to see exceptions to that rule.
    Last edited by grimREEFER; 10-08-2009 at 02:44 PM.
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  15. #640
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by cegras View Post
    Haha.
    So true.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  16. #641
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by zalbard View Post
    So true.
    So you think a 5870 is going to be faster than GF100? Nvidia is more reliable than rumors (not by much). The specs show that it will be faster.

  17. #642
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Chumbucket843 View Post
    So you think a 5870 is going to be faster than GF100? Nvidia is more reliable than rumors (not by much). The specs show that it will be faster.
    Show me benchmark results, then we'll talk.
    Till then, Nvidia saying that their new card has mind-blowing performance means jack all.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  18. #643
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by zalbard View Post
    Show me benchmark results, then we'll talk.
    Till then, Nvidia saying that their new card has mind-blowing performance means jack all.
    That's just pure denial. You can get an approximation of performance based on all of the whitepapers and specs that have been released. No one said anything about "mind-blowing performance". It's fairly obvious that it will beat the 5870 even if it's only 60% faster; that's based on the increase in bandwidth.
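    Rough math on that, assuming GF100 pairs its announced 384-bit bus with GDDR5 somewhere near Cypress's 4.8 Gbps effective (the memory clock is a guess, not a released spec): 384/8 × 4.8 ≈ 230 GB/s, versus 153.6 GB/s on the HD 5870 (256/8 × 4.8) and about 159 GB/s on the GTX 285. That's roughly 45-50% more bandwidth than either card, though game performance rarely scales 1:1 with bandwidth.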

  19. #644
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    We can't know for sure, but I'm completely convinced the GTX 380 (or whatever they name it) will be faster than the HD 5870. By how much is another story, as is how much room they will have between the HD 5870 and GTX 380 to place a GTX 360.

    We're talking about a monstrous >500 mm² chip, and even if they are focusing more on the GPU computing side than on the 3D rendering side, Fermi should perform better than the much smaller Cypress. It had better, if they don't want to be in serious trouble, since it's a much more costly chip.
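    Rough wafer math on "much more costly", assuming ~530 mm² for Fermi against the widely reported ~334 mm² for Cypress: a 300 mm wafer is about 70,700 mm², so ignoring edge losses that's roughly 70,700 / 530 ≈ 133 Fermi candidates per wafer versus 70,700 / 334 ≈ 211 for Cypress. That's about 60% more chips per wafer for AMD before defects are even counted, and bigger dies lose disproportionately more to them.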

  20. #645
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    I can't believe people look at how GT200 was abnormally priced at launch and completely s//tstormed by the 4870, and then conclude that this is going to be another flop like that.

    Let me put some IQ (not image quality, INTELLIGENCE QUOTIENT) on the table: Nvidia, just like us, had no idea that the 4800s would be so powerful, and they priced everything super high. After the launches the dust settled, and Nvidia adjusted the prices accordingly. While I think the 4700/4800 series continued to be a better pick than the GT200b's, it wasn't a knockout after the price adjustments had been made. So the "total flop" nature of the initial GT200's was because Nvidia was caught with their pants down by the 4800s.

    Now the 5800's have been released, and Nvidia knows both their price and performance and WILL price the Fermi boards accordingly. In no way will Fermi be a $500 board that performs on par with the (by then) $300ish 5870. Yes, the chip is super big and most likely this isn't going to be a super launch season for Nvidia, but it's not just going to be a super expensive board that barely matches the competition's half-priced board, like the GT200's were initially.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  21. #646
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Why does everyone want Nvidia to fail? All it means is that your AMD card will be more expensive next time.

  22. #647
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by Clairvoyant129 View Post
    Why does everyone want Nvidia to fail? All it means is that your AMD card will be more expensive next time.
    No Pain No Gain.

  23. #648
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176
    Quote Originally Posted by Clairvoyant129 View Post
    Why does everyone want Nvidia to fail? All it means is that your AMD card will be more expensive next time.
    They don't.

    I personally wish Nvidia would stop all the IDIOTIC behaviour they have engaged in over the past two years.

    Renaming the same product multiple times and marketing it as new. Shipping bad Vista drivers and then passing off the blame.
    Marketing stunts like the infamous whoop-ass event. Shipping faultily designed products (the high-lead solder bump issues) and denying it, even after it was exposed that they had been well aware of the problem a year before that point.

    Restrictive PhysX behaviour; PhysX indeed being snake-oil moonshine, a faker that does PHYSICS! ...at the expense of major performance drops.
    We had physics in games for years before PhysX, with none of this strategic performance limitation to make new cards look relevant.

    Taking jibes at Intel. Slowing down DirectX API advancements.

    They should act like a professional company sometime and earn back the respect they lost.

  24. #649
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    I wish Nvidia would stop trying to be Intel or AMD and just concentrate on gaming... Jen-Hsun Huang is delusional in thinking Nvidia has that much clout. They need to stick with their CORE BUSINESS; CUDA doesn't matter to 99.9% of the populace!

    But Nvidia has hung their entire business hat on its acceptance. Dumb!

  25. #650
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Xoulz View Post
    They need to stick with their CORE BUSINESS; CUDA doesn't matter to 99.9% of the populace!
    I'm not even going to bother explaining what is wrong with this. If somebody else wants to have a go, be my guest.

    OCN is full of blind AMD/ATI fanboys who have no idea what they are talking about. XS was more immune for a while, but the cancer has started to rapidly spread here as well.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.
