
Thread: The GT300/Fermi Thread - Part 2!

  1. #1951
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
Regarding the tessellation performance of Fermi, I'd wait to see the comparison in real-world games. IF (and I don't know if that's true, false, or somewhere in between) GF100 relies more on its CUDA processors to handle tessellation calculations while RV870 relies more on dedicated fixed-function hardware, I'd expect the former to have a huge advantage in a nearly synthetic benchmark where most of the load is tessellation (because Fermi could throw more resources at it), but the situation would rebalance severely once complex shaders have to be computed in addition to the tessellation (as would most likely be the case in most real-world games). The Heaven benchmark seems to be pretty heavy on tessellation but much lighter on every other kind of shader. Maybe that's why NV is focusing so much on the Unigine Heaven benchmark. Maybe not. We will see... when we have proper reviews and real-world use cases.

Same as with general performance... I find it hard to believe numbers like the ones Charlie is claiming.

    Quote Originally Posted by damha View Post
My recommendation to ATI would be to adopt/adapt the CUDA API from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.

Unless ATI is secretly planning an API release. I really hope they aren't; that would be a waste of resources.
They can't do that. If they did, CUDA would quickly become the GPGPU standard in the mainstream market, most developers would use it, and that would give NV a huge competitive advantage over AMD/ATI, even if it only meant that NV would always know much sooner than its competitors how CUDA was going to evolve, staying a step ahead (with no chance to catch up) in mainstream computing. And they could always extract much more advantage by exploiting their rights over their proprietary API once it had settled in and become the mainstream standard, similarly to Creative Labs with EAX.

My impression is that AMD is focusing on gaining market share in the mainstream market while NVIDIA tries to open up and take the emergent HPC market, and I think that could be a mistake in the long term... unless they simply don't want to enter that market, which could be a different mistake. In the mainstream... well, I think that's more the terrain of vendor-independent solutions like DirectCompute and OpenCL.

    Quote Originally Posted by RaV[666] View Post
ATI definitely has price flexibility right now. Don't forget that ATI prices have risen above launch MSRP. They are cheaper to produce, they have had them for months, and have had better yields from the start (6 months ago).
Thing is, if they don't feel threatened by Fermi, they probably won't cut prices :/. MSRP for the 5850 at launch was $259; it stands at $300+ now.
Yeah, I have always thought that was going to be the worst problem for Fermi, since we knew the cards would be SO late. Competing against a hardware product (especially a graphics card) whose life cycle is 6 months further along is painful. The price of such a product naturally follows a declining curve over its life, and a 6-month delay on a product like this is too much IMO. Indeed, I think the current prices of the HD5800 series are not natural and are only due to the lack of any competition until now. I'd expect price drops as soon as the GTX400 cards are released (or a little after that) unless Fermi is worse than I think it will be.

  2. #1952
    Xtreme Enthusiast
    Join Date
    Jul 2009
    Location
    Perth Australia
    Posts
    651
If nVidia have not been able to reach the full potential (512 shaders) of the GF100,
how long do you feel it will take them to release a card that has everything working at full tilt and can meet the consumer demand for it?

A month? Two months? This year?

  3. #1953
    Xtreme Member
    Join Date
    Aug 2007
    Posts
    282
I really hate the direction graphics cards have been taking lately...

1. We're gonna need a thermonuclear plant to feed them.
2. Grab a pan and some eggs and fry them on your card!
3. Get a wardrobe for a PC case or that large graphics card won't fit inside!
4. Do the IHVs know that we're in the middle of an economic crisis? I can't pay $800/€600 for a graphics card, omg!

Now look at your mobile phone... over time, technology should consume less power, gain more features for less money, take up less space, and become easier to use...

See this old 1990 image about the absurdity of card dimensions:



    and compare it with this one:


(I expect the Fermi GX2 version to be almost as large as this... red brick)

Have we REALLY evolved?



For me, any high-end card costing more than $200/€120, consuming more than 70W, or longer than 20cm (7.9in) aimed at the desktop is absurd. Simply absurd.

So I don't care if the GTX480 has 512 shaders, etc... What I want is a high-end card with a reasonable configuration! Not these power-hungry, expensive, hot monsters! Of course you can get more FPS and performance by making the cards larger, using more energy, etc... The merit is in doing it without growing the size and the power consumption... so nope, I don't like the GTX470/480, but neither do I like the 5870s/59XXs.
    Last edited by jogshy; 03-05-2010 at 05:18 PM.

  4. #1954
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Posts
    511
    ^ have we evolved????




    no...not really...
    /sarcasm

I think you belong in the midrange card segment.
I'm not really concerned with power or heat... I want performance/quality.

  5. #1955
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  6. #1956
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Pittsburgh, PA
    Posts
    479
    EVGA GTX480 and GTX470 boxes:





    http://www.evga.com/forums/tm.aspx?m=217028
    Regards,
    Chris



    Core i7 920 3931A318 4.4GHz 1.375vcore | EVGA X58 Classified E760 | EVGA GTX470 1280MB | Corsair Dominator GT 7-8-7-20 1688MHz | Heatkiller 3.0 CU and Feser xChanger 480 | Seasonic M12 (Soon to be replaced)

  7. #1957
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by RaV[666] View Post
ATI definitely has price flexibility right now. Don't forget that ATI prices have risen above launch MSRP. They are cheaper to produce, they have had them for months, and have had better yields from the start (6 months ago).
Thing is, if they don't feel threatened by Fermi, they probably won't cut prices :/. MSRP for the 5850 at launch was $259; it stands at $300+ now.
    No.

IMHO the 5870 and 5970 were launched with Fermi competition already factored in.

Regardless of what nVidia does, prices won't drop to HD48xx levels; these chips are bigger/more expensive to make.

    Quote Originally Posted by Iconyu View Post
AMD won't cut the price unless nvidia can meet the demand; if there are only 8,000 480's, the ticket price won't mean anything. AMD are already releasing a card with a price of $1,000, and it will probably sell.
    Exactly. AMD can let nVidia "steal" all the market share they want - not sure they can make a million cards though...

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  8. #1958
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Quote Originally Posted by SKYMTL View Post
    What ATI did is a good thing: they concentrated on making a DX11 compatible GPU by adapting an older architecture instead of taking a ton of time designing a newer one. They knew there was no point in concentrating too hard on DX11 so early into its life cycle and they were able to actually beat NVIDIA to market while offering a perfectly suitable solution.

    So, yes the HD 5000 series takes a massive hit in DX11 but it was never meant to be their end-all DX11 product anyways. That name goes to their upcoming architecture that will be released when there is a good amount of games that support DX11.
Couldn't agree more. Never in history has a launch card for a new API been able to master it in its first generation. I doubt we will see strong DX11 cards until at least the 3rd generation, perhaps a 2nd-gen refresh best case. That said, what they did with Cypress was a smart business move nonetheless. I'm sure Nvidia will push DX11 now that they can call it relevant, but if the past repeats itself, I am not buying it (their marketing). Hell, I had an 8800GTX from launch week for a good 14 months and never felt DX10 was relevant until much later...
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  9. #1959
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by Chickenfeed View Post
Couldn't agree more. Never in history has a launch card for a new API been able to master it in its first generation. I doubt we will see strong DX11 cards until at least the 3rd generation, perhaps a 2nd-gen refresh best case. That said, what they did with Cypress was a smart business move nonetheless. I'm sure Nvidia will push DX11 now that they can call it relevant, but if the past repeats itself, I am not buying it (their marketing). Hell, I had an 8800GTX from launch week for a good 14 months and never felt DX10 was relevant until much later...
Yep, I had a G80 early on, and while it crushed DX9 games, it could not run DX10 as well as the 2nd generation of cards could (G92s were better; then RV770 and GT200 finally saw DX10 become relevant).

The best bet is always to buy the card that you can use now, not the one you worry about a year down the line, because by then it will already be outdated/slow.

  10. #1960
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
    http://bbs.expreview.com/thread-27750-1-1.html

    Farcry2 2560 x 1600 Ultra 8xAA Max

    5870: 17/32/55
    GTX4x0: 30/40/67



    Last edited by jaredpace; 03-05-2010 at 09:38 PM.
    Bring... bring the amber lamps.

  11. #1961
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by jaredpace View Post
    http://bbs.expreview.com/thread-27750-1-1.html

    Farcry2 2560 x 1600 Ultra 8xAA Max

    5870: 17/32/55
    GTX470: 30/40/67
Where does it say GTX 470? It only says GTX 4x0.

And the thread talks about a memory bug, so I wouldn't be surprised if the extra memory of the 4x0 is helping it out at 2560x1600 vs. the 1GB on the 5870 (it will be interesting to see what 2GB does, though).

Anyhow, Far Cry 2 was one of the games that Nvidia PR highlighted earlier, so these are nice numbers, but we already knew FC2 was going to run well on Fermi and on Nvidia hardware in general, so this doesn't tell us a whole lot right now. Can't wait to see some other games benched.

  12. #1962
    Xtreme Member
    Join Date
    Oct 2008
    Posts
    263
Yea, finally a card with good performance at 2560x1600.
What's up?

  13. #1963
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
Yah, zerazax, thanks for pointing that out. I changed it.

These watermarks are the same as on the Heaven benches, and the Heaven numbers are from that first bench chart with the Chinese writing. I guess phk has a 470 and a 480, or maybe just a 470. It's true the 5870 takes a big hit in FC2 at 2560x1600 8xAA; that's probably ATI's toughest challenge with AA.


    Bring... bring the amber lamps.

  14. #1964
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    2560 x 1600 with 8xAA. Nice.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  15. #1965
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by Farinorco View Post
Regarding the tessellation performance of Fermi, I'd wait to see the comparison in real-world games. IF (and I don't know if that's true, false, or somewhere in between) GF100 relies more on its CUDA processors to handle tessellation calculations while RV870 relies more on dedicated fixed-function hardware, I'd expect the former to have a huge advantage in a nearly synthetic benchmark where most of the load is tessellation (because Fermi could throw more resources at it), but the situation would rebalance severely once complex shaders have to be computed in addition to the tessellation (as would most likely be the case in most real-world games). The Heaven benchmark seems to be pretty heavy on tessellation but much lighter on every other kind of shader. Maybe that's why NV is focusing so much on the Unigine Heaven benchmark. Maybe not. We will see... when we have proper reviews and real-world use cases.
Exactly my thoughts. I'm not buying NVIDIA's magical tessellation performance story yet.

I doubt that NVIDIA would invest that much in dedicated tessellation hardware. It's probably more likely that they use the "CUDA cores" to do tessellation in exchange for shader performance.

If that's the case, their DX11 driver will have some delicate load balancing to do. It must decide how many cores should be reserved for tessellation in each game. Not necessarily an elegant solution.
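For what it's worth, here's a toy Python model of that trade-off (every throughput number is invented, purely to illustrate why a tessellation-only synthetic would flatter a shared-core design while a shader-heavy game wouldn't):

Code:
# Toy model: tessellation on shared shader cores vs. a dedicated unit.
# All numbers here are made up; only the shape of the result matters.

def frame_time_shared(tess_work, shade_work, cores):
    # All cores are shared, so tessellation steals shader throughput.
    return (tess_work + shade_work) / cores

def frame_time_dedicated(tess_work, shade_work, cores, tess_rate):
    # A fixed-function tessellator runs in parallel with the shader array.
    return max(tess_work / tess_rate, shade_work / cores)

CORES, TESS_RATE = 512, 100

# Tessellation-heavy synthetic: the flexible shared array looks great.
print(frame_time_shared(1000, 100, CORES))                # ~2.1
print(frame_time_dedicated(1000, 100, CORES, TESS_RATE))  # 10.0

# Shader-heavy "real game": the advantage largely evaporates.
print(frame_time_shared(200, 5000, CORES))                # ~10.2
print(frame_time_dedicated(200, 5000, CORES, TESS_RATE))  # ~9.8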


Until we see performance in real games, I'm not holding my breath.

  16. #1966
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
I think the key for them is that they increased the number of triangles set up per clock... the games that Nvidia PR has benched heavily (Far Cry 2, HAWX, and the Heaven benchmark) are all known to be pretty triangle-intense.
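Back-of-the-envelope (the 4-vs-1 triangles-per-clock figures are from NVIDIA's GF100 preview material and public Cypress specs; the GF100 clock is still a guess at this point):

Code:
# Peak triangle-setup throughput, in triangles per second.
# GF100: 4 tris/clock is the marketing figure (one per raster engine);
# its ~600 MHz core clock is a guess. Cypress: 1 tri/clock at 850 MHz.
def tris_per_sec(tris_per_clock, core_mhz):
    return tris_per_clock * core_mhz * 1e6

print(f"GF100   ~{tris_per_sec(4, 600):.2e}")   # ~2.4e9
print(f"Cypress ~{tris_per_sec(1, 850):.2e}")   # ~8.5e8

If those figures hold, the setup-rate gap alone would explain why triangle-heavy titles are the ones being shown off.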

  17. #1967
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by zerazax View Post
Far Cry 2, HAWX, and the Heaven benchmark... which are all known to be pretty triangle-intense
    Where did you see the numbers for HAWX?
    I must be missing something.

  18. #1968
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by SKYMTL View Post
True, but the way information flows through the GF100 is quite different from the GT200.

    Also, at this point in time everything will be "evolutionary" versus "revolutionary" since the unified shader-based architecture will be with us for some time.
Yeah... and looking back it's hard to think of many past GPU architectures as revolutionary. The only ones I can think of are R300, R600, NV30 and G80... those are what I'd call really different architectures; everything that followed them was more like an Intel-style tock, a shrink with tweaks...

    Quote Originally Posted by trinibwoy View Post
    No Fermi is the first mass market GPU architecture to have:

    #1: Parallel geometry setup
    #2: Generalized, coherent read/write caching

    Both are huge deals because of the engineering effort required and make a lot of things easier to do. Of course it doesn't mean squat if you just care about the fps that comes out the end.
Right, I'll give you that, that is pretty revolutionary... but the fact that this isn't really all that useful for games says a lot, doesn't it? As a GPU, Fermi isn't really revolutionary IMO and is more of a GT300... if you'd use it for GPGPU then calling it GF100 makes sense... but that's just semantics.

    Quote Originally Posted by Dark-Energy View Post
Does anyone think a dual-GPU Fermi is possible at all (GTX470s with lowered clock speeds, like the 5970) without requiring a nuclear power plant and generating heat equivalent to the surface of the sun?
Definitely; even with two full-blown 480 chips it's possible... but it would run at very low clocks and the final result wouldn't be that much faster.

    Quote Originally Posted by jaredpace View Post
Fermi is a complete arch redesign, like G80 & R300: a focus on cache, compute, & GPGPU programmability. Cypress is RV770 with the ALUs & ROPs doubled and the scheduler & setup redesigned to use the new resources effectively.
Mhhh, I wouldn't say complete... it's more than the last step from G92 to GT200... I'd say the RV770 to RV870 step is about the same as the G92 to GT200 step.

    Quote Originally Posted by jaredpace View Post
    http://bbs.expreview.com/thread-27750-1-1.html

    Farcry2 2560 x 1600 Ultra 8xAA Max

    5870: 17/32/55
    GTX4x0: 30/40/67
Yeah, but how representative is Far Cry 2 performance?
Everybody knows by now that GF100 rocks in FC2, but that game is not exactly brand new. Everybody has played it already, it's not a game you'd want to play two or three times (it's really repetitive the first time through already), and it runs perfectly fine on even a GTS250 at pretty high res and max details.
2560x1600 8xAA perf is nice, but who has a display that big?
Why would anybody spend so much on a big display for games? At that res you depend on SLI or Crossfire to get enough perf to play the top graphics games at the display's full res... so cool, you will be able to play FC2 at that res, more or less, but what about other games like Metro 2033, HAWX, Crysis Warhead, Crysis 2...?

I don't see the point in showing off 2560x1600 8xAA performance... it's better than the competition but still not good enough to actually play at that setting...

It's like showing off 8fps vs 1fps at some insane resolution and claiming you are 8x faster... yeah, but what's the point?

    Quote Originally Posted by Levish View Post
- If the 480GTX comes in close to its current price/performance, will that deny any significant marketshare? If the 480GTX comes in at 5-10% higher performance than the 5870, would the average upgrader still choose it if the 5870 was 25% cheaper (probably not)?
You mean, would people choose a $300 card or a $400 card if the $400 card is 5-10% faster? How can you say they probably would??? It's pretty obvious that they would go for the $300 card, isn't it?

    Quote Originally Posted by Levish View Post
- If AMD/ATI counters with a 5875 (or whatever the 5870 rev2 is called) at the same price point as the current card (to retain the "performance crown"), the "old" 5870 wouldn't sell at all at its current price.

I'm inclined to say it's very likely AMD/ATI would cut prices; I'd probably bank on it being closer to $50, though (I would be quite happy if it was $100).
While lots of people bashed Charlie for claiming that there would only be 5-10,000 cards at launch, that's what everybody is reporting now, and even Nvidia's CEO admitted that they won't REALLY launch until the second half of this year... so why should ATI drop their prices and cut their margins if all Nvidia can do is offer 5-10,000 cards over a few months, which is what ATI probably sells per DAY!

  19. #1969
    Xtreme Member
    Join Date
    Aug 2009
    Posts
    244
A 2GB 5870 is 20% faster than the 1GB 5870 in Far Cry 2 at 3x1920x1200 8xAA.

Obviously 1GB of VRAM is a bottleneck at extreme settings.
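The framebuffer arithmetic alone makes that plausible. A rough sketch (assuming uncompressed 32-bit color and 32-bit depth/stencil per sample; real hardware compresses, so treat this as an upper bound):

Code:
# Rough framebuffer footprint at 3x1920x1200 (Eyefinity) with 8xMSAA.
width, height, samples = 3 * 1920, 1200, 8
bytes_per_sample = 4 + 4                        # RGBA8 color + D24S8 depth
msaa_surfaces = width * height * samples * bytes_per_sample
resolve_target = width * height * 4             # resolved back buffer
print(f"~{(msaa_surfaces + resolve_target) / 2**20:.0f} MiB")  # ~448 MiB

Nearly half of a 1GB card is gone before a single texture is loaded.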
    Last edited by mindfury; 03-06-2010 at 12:04 AM.

  20. #1970
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
About dual Fermi, look at dual GT200...
    http://www.xbitlabs.com/articles/vid...0_5.html#sect0
    285 = 160W
    295 = 200W

    480 = 280W (rumored)
495 = 300W (TDP limit)

Going above 300W is painful... you need very expensive cooling beyond that point, and you break compatibility with the PCIe spec, so I think that's unlikely.

So for GT200, Nvidia had to cut a single-GPU card down from 160W to 100W, a reduction of almost 40%, and they achieved that by lowering clocks from 648/1476/1242 to 576/1200/1000 (cuts of roughly 11%/19%/19%) and by trimming the memory hub from 8 channels to 7, a bandwidth reduction of 13%. So on average they cut the specs by around 15% and got a power reduction of almost 40% in return. That's a very good trade-off!

For GF100 they would have to get from 280W down to ~150W per GPU, a cut of almost 50%, proportionally bigger than what GT200 needed. We don't know how GF100 scales: how well does it clock at lower voltages, and how quickly does power consumption drop as the voltage comes down? GT200 did very well there; if we assume the same for GF100, the specs would have to come down somewhat harder than GT200's did, call it around 20%. That's an optimistic guess, and even then a 490 would perform at best about 1.5-1.6x as fast as a single 480. Since SLI doesn't always scale linearly, the average perf boost would probably be around 40% or less.
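Spelled out as a calculation (a sketch only: the 280W figure is a rumor, and assuming power savings scale with spec cuts the way they did on GT200 is the optimistic part):

Code:
# Dual-GPU scaling guess, calibrated on GT200 (GTX 285 -> GTX 295).
# The 280W GF100 figure is a rumor; the linear spec-to-power
# assumption is the optimistic part of this estimate.
gt200_single = 160.0                     # W, GTX 285
gt200_dual_per_gpu = 200.0 / 2           # W, GTX 295 per GPU
gt200_power_cut = 1 - gt200_dual_per_gpu / gt200_single   # ~0.38
gt200_spec_cut = 0.15                    # avg clock + bandwidth cut above

gf100_single, board_limit = 280.0, 300.0 # W (rumored) / PCIe ceiling
gf100_power_cut = 1 - (board_limit / 2) / gf100_single    # ~0.46

# Scale the required spec cut by how much harder the power cut is.
spec_cut = gt200_spec_cut * gf100_power_cut / gt200_power_cut  # ~0.19
print(f"spec cut ~{spec_cut:.0%}, dual card ~{2 * (1 - spec_cut):.2f}x best case")

That lands at roughly a 19% spec cut and ~1.6x a single 480, before SLI inefficiency knocks it down.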

A dual-GF100 card would easily cost $1000 if not more, would be even hotter and more power-hungry than a 480, and would only be ~40% faster... I don't think that really makes sense. From a business point of view, Nvidia will make more money selling single-GPU cards; even if they offered us a good deal on the dual-GPU card, it would be very expensive, too expensive for 99% of us for sure. The only reason to do it would be PR...

But if you've followed the numbers, you'll notice that, assuming the numbers we've heard are true and a 480 is only marginally faster than a 5870, a 490 would barely beat the 5970... it would, but not by much... and it would cost a lot more. And AMD is prepping 850MHz+ clocked 5970s which are 10-20% faster than the current 5970s, so a dual-GF100 card doesn't make much sense at all right now. They have a limited supply of GF100 chips to begin with, they are rumored to have slim margins even at high prices, and they are rumored to have TDP problems... I'm sure there will be a dual-GF100 card, but I don't think we will see it anytime soon... Q3 at the earliest if you ask me.

  21. #1971
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by SKYMTL View Post
HOWEVER, supposedly the GF100 series was built with DX11 in mind, which could translate into great performance in supporting applications. But that is just a rumor at this point and only time will tell if it translates into better performance.
    I remember a LOT of people saying the same thing about R600...
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  22. #1972
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
    Quote Originally Posted by saaya View Post
Yeah, but how representative is Far Cry 2 performance?
Everybody knows by now that GF100 rocks in FC2, but that game is not exactly brand new. Everybody has played it already, it's not a game you'd want to play two or three times (it's really repetitive the first time through already), and it runs perfectly fine on even a GTS250 at pretty high res and max details.
2560x1600 8xAA perf is nice, but who has a display that big?
Why would anybody spend so much on a big display for games? At that res you depend on SLI or Crossfire to get enough perf to play the top graphics games at the display's full res... so cool, you will be able to play FC2 at that res, more or less, but what about other games like Metro 2033, HAWX, Crysis Warhead, Crysis 2...?

I don't see the point in showing off 2560x1600 8xAA performance... it's better than the competition but still not good enough to actually play at that setting...

It's like showing off 8fps vs 1fps at some insane resolution and claiming you are 8x faster... yeah, but what's the point?

Yes, I agree with your last statement. They want to show off benches like this to paint the 5870 in the worst light possible. I don't consider it representative of anything important, but it is one of the only benches out there, which is why I posted it here. Not many folks have a 30" display and want 8xAA along with it. I don't consider it showing off, just relevant data for this thread. Maybe we could ask him to rerun it at 1920 with 4xAA; if so, we would probably get results similar to his Dirt 2 and Crysis numbers.

Totally agree on the 8fps vs. 1fps scenario. Nvidia claiming, "Impeccable 800% performance over the HD5870!"
    Bring... bring the amber lamps.

  23. #1973
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
If anything from Charlie is right, it's that NV's architecture is being held back by clock speed at the silicon level. If that's true, hopefully NV can introduce vastly better clocks with its next spin, along with a shuffling of things.

I remember the best respin and architecture shift for them was the 7800 to 7900 GTX generation. They jumped from 430MHz up to 650MHz, and there were even fewer transistors in the 7900 GTX. Although some of the clock speed came from using a smaller process, I think some of the change had to do with NV rearranging things. That needs to happen ASAP.

A 1700MHz shader clock with good drivers should be competitive even with the 5970, even given the currently leaked performance.
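Assuming performance scaled linearly with shader clock (it won't quite, so treat this as an upper bound; the 1401MHz base is the rumored GTX 480 shader clock):

Code:
# What a respin's clock bump would buy if perf scaled with shader clock.
base_mhz, respin_mhz = 1401.0, 1700.0   # rumored 480 vs. the 1700 above
print(f"~{respin_mhz / base_mhz - 1:.0%} more shader throughput")  # ~21%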
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  24. #1974
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by Teemax View Post
I doubt that NVIDIA would invest that much in dedicated tessellation hardware. It's probably more likely that they use the "CUDA cores" to do tessellation in exchange for shader performance.

If that's the case, their DX11 driver will have some delicate load balancing to do. It must decide how many cores should be reserved for tessellation in each game. Not necessarily an elegant solution.
If what you said were true, then nVIDIA lied in a public technical specifications PDF.
And you're wrong about load balancing via the driver.
That would be the silliest move ever.
"CUDA cores" can be dynamically assigned to various tasks, and load balancing should be, and is, done by the hardware, not software.

Not very elegant thinking, I'd say
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  25. #1975
    Xtreme Member
    Join Date
    Mar 2008
    Location
    Idaho
    Posts
    122

    One worry

One thing I worry about is game development. For example, take MW2: only 13% of the copies sold, if I remember right, were for PC, and even with "only" that percentage it was still a huge-selling PC title. I disagree with folks who say PC gaming is dying, and I will religiously defend my 4.0GHz i7 rig with all its GTX and SSD and RAID 0 goodness (soon to be watercooled as well!). And I also agree with those who point to the digital distributors, such as Steam and D2D.

HOWEVER... even with all those considerations, consoles are still king when it comes to overall sales, and from a profit-maximization standpoint that makes sense. If you're a dev, don't you want to develop in an arena where you'll sell the maximum number of copies possible? And as an aside, isn't it nice to develop for the SAME PLATFORM, year after year after year???

I know DX11 brings some cool stuff to the table, but with a stack of games on my "to play" list that don't even use DX10, I am starting to have a hard time agreeing with such aggressive pushes to new standards. Look at the spot devs are in right now. If they want the high-end enthusiasts' attention, they're going to have to go after DX11. But how many people out there have DX11 hardware? The number is growing, but there are still millions of PC users, even gamers with decent hardware, whose systems lack support for DX10. So now devs have to make new games support DX11, the latest and greatest; support DX10, which is supposed to be the current mainstream; AND write in compatibility with the older DX9 if they also want to be able to sell to that market segment, which is still quite large????

    ARE YOU KIDDING?

I hope this isn't having a counterproductive effect on PC game development, and I know for some companies it is not; they will go after that market no matter what. But I still worry that it's putting them in a spot where they have to do more work for an even smaller part of the market. And if they want ALL of the PC market, they're going to have to do even more work.

I'm all for leaving an older engine behind for a newer, leaner, more efficient one, but I wonder if we're pushing a bit too fast. Maybe it was OK back when consoles weren't such an overwhelming majority option, but today things are different.

    Just some thoughts. I might be crazy though. Way past my bedtime.
    Workstation/Playstation: i7 920 D0 @ 3.8, TRUE, EVGA x58, BFG GTX 295, WD 640x2 in RAID 0, 12GB OCZ DDR3, ATCS 840, Dell 24" & Samsung 19", Razer Lycosa and Logitech G9r, HTO Striker, Windows 7 Ult. 64, Antec TruePower 1000w.
    Wife's Computer: E4500 @ 3.0GHz on 1.36v, Rocketfish CPU Cooler, Foxconn, 4GB Corsair, Sea 250, Antec Sonata w/500w, ATI 2400, 17" Samsung, Vista Ult. 32.
    HTPC: X2 2400BE, 500GB + 750GB + 80GB, ATI 3450 Hybrid CF w/Asus 780G, 4GB A-Data 800MHz DDR2, 32" Westinghouse, XBOX 360 HDDVD (w/o xbox), Aver DTV Tuner, Huappage DTV tuner, Windows 7 32 bit.
    PDAs: iPod Touch 16GB and Samsung Omnia

    See my work log here: PROJECT ATCS 840
