
Thread: The GT300/Fermi Thread - Part 2!


    Quote Originally Posted by mapel110 View Post
    Yes, because ATI launched a mid range chip with 2.15 Billion Transistor and nvidia is releasing a real high end monster with 3 Billion Transistors.
    drawing conclusions from their sheer transistor count is not a good idea...
    and calling the 5870 a midrange chip is a pretty weird statement... ati had to go for multi display setups to find a configuration that makes use of all the graphics performance it offers...

    Quote Originally Posted by H2O View Post
    Saaya, I thought we had a PCIe rep confirm that neither the HD5970 nor the GTX295 were certified as PCIe compatible, because they could surpass the 300W TDP limit. So if the GTX495 goes over 300W, and Nvidia can sufficiently cool it, losing the PCIe compatibility should not be a big deal, right?
    Was it a PCIe rep? I thought somebody just checked the list on the PCI-SIG site. Note that I looked at real measured power consumption, not TDP and peak values... Of course you can build a MARS-like card, but we all know that cooling was a major issue with that card, and it wasn't stable at stock speeds for some people as it simply ran too hot. And that was with a huge and expensive heatsink already... Like I said, above 300W you reach a point where every extra watt of power makes the PCB, PWM and heatsink designs exponentially more expensive.
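    The 300W ceiling being argued about falls out of simple arithmetic over the PCI-SIG connector limits (75W from the x16 slot, 75W per 6-pin plug, 150W per 8-pin plug). A minimal sketch; the helper function name is just for illustration:

    ```python
    # Spec-compliant board power budget from the PCI Express
    # electromechanical limits: x16 slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.

    def board_power_budget(six_pin=0, eight_pin=0, slot_w=75):
        """Maximum spec-compliant board power in watts."""
        return slot_w + six_pin * 75 + eight_pin * 150

    # One 6-pin plus one 8-pin tops out exactly at the 300 W ceiling
    # the post refers to:
    print(board_power_budget(six_pin=1, eight_pin=1))  # 300
    # Two 8-pin plugs (as on a MARS-style card) already go beyond it:
    print(board_power_budget(eight_pin=2))  # 375
    ```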

    Quote Originally Posted by trinibwoy View Post
    Depends on which games you're talking about. Future games will make much more use of compute shaders and hence the distinction between games and general computing will begin to fade. The software is badly lagging the hardware at this point so it's hard to see the benefits on anything more than an academic level but hopefully that changes soon.
    Yes, but why should software suddenly, magically, catch up? Why should there not only be a lot of DX11 games, but good DX11 games, and then not only good DX11 games but good DX11 games that use compute shaders so heavily that GF100 gains an advantage from it? I just don't see that happening... Sure, eventually games will demand a lot more tessellation and compute shader power, but by then we will have second- and most likely third- or fourth-gen DX11 hardware, and all this first-gen DX11 stuff will be useless.

    Quote Originally Posted by Marios View Post
    The official TDP is different though.
    GTX295, Radeon HD 5970 and Radeon HD 4870X2 have a TDP of about 290W.
    GTX280 240W, HD5870 190W, GTX285 180W, HD4890 190W full load.
    Yes, but the TDP values only matter for certification and verification with the PCI-SIG... I'm more interested in the feasibility of a card above 300W than in whether it can be certified.

    Quote Originally Posted by Olivon View Post
    http://tof.canardpc.com/view/a46f0aba-0458-4abe-9ce6-f1c14b4d5cbd.jpg
    http://tof.canardpc.com/preview2/a46...c14b4d5cbd.jpg
    More 8 vs 1 fps, 1.3GB vs 1GB nonsense...

    Quote Originally Posted by zalbard View Post
    GPU-Z does not support Fermi yet, it is a fake.
    It does... but this one is a fake.
    Whoever did it made a loooot of mistakes; I think he wanted people to know it's fake, the mistakes are too obvious...

    Quote Originally Posted by ethomaz View Post
    Crysis Warhead: HD 5870 vs GTX 470 (Already posted?).

    PS. GTX 470 confirmed, judging by the mem size.
    http://bbs.expreview.com/attachments...2c1b865e9c.jpg
    15fps average... and more 1.3GB vs 1GB nonsense...

    Quote Originally Posted by illidan View Post
    so then, in your opinion, what's the difference between GDDR5 and DDR5?
    That's like asking you what the difference between an elephant and a llama is, in your opinion. It's not an opinion, it IS a different standard... Why EVGA keeps making this mistake, who knows... Either they don't know, which is very possible (they are marketing people after all), or they say DDR plus a high number because it makes it sound more advanced... Everybody knows their system is using DDR2 or DDR3 memory, and if they think the memory on the card is 2 or 3 generations ahead, many n00bs probably go whOooOOoOAaAaaA

    Quote Originally Posted by Chickenfeed View Post
    Buy a HD5 series or GT4xx because of performance in current (DX9/10) games.
    Yes, totally agree.
    Buy a next-gen card that DOES support the next standard, but when it comes to performance, focus on current games.
    Quote Originally Posted by Chickenfeed View Post
    All of EVGAs boxes say DDR not GDDR.
    That doesn't make it correct, does it?
    Some cards actually do use DDR memory as it's cheaper; entry-level and mainstream cards especially tend to use DDR2 and DDR3 these days, as it's fast enough and cheaper.

    Quote Originally Posted by Chickenfeed View Post
    Given the wider bus, they don't need memory much faster (if faster at all) than what's on the 58x0 cards to achieve adequate bandwidth. Unless they choose to use faster memory later on down the pipe (amidst the delays), I don't expect clocks faster than 1300MHz (5.2GHz effective) myself.
    Yes, I totally agree... That was Nvidia's strategy with GT200 as well: a wider bus means they can use cheaper, slower memory and still beat ATI in bandwidth, and they don't need to push clocks really high, which can be a pain. Since they need all the performance they can get though, I wouldn't be surprised if they actually go for fast GDDR5 now... probably as fast as they can get it to run... Since their GDDR5 controller is first-gen or maaaybe second-gen, I'm not sure how high they will be able to get... The IMC will be the same as, or a slightly tweaked version of, the one in the GT21x DX10.1 40nm cards, and those only clock in at 3500MHz effective...
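    The wider-bus argument is just a bandwidth calculation: peak bandwidth is the effective data rate times the bus width in bytes. A quick sketch using the figures floated in the thread (the GTX 480 numbers were rumors at this point, not confirmed specs):

    ```python
    # Peak memory bandwidth: effective data rate (MT/s) * bus width (bytes).

    def bandwidth_gb_s(effective_mhz, bus_bits):
        """Peak bandwidth in GB/s."""
        return effective_mhz * (bus_bits // 8) / 1000

    # HD 5870: 256-bit bus, 4800 MHz effective GDDR5
    print(bandwidth_gb_s(4800, 256))  # 153.6
    # Rumored GTX 480: 384-bit bus needs only ~3700 MHz effective to beat it
    print(bandwidth_gb_s(3696, 384))  # 177.408
    ```

    This is why a 384-bit bus lets Nvidia win on bandwidth with memory clocked well below what ATI needs on 256 bits.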

    Quote Originally Posted by Chickenfeed View Post
    All that said, with 1.5gb VRAM and high bandwidth, SLI 480s should be the best 2560x1600 high IQ config for some time to come ( I have my doubts that 2 5870 eyefinity cards will do better )
    5870 CrossFire might be good enough though, and cheaper...

    Quote Originally Posted by orangekiwii View Post
    If the advantage is for future DX11 games, then that's not really an advantage for consumer usage. By the time there is any spectacular or even semi-decent game (AvP is bad and runs poorly anyway), there will be MUCH better DX11 hardware out. What matters is current games; if the GTX480 is the same as the 5870, then what's the point for consumers? I want performance in current games, not in games 2 years from now; if I did, I'd buy a card in 2 years. If the GTX480 does in fact more or less equal the 5870, then there's no reason to buy any card from this generation, as simply put it's just not a big enough step up in performance on any level.
    Totally agree.
    Hoarding hardware performance "for later" is the most foolish thing you can do in IT.

    Quote Originally Posted by annihilat0r View Post
    Crysis Warhead numbers look interesting!

    Quote Originally Posted by annihilat0r View Post
    This is getting ridiculous. At launch the highest Cypress chip (5870) was slower than the x2 version of the latest generation. (4870x2 > 5870) Now it's a problem that 470 (not even 480) is slower than a GTX 295?
    Well maaaybe, just maaaybe, that's because Nvidia created huge hype with several events and claimed 40-60% over the 5870?

    Quote Originally Posted by Sly Fox View Post
    So a mere 7 to 9 month wait gets you an extra 5% performance. With probably an additional 15% power consumption.
    Impressive Nvidia, impressive.
    Quote Originally Posted by weston View Post
    don't forget the 25% extra price
    GT200 (GTX 295) vs GT300 (470/480):
    ~5-20% extra performance
    ~5-20% extra power consumption
    ~5-40% higher price
    +DX11
    +single GPU instead of dual GPU

    I think that's actually pretty damn good, and it's not like ATI did any better...
    The 5870 is slower than the 4870X2 and costs more, but it consumed less power, was a single GPU and had DX11, which made it acceptable... While the 470 will probably lose to the 295, the 480 definitely won't. More performance comes at a cost: more power and a higher price. I think the price doesn't justify the extra performance, especially because more performance at higher prices is not what 90% of the market needs and wants right now... But hey, market demand will take care of that, and I'm sure there are enough people who are willing to pay huge prices for the fastest single-GPU card. The only problem I see for Nvidia is availability...
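    The "price doesn't justify the performance" point can be made concrete as a performance-per-dollar ratio. Taking the midpoints of the rumored ranges in the list above (illustrative only, not measured figures):

    ```python
    # Gen-over-gen value check using midpoints of the rumored ranges:
    # +5..20% performance -> x1.125, +5..40% price -> x1.225.
    perf_gain = 1.125
    price_gain = 1.225

    value_ratio = perf_gain / price_gain
    print(round(value_ratio, 3))  # below 1.0: worse perf-per-dollar
    ```

    A ratio below 1.0 means you pay proportionally more per frame than on the previous generation, which is the complaint, even before counting DX11 and the single-GPU advantage.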

    If you compare the last product cycles from ATI and Nvidia, the differences are that Nvidia uses more power and costs more, but also offers a performance boost, while ATI couldn't even reach the performance of their previous-gen high-end dual-GPU card... ATI was able to ship though, slowly and with a few bumps, but they could... Even the 470 seems to be veeery limited in numbers :/

    I think this is a classic example of PR hype actually hurting the product because it drove expectations too high... And it was a bad decision to focus so much on more performance for a higher price instead of the same performance for a lower price, as performance really isn't such a limiting factor for today's gaming PCs...

    The specs look fine though; performance is good, I think, the price is acceptable, and so are the power consumption and heat... But availability... that's a real issue... not so much for Nvidia itself, but for its partners it's a huge problem... they need some business to make money...
    Last edited by saaya; 03-08-2010 at 01:49 AM.
