
Thread: Kepler Nvidia GeForce GTX 780

  1. #2976
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by SKYMTL View Post
    What's wrong with CUDA? People in the professional / HPC world are still "shouting it from the rooftops" as you put it. Currently, there isn't a better, more adaptable GPU compute language on the market. OpenCL surely has the chance to make it big but, being an open format, there is still very little directed focus on addressing its many inefficiencies.
    Because every time we talked about gaming perf / dollar, gaming perf / watt, or whatever metric in *gaming*, we got the same refrain of 'CUDA and PhysX, perf regardless of watt, blahblah.' Now Nvidia is pursuing the same philosophy ATI did, and the arguments have (not) surprisingly shifted their focus to the exact same metrics ATI's VLIW4/5 was lauded for from the beginning.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  2. #2977
    Xtreme Addict
    Join Date
    Jan 2007
    Location
    Brisbane, Australia
    Posts
    1,264
    So according to the whitepaper, a full GK110 should be:

    CUDA cores: 2880 (15 SMX)
    ROPs: 48?
    Memory interface: 384-bit

    7.1B transistors.

    GK104 is:

    CUDA cores: 1536
    ROPs: 32
    Memory interface: 256-bit

    3.5B transistors


    With the added DP units and 720KB of data cache, it's gonna be one hell of a big piece of sand!

    PS: there's a typo in the xbit link for the core count... edit: hang on, I'm guessing they're including the DP units...
    Last edited by mAJORD; 05-16-2012 at 08:02 PM.
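To put those core counts in perspective, here is a rough back-of-the-envelope sketch of peak FP32 throughput. The 1006 MHz figure is the GTX 680's base clock; the 850 MHz GK110 clock is purely an assumption (the part was unannounced at this point), and the helper name is just illustrative:

```python
# Rough peak-FP32 arithmetic implied by the core counts above.
# Assumption: each CUDA core retires one fused multiply-add (2 FLOPs) per cycle.
def peak_fp32_gflops(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz / 1000.0

print(peak_fp32_gflops(1536, 1006))  # GK104 / GTX 680 at base clock -> ~3090 GFLOPS
print(peak_fp32_gflops(2880, 850))   # full GK110 at an assumed 850 MHz -> ~4896 GFLOPS
```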

  3. #2978
    Xtreme Enthusiast
    Join Date
    Nov 2007
    Posts
    872
    Quote Originally Posted by cegras View Post
    Because every time we talked about gaming perf / dollar, gaming perf / watt, or whatever metric in *gaming*, we got the same refrain of 'CUDA and PhysX, perf regardless of watt, blahblah.' Now Nvidia is pursuing the same philosophy ATI did, and the arguments have (not) surprisingly shifted their focus to the exact same metrics ATI's VLIW4/5 was lauded for from the beginning.
    Errr Cegras, you do know the 6 series runs PhysX faster than the 5 series, right?

    http://physxinfo.com/news/7862/gtx-6...marks-roundup/

    I'd say having PhysX that's better than a GTX580>>>>>>no PhysX at all.
    Intel 990x/Corsair H80 /Asus Rampage III
    Coolermaster HAF932 case
    Patriot 3 X 2GB
    EVGA GTX Titan SC
    Dell 3008

  4. #2979
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by Rollo View Post
    Errr Cegras, you do know the 6 series runs PhysX faster than the 5 series, right?

    http://physxinfo.com/news/7862/gtx-6...marks-roundup/

    I'd say having PhysX that's better than a GTX580>>>>>>no PhysX at all.
    A majority of reviews I've checked don't even bother with PhysX testing. TechReport / TechPowerUp / AnandTech... and I don't think I saw a mention of it on HWC either. The point is, I could look up a GTX 480 review and chances are it would be testing Mirror's Edge and Batman with PhysX. Now we don't see any of that. Back then we might have seen reviewers spend a page or two talking about CUDA; now even compute is barely mentioned.

    http://www.anandtech.com/show/2977/n...th-the-wait-/5

    http://www.anandtech.com/show/5699/n...gtx-680-review
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  5. #2980
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by cegras View Post
    Because every time we talked about gaming perf / dollar, gaming perf / watt, or whatever metric in *gaming*, we got the same refrain of 'CUDA and PhysX, perf regardless of watt, blahblah.' Now Nvidia is pursuing the same philosophy ATI did, and the arguments have (not) surprisingly shifted their focus to the exact same metrics ATI's VLIW4/5 was lauded for from the beginning.
    You're giving AMD way too much credit here. Besides holding the performance-per-watt crown for a while, performance per dollar has been pretty similar for both companies for a while (GTX 260/4870, 4890/GTX 275, GTX 470/5870, GTX 570/6970). The only card that has been pricier than its AMD counterpart has been NV's top card, which typically justifies its price by being the fastest on the market and much costlier to produce than anything else from either company. Although the flagship card says a lot about the company, it doesn't represent everything about the company.

    Also, we are seeing a lot of people, including yourself, choosing AMD this generation for what Nvidia was strong at last generation, namely its GPU computing abilities, so I really don't get your annoyance or anger. Nvidia has been stressing this for years, and in the past a chasm even bigger than the performance-per-watt gap was established in favor of Nvidia cards in most GPU compute situations besides bitcoining, and it held a tremendous lead for performance per watt in the professional market. This reason holds less water when talking about AMD-based solutions at the moment because AMD is still largely untested compared to Nvidia on how good their cards are in the professional market. AMD needs to get their professional cards out ASAP, increase their driver support in this field exponentially (NV has an unquestionable lead in professional driver support; the consumer drivers are somewhat more debatable), and start winning more than a benchmark like LuxMark in reviews, winning in industry-standard programs like Nvidia has done in the past. Nvidia's dominance here is unquestionable as well because of their vast market-share lead over AMD in the pro market.

    http://hothardware.com/Reviews/NVIDI...Review/?page=5

    In addition, although AMD might be good at OpenCL, without them putting huge amounts of marketing muscle and money into it like NV has done with CUDA, these efforts might not bear fruit.

    Both companies are simply matching design philosophies because their goals are not mutually exclusive. Performance per watt has always been a goal for Nvidia because, I imagine, it is essential for both gaming and GPU compute. GPU compute has been one of Nvidia's number-one priorities and has directed how they have designed GPUs for the last 4+ years. Performance per watt is one of the biggest factors in whether GPU cards succeed and get used in professional systems and supercomputers. AMD has wanted to get into the professional market for a while, hence its shift to making its shaders more like Nvidia's. Both companies are merging their design philosophies, especially now, because GPU compute is a massive and untapped industry for growth and revenue.
    Last edited by tajoh111; 05-16-2012 at 08:43 PM.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  6. #2981
    Xtreme Addict
    Join Date
    Feb 2008
    Location
    America's Finest City
    Posts
    2,078
    Quote Originally Posted by SKYMTL View Post
    I wasn't referring to your post.

    However, this time I found the keynote had more info than the Q&A, but I guess that's just me....
    I thought the Q&A only fell short when he was asked about Project Denver, but considering that it wasn't mentioned at all during the keynote, I understand a bit why there was hesitance to respond.

    Also, now I feel dumb for having even replied :P
    Quote Originally Posted by FUGGER View Post
    I am magical.

  7. #2982
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by tajoh111 View Post
    Also, we are seeing a lot of people, including yourself, choosing AMD this generation for what Nvidia was strong at last generation, namely its GPU computing abilities, so I really don't get your annoyance or anger. Nvidia has been stressing this for years, and in the past a chasm even bigger than the performance-per-watt gap was established in favor of Nvidia cards in most GPU compute situations besides bitcoining, and it held a tremendous lead for performance per watt in the professional market. This reason holds less water when talking about AMD-based solutions at the moment because AMD is still largely untested compared to Nvidia on how good their cards are in the professional market. AMD needs to get their professional cards out ASAP, increase their driver support in this field exponentially (NV has an unquestionable lead in professional driver support; the consumer drivers are somewhat more debatable), and start winning more than a benchmark like LuxMark in reviews, winning in industry-standard programs like Nvidia has done in the past. Nvidia's dominance here is unquestionable as well because of their vast market-share lead over AMD in the pro market.
    I'm not buying it for compute. Although I wanted a 7850, the 7870 outclasses all cards in the same price region on nearly every metric, even if only gaming related ones are taken into account.

    Also, perf / dollar is a constantly shifting comparison between Nvidia and ATI due to the constant price drops that ATI forced. What I was talking about was things like perf per transistor / mm^2 / watt, all sorts of things that were architecturally derived (and not market derived), that are now being used as merits for Nvidia instead of ATI, while notably PhysX and CUDA have taken a huge back seat in all reviews published thus far. It's hypocrisy.

    Not that it really bugs me, but it should be acknowledged.
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  8. #2983
    Xtremely Kool
    Join Date
    Jul 2006
    Location
    UK
    Posts
    1,875
    Quote Originally Posted by cegras View Post
    I'm not buying it for compute. Although I wanted a 7850, the 7870 outclasses all cards in the same price region on nearly every metric, even if only gaming related ones are taken into account.

    Also, perf / dollar is a constantly shifting comparison between Nvidia and ATI due to the constant price drops that ATI forced. What I was talking about was things like perf per transistor / mm^2 / watt, all sorts of things that were architecturally derived (and not market derived), that are now being used as merits for Nvidia instead of ATI, while notably PhysX and CUDA have taken a huge back seat in all reviews published thus far. It's hypocrisy.

    Not that it really bugs me, but it should be acknowledged.
    Yep, I remember saying something to that effect on another forum.

  9. #2984
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by cegras View Post
    I'm not buying it for compute. Although I wanted a 7850, the 7870 outclasses all cards in the same price region on nearly every metric, even if only gaming related ones are taken into account.

    Also, perf / dollar is a constantly shifting comparison between Nvidia and ATI due to the constant price drops that ATI forced. What I was talking about was things like perf per transistor / mm^2 / watt, all sorts of things that were architecturally derived (and not market derived), that are now being used as merits for Nvidia instead of ATI, while notably PhysX and CUDA have taken a huge back seat in all reviews published thus far. It's hypocrisy.

    Not that it really bugs me, but it should be acknowledged.
    How is it hypocrisy when ON TOP OF THAT even the $400 GTX 670 flat out outperforms not only the 7950 but also the 7970 at a lower cost? Yeah, Nvidia also provides a more feature-rich solution.

    WTF does the 7870 outclass? It's hardly faster than a GTX 570 that I could have, and did, pick up a year and a half before the 7870 was ever released. I don't get your logic. It's just hardcore fanboyism.

    When the GTX 680 launched it was cheaper than the 7970, faster, and drew less power. The first two are the most important part.
    Last edited by BababooeyHTJ; 05-17-2012 at 03:24 PM.

  10. #2985
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by cegras View Post
    I'm not buying it for compute. Although I wanted a 7850, the 7870 outclasses all cards in the same price region on nearly every metric, even if only gaming related ones are taken into account.

    Also, perf / dollar is a constantly shifting comparison between Nvidia and ATI due to the constant price drops that ATI forced. What I was talking about was things like perf per transistor / mm^2 / watt, all sorts of things that were architecturally derived (and not market derived), that are now being used as merits for Nvidia instead of ATI, while notably PhysX and CUDA have taken a huge back seat in all reviews published thus far. It's hypocrisy.

    Not that it really bugs me, but it should be acknowledged.
    This is simply not correct. It goes both ways; it hasn't been only AMD forcing NV's hand, it has been NV doing the price attacks lately, e.g. the GTX 570, GTX 560 Ti, GTX 460, GTX 670 and GTX 680. As I have said, you are giving AMD far too much credit, as if they have done all that is good for GPUs in general.

    Architectures are purely market derived. Both companies have designed their chips to make money in different ways, across both the gaming and professional markets.

    Performance per transistor has never really been talked about, lol; I don't know why you're even mentioning it. And for that metric, Nvidia has been competitive with AMD, e.g. GTX 580 vs 6970 is 3 billion vs 2.64 billion.

    If anything, AMD's showing for performance this round has shown why NV fell behind in performance per watt and die size in the past.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  11. #2986
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by cegras View Post
    It's hypocrisy.

    Not that it really bugs me, but it should be acknowledged.
    agreed on this part. some details are debatable, but what amd used to be strong at didn't matter until nvidia did it
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  12. #2987
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    lol so they spend so much time/money/r&d on gk110 only to offer it in tesla form ?? hmmm

    ill take ~1000$ 3gb/6gb buffer 2880 core gk110 over a stupid arse gtx690 any time any place

  13. #2988
    Xtreme Addict
    Join Date
    Jan 2007
    Location
    Brisbane, Australia
    Posts
    1,264
    Quote Originally Posted by NapalmV5 View Post
    lol so they spend so much time/money/r&d on gk110 only to offer it in tesla form ?? hmmm

    ill take ~1000$ 3gb/6gb buffer 2880 core gk110 over a stupid arse gtx690 any time any place

    Tesla is primarily what GK110 was developed for.. so what you said doesn't make much sense.

    I'm a fan of single cards too, but it's a pretty safe bet that when/if GK110 does appear as a gaming card, it will be notably slower than a GTX 690 out of the box, so I'd expect to pay a lot less than that..

    Also it's highly likely it wouldn't have all SMXs enabled either.

    Like all power/thermally limited SKUs though, it would be a mad overclocker with the right cooling

  14. #2989
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    lol what im saying theyd have to be stupid not to offer gk110 in geforce form

    less indeed ~600$ for the same gtx690 full performance.. not notably slower

    tesla k10 = gtx690 no reason "gtx780" is not going to equal tesla k20 (15x smx)

  15. #2990
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    The small size of NV's chips has, if anything, been more of a negative for Nvidia this round, at least in the debates around here. I hardly see people talking about performance per mm2 and saying it is crazy fantastic. What I do see a lot more arguing about is people complaining about how NV could charge so much for a small chip, and I think the same thing applies to both companies. I have to agree GK104 is priced too high, and the same argument applies to AMD this round too (not as much after the price cuts).

    I think the big reason people are lauding NV's performance per mm2 in these debates is this (plus this is the GTX 780 thread): if the mid-tier GK104 performs a bit better than a 7970, then just imagine how the doubled-up GK110 is going to perform. The performance per watt and per mm allows for a much grander flagship for the enthusiast from NV, which honestly anyone should get excited about, since the performance will be simply monstrous depending on whether they screw up or not.

    As someone who can appreciate AMD's efficient designs in the past, imagine if AMD announced it was going to make a near-600mm2 version of their architecture. You or anyone on this forum would be excited at the performance potential of taking an efficient architecture and scaling it up. NV has made an efficient architecture like AMD's, but has the balls to scale it up like it has in the past.

    What has hurt AMD's image this round with the reviewers and the debaters is this: if you're going to raise the pricing bar, you'd better bring the performance to back it up, especially when you're on a new manufacturing node. By making only a moderately sized hybrid gaming/GPU-computing card (rather than a moderately sized gaming part or a monolithic hybrid GPU), you simply leave so much room on the table for the competition to make you look bad. Did AMD really think its pricing would hold with a card that is 20% faster than a GTX 580 while costing 10% more for the next generation?

    AMD should have designed a larger chip to compensate for the die area the compute hardware was going to take up and clocked it higher (though really 75MHz would have hardly made a difference; check the original 7970 thread for expectations of AMD performance this generation), or priced the chip more appropriately. Because of AMD's pricing this generation, GK104 got promoted to GTX 680 status, with the pricing to match, and was still the much better deal out of the gate even at that inflated price.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  16. #2991
    Xtreme Addict
    Join Date
    Jan 2007
    Location
    Brisbane, Australia
    Posts
    1,264
    Quote Originally Posted by NapalmV5 View Post
    lol what im saying theyd have to be stupid not to offer gk110 in geforce form

    less indeed ~600$ for the same gtx690 full performance.. not notably slower

    tesla k10 = gtx690 no reason "gtx780" is not going to equal tesla k20 (15x smx)
    Forget K10 vs K20.. K20 is aimed at a different market.

    I'm sorry, but unless GK110 is a step UP in performance/watt for gaming (despite the opposite being more likely), I wouldn't get your hopes up for that.

    not saying it wouldn't be fast.. just not that fast.

  17. #2992
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    k10 vs k20 different markets ?? what you said doesn't make much sense jk

    why because nvidia categorized k10 k20 differently ? its the same hpc crowd

    its not about getting hopes up its about knowing/looking at the data

    gtx690 full sli performance will be matched and in many games it will be surpassed and in some games it will get decimated because of nil sli support

  18. #2993
    Xtreme Addict
    Join Date
    Jan 2007
    Location
    Brisbane, Australia
    Posts
    1,264
    Quote Originally Posted by NapalmV5 View Post
    k10 vs k20 different markets ?? what you said doesn't make much sense jk

    why because nvidia categorized k10 k20 differently ? its the same hpc crowd
    http://www.anandtech.com/show/5840/g...ased-tesla-k20 figure 1

    its not about getting hopes up its about knowing/looking at the data

    gtx690 full sli performance will be matched and in many games it will be surpassed and in some games it will get decimated because of nil sli support

    What is it about the available data for GK110 that makes you so certain it will match gtx690?

  19. #2994
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Hawaii
    Posts
    611
    Quote Originally Posted by tajoh111 View Post
    In addition, although AMD might be good at OpenCL, without them putting huge amounts of marketing muscle and money into it like NV has done with CUDA, these efforts might not bear fruit.
    True to a point. If OpenCL fails to catch on then AMD does stand to lose out. The thing is, Apple has baked OpenCL into Mac OS, Adobe now supports it in their Creative Suite, and rumor has it that HandBrake will soon support it too. Add to that the fact that Nvidia supports OpenCL as well.
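As a side note on what that cross-vendor support means in practice, here is a minimal, hypothetical PyOpenCL sketch; the same kernel source builds and runs on AMD, Nvidia, or Intel OpenCL drivers, and the array size and kernel name are purely illustrative:

```python
# Minimal vendor-neutral OpenCL example (assumes pyopencl and any working OpenCL driver).
import numpy as np
import pyopencl as cl

a = np.arange(1024, dtype=np.float32)
b = np.arange(1024, dtype=np.float32)

ctx = cl.create_some_context()      # picks whichever OpenCL device is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(out[:5])   # [0. 2. 4. 6. 8.]
```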

  20. #2995
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by tajoh111 View Post
    What has hurt AMD's image this round with the reviewers and the debaters is this: if you're going to raise the pricing bar, you'd better bring the performance to back it up, especially when you're on a new manufacturing node. By making only a moderately sized hybrid gaming/GPU-computing card (rather than a moderately sized gaming part or a monolithic hybrid GPU), you simply leave so much room on the table for the competition to make you look bad. Did AMD really think its pricing would hold with a card that is 20% faster than a GTX 580 while costing 10% more for the next generation?

    AMD should have designed a larger chip to compensate for the die area the compute hardware was going to take up and clocked it higher (though really 75MHz would have hardly made a difference; check the original 7970 thread for expectations of AMD performance this generation), or priced the chip more appropriately. Because of AMD's pricing this generation, GK104 got promoted to GTX 680 status, with the pricing to match, and was still the much better deal out of the gate even at that inflated price.
    You seem to be forgetting AMD's big push with Tahiti... time to market.
    AMD was first out of the gate with their "high end" on a new process. They were cautious and conservative.
    There is a reason the GTX 680/670 is able to be priced where it is and no, it isn't because of its size.

    AMD tested the waters, put a new high end on the market that was unchallenged for a quarter, and subsequently an entire lineup that is still unchallenged...
    Don't get me wrong. GK104 is a great chip and shows just how well Nvidia can compete when focused. I'm sure big daddy will be good as well, but if you think AMD doesn't know this and doesn't have anything planned for later this year...
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  21. #2996
    Xtreme Member
    Join Date
    Mar 2007
    Posts
    317
    Here's why I think this chip (GK110) should be released on the desktop as well... in a way it is "owed" to us.

    Let us start from November 2006, when nVidia decided to release her first card with unified shaders (the 8800 GTX). It seems that nVidia follows a release/manufacturing pattern equivalent to Intel's tick-tock, with even-numbered card generations being the "tock" (new architecture, much greater performance) and odd-numbered generations being the "tick" (minor tweaks).

    Anyhow, back to November 2006: the 8800 GTX is released and takes the world by storm. nVidia then waits another 20 months (to June 2008) to release an architecture which is truly more powerful than G80. It was called the GTX 280 and it was (on average) about 80% faster across the board in most games at the most popular resolution than the G80. That is not exactly a doubling in performance, but enough to call it a new generation of card, as it matches (or even exceeds) the performance of two G80 cards (the last "tock" cards) in an SLI arrangement.

    From there it took nVidia a further 21 months to get around 75% more performance out of the next "tock" card (the GTX 480), which was released in March 2010. Again we see the same pattern of advancement (but sadly the cards start getting hotter and more power hungry, and the release schedule slipped: 21 months instead of 20).

    And that leads us to "modern day", March 2012 (24! months since the last "tock" card), where we get a card which is at most 60% faster than the last "tock" card (the GTX 480) and which took *even* longer to be released. My point is that things are obviously slowing down: there is more time involved in the creation of each new generation, and the end product is progressively less impressive from a performance point of view as the generations go by.

    I'm not trying to argue that the GTX 680 is a bad card (it's a surprisingly good one, all parameters considered); my argument is instead that nVidia would probably *need* to release GK110 on the desktop as well to keep pace with the cadence she established back in 2006 (and probably even before that). Of course she is not bound to do that, but it would be greatly disappointing from a consumer's point of view, as we get increasingly delayed products with increasingly smaller performance upgrades.

    There will be a day, sadly not too far away, when new generations will mean close to nothing; it seems Intel is also nearing that point (Ivy Bridge is barely better than Sandy Bridge in most respects). And like I said, that would be sad, because it means the consumer-grade IT industry is being slowed down quite aggressively: our hobby of taking new and exciting things and tweaking them to the max may soon come to an end, or, worse, be rendered meaningless as companies increasingly release rehashes of the same product. Worst of all, it would slow down the world at large, since advances in desktop computing eventually infiltrate every other aspect of computing and social life; it's as if the whole world slows down when nVidia (and Intel) decide they no longer want to keep the pace they once did... I know how silly it sounds, but I'm afraid that when all the ramifications are considered it will be proved right.
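As an aside, a small sketch tabulating the gaps and gains quoted above makes the slowdown explicit; the percentages are the post's own estimates, and the per-year normalisation is only illustrative:

```python
# Generation-over-generation gains and release gaps as described in the post above.
generations = [
    ("8800 GTX -> GTX 280", 20, 0.80),
    ("GTX 280  -> GTX 480", 21, 0.75),
    ("GTX 480  -> GTX 680", 24, 0.60),
]

for step, months, gain in generations:
    # Normalise each gain to an annual rate so unequal gaps can be compared.
    per_year = (1 + gain) ** (12 / months) - 1
    print(f"{step}: +{gain:.0%} over {months} months (~+{per_year:.0%} per year)")
```

The annualised gain falls from roughly +42% to +26% per year across the three steps, which is the slowdown the post is describing.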

  22. #2997
    Xtreme Member
    Join Date
    Dec 2008
    Location
    India
    Posts
    394
    HD 8xxx is expected soon I guess... don't think we will see GK110 till ATI has their next gen out, and most likely it's gonna be out a little earlier than GK110. Great times for us consumers, you can't go wrong with either party imo. I am a little biased towards NV because the red team seem to cheat on IQ every now and then, or so they say; also the drivers are nicer and PhysX is cool.

    I'm on a 6950 myself and stuff looks fine imo.

    This battle has been good, 7xxx vs 6xx; looking forward to a similarly closely contested 8xxx vs 7xx.

  23. #2998
    Xtreme Addict
    Join Date
    Apr 2011
    Location
    North Queensland Australia
    Posts
    1,445
    Quote Originally Posted by Zloyd View Post
    HD 8xxx is expected soon I guess... don't think we will see GK110 till ATI has their next gen out, and most likely it's gonna be out a little earlier than GK110. Great times for us consumers, you can't go wrong with either party imo. I am a little biased towards NV because the red team seem to cheat on IQ every now and then, or so they say; also the drivers are nicer and PhysX is cool.
    Actually I think it's a terrible time.

    The price of these graphics cards compared to previous top end stuff is ridiculous.

    -PB
    -Project Sakura-
    Intel i7 860 @ 4.0Ghz, Asus Maximus III Formula, 8GB G-Skill Ripjaws X F3 (@ 1600Mhz), 2x GTX 295 Quad SLI
    2x 120GB OCZ Vertex 2 RAID 0, OCZ ZX 1000W, NZXT Phantom (Pink), Dell SX2210T Touch Screen, Windows 8.1 Pro

    Koolance RP-401X2 1.1 (w/ Swiftech MCP35X), XSPC EX420, XSPC X-Flow 240, DT Sniper, EK-FC 295s (w/ RAM Blocks), Enzotech M3F Mosfet+NB/SB

  24. #2999
    Xtreme Member
    Join Date
    Dec 2008
    Location
    India
    Posts
    394
    Well, the fastest single GPU is $500-550... looks OK to me; the dualie is $1000, which is an ouch, but previous dualies have not had fully enabled x80 chips, so...

    Edit - But I totally see your point. A friend is sitting on an SLI 580 setup and I ate his head to avoid 680 and 690 upgrades knowing GK110 is around. Another friend needed a card for 25xx rez so he is getting a 690, but damn it's not in stock anywhere (he doesn't have anything; he relocated and built a new rig).

    So coming back to your point of this not really being good value... it's not, considering Nvidia had the 680 pegged as a 670 and the 670 as a 660. Also lots of people screamed the 7970 was overpriced at launch, etc., so you are absolutely correct in that sense, no argument: we are getting RIPPED OFF this gen. Let's hope for a better next gen; I'm expecting a good jump. Sadly you can bet the 780 is gonna be $700 if HD 8xxx is meh.
    Last edited by Zloyd; 05-18-2012 at 05:05 AM.

  25. #3000
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Sydney , Australia
    Posts
    1,600
    Yeah, it's $500 to you but that is $700+ here; a 4GB 680 is $800+ and a 690 is $1500+ !!!

    Yes, it's expensive.

    Bencher/Gamer(1) 4930K - Asus R4E - 2x R9 290x - G.skill Pi 2200c7 or Team 2400LV 4x4GB - EK Supreme HF - SR1-420 - Qnix 2560x1440
    Netbox AMD 5600K - Gigabyte mitx - Aten DVI/USB/120Hz KVM
    PB 1xTitan=16453(3D11), 1xGTX680=13343(3D11), 1x GTX580=8733(3D11)38000(3D06) 1x7970=12059(3D11)40000(vantage)395k(AM3) Folding for team 24

    AUSTRALIAN DRAG RACING http://www.youtube.com/watch?v=OFsbfEIy3Yw
