View Poll Results: Will you buy GTX BLUE "F" Moon?

Voters: 26
  • Yes, take my moneyz: 6 (23.08%)
  • Nope, I will pass: 6 (23.08%)
  • Don't know, but I do look forward to it!: 11 (42.31%)
  • Don't know, I am not interested at the moment: 3 (11.54%)
Page 2 of 7
Results 26 to 50 of 169

Thread: Updated 2nd Time: Maxwell 800 series GPUs will be "quiet" powerful.

  1. #26
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by boxleitnerb View Post
    Your math is nice and all, but it won't happen that way. 20 nm isn't ready (capacity, Apple) and it is expensive. And Nvidia will not double performance if they don't need to. People bought the 680 that was only 30% faster than the 580. We call it "salami tactics": A little slice, another one, making money with each one instead of saturating the market only once with cards that are too fast. The times when a next gen card was more than 40-50% faster than its direct predecessor are gone for sure. Development has become too expensive for that.
    Which is why I said, for 28nm...

    Quote Originally Posted by GoldenTiger View Post
I could see a 4-5 GPC (probably 4) coming out as the GM204 flagship part, which would still make for an incredibly fast card (4 GPC = 2560 Maxwell shader units). Check out the thread I linked above for more thoughts, but I definitely think even with a GM204 we'll be very, very happy once it's available.
Yes, the math was indeed pretty nice. Completely feasible @ 28nm to have a very fast 4 GPC card released... as I said at the end of the linked post, they have tons of room to cut down and still release a nice jump. But from your response, I guess you didn't actually read it.

  2. #27
    Xtreme Addict
    Join Date
    Dec 2003
    Location
    At work
    Posts
    1,369
    I wish they'd release a card with the full-fledged GPU right out of the gate (at a premium price) and have it be completely unrivalled for a long period of time, rather than slowly increment the performance over two years.
    Server: HP Proliant ML370 G6, 2x Xeon X5690, 144GB ECC Registered, 8x OCZ Vertex 3 MAX IOPS 240GB on LSi 9265-8i (RAID 0), 12x Seagate Constellation ES.2 3TB SAS on LSi 9280-24i4e (RAID 6) and dual 1200W redundant power supplies.
    Gamer: Intel Core i7 6950X@4.2GHz, Rampage Edition 10, 128GB (8x16GB) Corsair Dominator Platinum 2800MHz, 2x NVidia Titan X (Pascal), Corsair H110i, Vengeance C70 w/Corsair AX1500i, Intel P3700 2TB (boot), Samsung SM961 1TB (Games), 2x Samsung PM1725 6.4TB (11.64TB usable) Windows Software RAID 0 (local storage).
    Beater: Xeon E5-1680 V3, NCase M1, ASRock X99-iTX/ac, 2x32GB Crucial 2400MHz RDIMMs, eVGA Titan X (Maxwell), Samsung 950 Pro 512GB, Corsair SF600, Asetek 92mm AIO water cooler.
    Server/workstation: 2x Xeon E5-2687W V2, Asus Z9PE-D8, 256GB 1866MHz Samsung LRDIMMs (8x32GB), eVGA Titan X (Maxwell), 2x Intel S3610 1.6TB SSD, Corsair AX1500i, Chenbro SR10769, Intel P3700 2TB.

    Thanks for the help (or lack thereof) in resolving my P3700 issue, FUGGER...

  3. #28
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by boxleitnerb View Post
    Too conservative? The 750 Ti is about 25% faster than the 650 Ti. Yes, it consumes less power, but it also has a bigger die size. I can't see Nvidia going all in with GM204, that's what GM200 is for.
    As for the release date, didn't GM204 tape out only about 1-2 months ago?
Too accurate, I'd say.
A bit conservative only for the 20nm product.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  4. #29
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
The top card(s) of the 800 series will be 20nm. There is no room for growth on 28nm; the performance boost achievable on 28nm won't come anywhere near what 20nm allows.

Performance: please. Kepler is a dead horse. Maxwell makes major changes in the cache area, ARM cores will aid performance, and a very wide bus (512-bit) will help at high resolutions, along with immense horsepower. And the Maxwell core is faster than the Kepler core. Even the 28nm-based 750 Ti, a card thrown out in bulk to test the architecture, left its predecessor in the dust...

How many times did NV list GK110 cards in their driver updates? 3-4 times, or even fewer if you exclude the DX11 CPU-overhead driver that gave Fermi cards a nice boost and narrowed the performance gap further. GK104 is a veteran chip and its cards went a long way, but it is a mid-range chip, whereas GK110 got the full treatment from the start.
    Last edited by SinOfLiberty; 06-12-2014 at 02:06 AM.

  5. #30
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by BenchZowner View Post
    A bit conservative only for the 20nm product.
Yeah, that's the main thing I was referring to... 20nm has a lot of room to grow comparatively. I still think a 28nm medium Maxwell will be a joy, though.

Personally I'd be thrilled with even a 28nm Maxwell product that was 30% over the 780 Ti (bonus points if even more @ 4K), was a good clocker like baby Maxwell is, and maintained compatibility with the old PCB for coolers. 28nm is very mature, and hopefully we wouldn't see completely stratospheric pricing on such a card given how cheap the R9 290/290X have become.

On a personal level, I've been holding off buying a second 780 simply because of the impending Maxwell launch... it's just a question of how far off it still is.
    Last edited by GoldenTiger; 06-12-2014 at 03:02 AM.

  6. #31
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by GoldenTiger View Post
    Which is why I said, for 28nm...



Yes, the math was indeed pretty nice. Completely feasible @ 28nm to have a very fast 4 GPC card released... as I said at the end of the linked post, they have tons of room to cut down and still release a nice jump. But from your response, I guess you didn't actually read it.
I admit I only browsed over it quickly on my way to work. 2560 SP would equal about 3400 SP on Kepler, meaning about 20-30% faster than the GTX 780 Ti at the same clocks. I don't know if a 256-bit bus would be sufficient not only to reach this performance level but to surpass it by those 20-30%, even with the Maxwell architecture. I'm rather cautious with my estimates, to avoid being disappointed later.
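The Kepler-equivalence estimate above reduces to simple arithmetic. A rough sketch (the per-ALU efficiency factor is inferred from the quoted 2560 ≈ 3400 equivalence, not an official figure):

```python
# Rough Kepler-equivalent scaling for a hypothetical 4-GPC Maxwell part.
# The per-ALU efficiency factor is an assumption derived from the
# 2560 -> ~3400 equivalence quoted above, not an official number.
maxwell_alus = 2560
efficiency_vs_kepler = 3400 / 2560          # ~1.33x per ALU, assumed
kepler_equivalent = maxwell_alus * efficiency_vs_kepler

gk110_alus_780ti = 2880                     # fully enabled GK110 (GTX 780 Ti)
speedup = kepler_equivalent / gk110_alus_780ti - 1

print(f"Kepler-equivalent ALUs: {kepler_equivalent:.0f}")
print(f"Estimated uplift over GTX 780 Ti at equal clocks: {speedup:.0%}")
```

Note that the strict ALU ratio alone only gives about 18% over a 2880-ALU GK110, so the quoted 20-30% presumably folds in additional clock or efficiency gains.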

  7. #32
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by boxleitnerb View Post
I admit I only browsed over it quickly on my way to work. 2560 SP would equal about 3400 SP on Kepler, meaning about 20-30% faster than the GTX 780 Ti at the same clocks. I don't know if a 256-bit bus would be sufficient not only to reach this performance level but to surpass it by those 20-30%, even with the Maxwell architecture. I'm rather cautious with my estimates, to avoid being disappointed later.
No worries, thanks for the response. I hope it is, though!

  8. #33
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by GoldenTiger View Post
You seem to be stuck on the concept of bus width. The 2MB of L2 cache used in Maxwell helps bandwidth efficiency and greatly reduces fetch needs compared to the Kepler generation.
    You don't seem to understand how cache works.

    If you have 500MB of geometry you have to stream for every frame, whether you have 256KB or 2MB of cache makes no difference.

    Cache hides latency, but it can't do anything about insufficient bandwidth.
    Last edited by zalbard; 06-12-2014 at 10:59 AM.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.
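zalbard's streaming argument can be illustrated with back-of-envelope arithmetic: the bandwidth needed to stream per-frame data depends on data size and frame rate, and cache capacity never enters the equation (the 500 MB figure follows the post; 60 fps is an illustrative assumption):

```python
# Minimum bandwidth needed just to stream per-frame data, illustrating
# that cache size never enters the equation. Numbers are illustrative.
geometry_bytes = 500 * 1024**2      # 500 MB streamed every frame
fps = 60

required_bw = geometry_bytes * fps / 1e9   # GB/s, decimal units
print(f"Required streaming bandwidth: {required_bw:.1f} GB/s")

# Whether the L2 is 256 KB or 2 MB, each byte of that 500 MB is touched
# once per frame, so a larger cache cannot reduce this figure; it can
# only hide the *latency* of each fetch, not supply missing bandwidth.
for cache_bytes in (256 * 1024, 2 * 1024**2):
    print(f"L2 = {cache_bytes // 1024} KB -> still {required_bw:.1f} GB/s")
```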

  9. #34
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
WCCF Tech is copying my thread's info, writing "We have just received some really pleasant news on Nvidia's Maxwell Architecture".

Good job. WCCF Tech is a site of half-arsed, made-up data, presented as if from an unknown source.

WCCF should be banned from news sites, and so should links to their website.

Edit: Other news sites have started to post this info, referring to Expreview as the main source. Note: Expreview copied my news a few hours after this thread went live.

PS: We are officially Nvidia now. I am Jen-Hsun Huang, and I say the 800 series will be a blast.
    Last edited by SinOfLiberty; 06-13-2014 at 12:12 AM.

  10. #35
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
    Update: Read the OP for additional info.

  11. #36
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by SinOfLiberty View Post
    Update: Read the OP for additional info.
If they kept the 16:1 ALU/TMU ratio from GM107, that would be 6144 ALUs, compared to 2880 on GK110. That sounds like way too much for a midrange part, which leaves the big dog. And I cannot imagine GM100/200 being ready so soon, given how late Nvidia has been in the last 5 years with their largest GPUs. Additionally, for 6144 ALUs you would surely need 20nm, and I don't think it is ready yet for GPU production (capacity-wise, and possibly yields on a 500+ mm² GPU).

  12. #37
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
    Quote Originally Posted by boxleitnerb View Post
If they kept the 16:1 ALU/TMU ratio from GM107, that would be 6144 ALUs, compared to 2880 on GK110. That sounds like way too much for a midrange part, which leaves the big dog. And I cannot imagine GM100/200 being ready so soon, given how late Nvidia has been in the last 5 years with their largest GPUs. Additionally, for 6144 ALUs you would surely need 20nm, and I don't think it is ready yet for GPU production (capacity-wise, and possibly yields on a 500+ mm² GPU).
Well, there is no official source for a 20nm delay.

2nd: who told you GM204 would be the flagship, and why? It is just a rumor, and a bad one.

  13. #38
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by SinOfLiberty View Post
Good job. WCCF Tech is a site of half-arsed, made-up data, presented as if from an unknown source.
    This man gets it!
    Quote Originally Posted by SinOfLiberty View Post
Well, there is no official source for a 20nm delay.
    No announcement, either.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  14. #39
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by SinOfLiberty View Post
Well, there is no official source for a 20nm delay.

2nd: who told you GM204 would be the flagship, and why? It is just a rumor, and a bad one.
    No delay, but it can be assumed that Apple bought up much if not all of the initial 20 nm capacity. And then there is the question of process maturity/yields. When in the past 5+ years was such a large GPU released first or among the first products in that process? On 65 nm there were G92 GPUs way before GT200. There were small 40 nm GPUs many months before GF100 was launched, and on 28 nm it took over a year until GK110 was ready and not even the fully enabled part. Why would that be different on 20 nm?

    Logic. Why sell a 500+ mm2 GPU with possibly bad yields for $600 if you can sell a 300-400 mm2 GPU with better yields for (almost) the same price? That makes no sense. You make more money by providing your customers with just enough incentive to upgrade/to sell the targeted amount of GPUs (i.e. as many as you can order at TSMC). In addition, you reserve some headroom for a "refresh" like GK110 and are able to react better to unforeseen competition.

  15. #40
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
    Quote Originally Posted by zalbard View Post
    This man gets it!

    No announcement, either.
The same should be said about Expreview, too.

  16. #41
    Xtreme Addict
    Join Date
    May 2003
    Location
    Peoples Republic of Kalifornia
    Posts
    1,541
    Quote Originally Posted by SinOfLiberty View Post
In the post you quoted, I wrote that NV is launching first. So my statement holds true.

PS: The 680 was not much faster. Compared to an overclocked 580, the 680 was hardly a next-gen card, unless you take TDP into account.
Is that why the GTX 680 typically beat the 580 by 40-60%? Not bad for a "midrange" card.

    Cache hides latency, but it can't do anything about insufficient bandwidth.
Then why did the eDRAM found in the Xbox 360 do exactly what you claim it can't? The 360 was able to outperform the on-paper faster PS3 because its eDRAM freed memory bandwidth for texture filtering, z-buffering, anti-aliasing, alpha blending, etc.

Just look at how much performance the low-end 28nm Nvidia 750 pulls off with only a 128-bit memory bus. You cannot attribute that to lower latencies alone.

    "If the representatives of the people betray their constituents, there is then no resource left but in the exertion of that original right of self-defense which is paramount to all positive forms of government"
    -- Alexander Hamilton

  17. #42
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
60% was very rare, seen only in a handful of titles.

40% was the average gap, both overclocked and stock.

And that is with the latest drivers, where Fermi got a nice performance boost.

  18. #43
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Andrew LB View Post
The 360 was able to outperform the on-paper faster PS3 because its eDRAM freed memory bandwidth for texture filtering, z-buffering, anti-aliasing, alpha blending, etc.
Haha, no. It was because the PS3's hardware choices made proper hardware utilization a difficult task.

    Quote Originally Posted by Andrew LB View Post
Just look at how much performance the low-end 28nm Nvidia 750 pulls off with only a 128-bit memory bus.
It has 45% of the bandwidth and around 50% of the performance of a GTX 680. No magic here. And that's without 4K or surround, where it would be utterly choked.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.
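The 45% bandwidth figure above is consistent with the two cards' reference memory specs (128-bit @ 5.4 Gbps effective for the 750 Ti vs 256-bit @ ~6.0 Gbps for the 680); a quick sanity check:

```python
# Memory bandwidth from bus width and effective per-pin data rate:
# bandwidth (GB/s) = bus_bits / 8 * data_rate (Gbps per pin)
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

gtx_750_ti = bandwidth_gb_s(128, 5.4)   # 86.4 GB/s (reference spec)
gtx_680    = bandwidth_gb_s(256, 6.0)   # 192 GB/s (reference spec, rounded)

ratio = gtx_750_ti / gtx_680
print(f"750 Ti: {gtx_750_ti:.1f} GB/s, 680: {gtx_680:.1f} GB/s, ratio {ratio:.0%}")
```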

  19. #44
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
Possible Maxwell reveal event!

From the Nvidia Event Calendar:

    GPU Technology Conference Japan 2014
    July 16, 2014
    Tokyo Midtown Hall & Conference
    Tokyo, Japan

  20. #45
    Xtreme Addict
    Join Date
    May 2009
    Location
    Switzerland
    Posts
    1,972
    Quote Originally Posted by Andrew LB View Post
Is that why the GTX 680 typically beat the 580 by 40-60%? Not bad for a "midrange" card.



Then why did the eDRAM found in the Xbox 360 do exactly what you claim it can't? The 360 was able to outperform the on-paper faster PS3 because its eDRAM freed memory bandwidth for texture filtering, z-buffering, anti-aliasing, alpha blending, etc.

    Just look at how much performance the low end 28nm nvidia 750 can pull off while having only 128bit memory. You cannot attribute that to lower latencies alone.
Discussions like this are extremely technical and touch on more than just cache or bandwidth. Even if the larger cache effectively improves bandwidth efficiency when the working set is small, do you really think the same holds at high memory footprints? In general the impact shrinks as the data set grows, and don't forget that the cache isn't used only for that...
Today, cache is used for more than just buffering memory reads and writes; this large cache favors graphics tasks more than it feeds the memory bus. Maxwell as seen in the 750 Ti is different from the Kepler architecture, and simpler; you would never ask of a 750 Ti what you ask of a 780 Ti. That said, high-end Maxwell will be quite different...
    Last edited by Lanek; 06-13-2014 at 06:13 PM.
    CPU: - I7 4930K (EK Supremacy )
    GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
    Motherboard: Asus x79 Deluxe
    RAM: G-skill Ares C9 2133mhz 16GB
    Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0

  21. #46
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Quote Originally Posted by zalbard View Post
It has 45% of the bandwidth and around 50% of the performance of a GTX 680. No magic here. And that's without 4K or surround, where it would be utterly choked.
According to TPU, the 680 is only 80% faster, not double. Maxwell is indeed more bandwidth-efficient.

  22. #47
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
Why does everyone think GTX 800 = GM204?!

  23. #48
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307

  24. #49
    Xtreme Member
    Join Date
    Apr 2014
    Posts
    307
GM200 will be a popular tag very soon.

  25. #50
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Nice find, means they have working silicon of some type.
    Last edited by GoldenTiger; 06-14-2014 at 02:50 PM.
