
Thread: The Fermi Thread - Part 3

  1. #1226
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by japamd View Post
    Where did you see the clocks reference?

    As the GTX480's power draw was about 283W, it could have been a 512 CUDA core card at 700MHz.
    So the final GTX480 will have 480 shaders (one main unit disabled) and clocks upped to 675MHz, so it'll have pretty much the performance of the 512-shader, 600MHz ES I tested, which was 3.6% faster on average than the 5870.
    So judging by the price, power & heat, overall it's not worth the trouble.
    One of the posts a user comments on in the thread was already deleted... but he clearly stated the clocks I mentioned before as what his ES came with. The post with that quote is the top post on page 4 of the linked thread.

  2. #1227
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Quote Originally Posted by ***Deimos*** View Post
    Virtually all 5870s (850MHz) can max out Overdrive at 900MHz. Most cards do around 920-935 @ 1.15V.
    Memory on the 5870 (1200MHz) overclocks to 1300MHz. I doubt AMD will bother trying to source some rare uber-high-end GDDR5, since most of the performance improvement comes from core overclocking.

    "5870 OC" is probably gonna be something like 950/1250 2GB
    I agree with your estimates for a 5870 OC / 5890, but where the hell did you pull this 920-935 @ 1.15V number from? I've used 4 different 5870s (one of which is my own) and not a single one manages such a feat (I doubt they'd do 850 at 1.15V, for that matter). AMD has even upped the stock voltage from 1.168V to 1.172V on the reference BIOS to improve stability.

    If anything they needed 1.2V best case (mine needs 1.237V) to get 925MHz. That isn't to say it couldn't be done with improvements, but it just doesn't seem indicative of reference launch cards (the cards I've used were all revision 1 with the initial BIOS, all tested in 2009). Sure, a 4-card sample isn't huge, but they were all from different vendors and all consistent in how much voltage they needed to play nice (or not). Anyway, it's a petty thing to kick dust up over; I just found the statement to be a tad up in the air is all.

    I'd love to see a 5890 with at least 950/1300 clocks coupled with 2GB of memory. That should offer a 5-15% performance bump over a 5870. Going by the leaks so far, this would be more than enough to challenge the 480s. Given those supposed 512sp leaks were at 2560x1600, in some cases with 8x AA, I'm shocked the 5870 doesn't fare worse with its VRAM handicap. That should bode well for a future ATI refresh.
    Last edited by Chickenfeed; 03-23-2010 at 08:34 PM.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  3. #1228
    Xtreme Member
    Join Date
    Dec 2005
    Posts
    427
    Is GeForce Fermi asking people to invest their money to support Nvidia's cloud server future?

  4. #1229
    Xtreme Member
    Join Date
    Nov 2006
    Location
    Brazil
    Posts
    257
    Quote Originally Posted by GoldenTiger View Post
    One of the posts a user comments on in the thread was already deleted... but he clearly stated the clocks I mentioned before as what his ES came with. The post with that quote is the top post on page 4 of the linked thread.
    That explains it. Cheers

  5. #1230
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by spursindonesia View Post
    All I know is that reviewers like SKYMTL and Benchzowner have hinted that GF100 cards will have AWESOME gaming performance compared to the competition, even in today's games, and that GF100 will stomp RV870 into the trash bin where it belongs, so I think I'll put more weight on their cues rather than on Charlie the satan himself for this launch occasion.
    I believe BZ and I said it has the POTENTIAL to achieve this. Whether or not it will accomplish its goals is yet to be seen.

    What BZ and I have said again and again is that people should WAIT before jumping too far onto a bandwagon claiming anything extreme.

    As for Charlie's Dirt 2 story, not many reviewers are benchmarking with the demo version of the game and with the retail version you can physically force DX11 through the config file.
    Last edited by SKYMTL; 03-23-2010 at 08:45 PM.

  6. #1231
    Xtreme Member
    Join Date
    Dec 2005
    Posts
    427
    No potential at all. I only see an inefficient, premature design intended to be a scalable cloud server product, disguised as the desktop GeForce Fermi.

  7. #1232
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Quote Originally Posted by SKYMTL View Post
    I believe BZ and I said it has the POTENTIAL to achieve this. Whether or not it will accomplish its goals is yet to be seen.

    What BZ and I have said again and again is that people should WAIT before jumping too far onto a bandwagon claiming anything extreme.

    As for Charlie's Dirt 2 story, not many reviewers are benchmarking with the demo version of the game and with the retail version you can physically force DX11 through the config file.
    ...and here goes SKYMTL, ruining your dreams
    Quote Originally Posted by LesGrossman View Post
    So for the last 3 months Nvidia talked about Unigine, and then Unigine, and more Unigine, and finally Unigine. And then they take the best 5 seconds from the whole benchmark run, make a graph, and proudly show it everywhere.

  8. #1233
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    449
    Quote Originally Posted by GoldenTiger View Post
    By pure math...

    32 shaders would be about 6.25% of the shading lost...

    but you'd gain about 17% clock rate on the cores alone... and a good chunk more memory bandwidth.

    You'd also be speeding up the ROPs, whereas more cores alone don't do that...

    Additionally, the TMUs would be sped up, as those are tied to the core clock and not the shader clock, as far as I'm aware...

    He also said the newer drivers added a couple % of performance...

    Also worth noting, this guy claims his Engineering Sample card had 512 shaders @ 600MHz core, 1200MHz shader, and 2800MHz (GDDR5) RAM; the final specs should be 480 shaders @ 700MHz core, 1400MHz shader, and 3600-3700MHz (GDDR5) RAM.

    So my guess is that with those #'s considered, it would be maybe 16-17% better performance for 700MHz @ 480 cores vs. 600MHz @ 512 cores, taking the memory frequency into account as well. Couple that with the couple of extra percent from those tests adding 3% (he said a few) and we might see a graph looking a bit more favorable. Still, if this guy is accurate, it wouldn't be enough to make it what I would call a clean win for nV here.
    If the GDDR5 really is at 2800MHz, that should seriously cripple Fermi's performance.
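    A quick back-of-the-envelope check of the scaling math quoted above. This is only a sketch: the 60/40 compute/bandwidth weighting is an assumption chosen for illustration, not a figure from the thread.

    [CODE]
    # Rough check of the quoted 512sp-ES vs. 480sp-retail scaling estimate.
    # The 60/40 compute/bandwidth weighting is an assumption, not a thread figure.

    es_cores, es_clk, es_mem = 512, 600, 2800   # engineering sample specs above
    rt_cores, rt_clk, rt_mem = 480, 700, 3600   # rumored retail specs above

    compute_gain = (rt_cores * rt_clk) / (es_cores * es_clk)  # ~1.094 (+9.4%)
    bandwidth_gain = rt_mem / es_mem                          # ~1.286 (+28.6%)

    # Weighted geometric mean as a crude mixed-bottleneck model.
    overall = compute_gain ** 0.6 * bandwidth_gain ** 0.4
    print(f"blended estimate: +{(overall - 1) * 100:.1f}%")   # ~ +16.7%
    [/CODE]

    Under those assumed weights the blend lands right on the 16-17% guessed above, which is the point: the estimate is plausible, but it hinges entirely on how bandwidth-bound the workload is.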
    --lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
    -- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
    -- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
    - GTX 480 ( 875/1750/928)
    - HAF 932 - Antec TPQ 1200 -- Crucial C300 128Gb boot --
    Primary Monitor - Samsung T260

  9. #1234
    Xtreme Member
    Join Date
    Dec 2005
    Posts
    427
    The GTX480 has 3 billion transistors doing what? 2.812 billion are enabled and supposedly working, yet the originally designed capabilities are disabled on the GeForce Fermi:

    * NVIDIA OptiX engine for real-time ray tracing
    * NVIDIA SceniX engine for managing 3D data and scenes
    * NVIDIA CompleX engine for scaling performance across multiple GPUs
    * NVIDIA PhysX 64-bit engine for real-time, hyper-realistic physical and environmental effects

    What's left? The power consumption.
    Last edited by Marios; 03-23-2010 at 09:37 PM.

  10. #1235
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,463
    Bring... bring the amber lamps.

  11. #1236
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Nice pics! Finally a clear one of the IHS and PCB.

  12. #1237
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Kuwait
    Posts
    616
    Damn, from the last leaked benchmarks I think Fermi is disappointing.

  13. #1238
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by Marios View Post
    The GTX480 has 3 billion transistors doing what? 2.812 billion are enabled and supposedly working, yet the originally designed capabilities are disabled on the GeForce Fermi:

    * NVIDIA OptiX engine for real-time ray tracing
    * NVIDIA SceniX engine for managing 3D data and scenes
    * NVIDIA CompleX engine for scaling performance across multiple GPUs
    * NVIDIA PhysX 64-bit engine for real-time, hyper-realistic physical and environmental effects

    What's left? The power consumption.
    Thank you for the enlightening post. BTW, power consumption for the 480sp part should be around 250 watts.



    does anyone know what this connector is for?

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  14. #1239
    Xtreme Mentor
    Join Date
    Jun 2008
    Location
    France - Bx
    Posts
    2,601
    Quote Originally Posted by zalbard View Post
    No chip is designed to operate at 100C+.
    You're physically killing the silicon in this case. There is no way around that.
    Believe me or not, but that's the truth, and W1zzard is not a noob either ;p

    The HD 4870 ran really hot too, especially the VRMs. Just have a look:

    http://www.hardware.fr/articles/751-...n-hd-4870.html

    Back to the subject, it seems that Fermi 1st generation isn't a great deal.

    Hope ATI prices go down though; I don't want to pay 250€ for an HD 5850, more like 150-200€. Competition FTW

    Quote Originally Posted by 570091D View Post
    does anyone know what this connector is for?
    The nVidia glowing green logo, if I remember correctly
    Last edited by Olivon; 03-23-2010 at 10:50 PM.

  15. #1240
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Kuwait
    Posts
    616
    Quote Originally Posted by 570091D View Post
    Thank you for the enlightening post. BTW, power consumption for the 480sp part should be around 250 watts.



    does anyone know what this connector is for?
    Nvidia glowing Logo

  16. #1241
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by Olivon View Post
    The nVidia glowing green logo, if I remember correctly
    Quote Originally Posted by Mk View Post
    Nvidia glowing Logo
    ahhhh, thanks!

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  17. #1242
    Registered User
    Join Date
    Mar 2010
    Location
    Canada
    Posts
    19
    Nice thread. These are my thoughts and speculations:

    ATI:

    The 5000 series' 40nm process is now refined enough to produce a 1GHz chip that, coupled with 2GB of RAM, will stay within the PCIe electrical spec and deliver a catastrophic 1-2 punch to Nvidia. It would be a formidable marketing coup for ATI to be first to market with a 1GHz chip *AND* take down the 480.

    ATI will do it, simply for the dominant market position and the bragging rights of having the fastest single GPU. No matter the manufacturing cost of that hypothetical card, they will be able to ask $500+ for it anyway and push Nvidia's prices down when Nvidia already has no margin left on these GF100 cards. If in the next couple of weeks it's technically feasible for ATI to jump at Nvidia's throat for the kill, why wouldn't they do it?


    Nvidia:

    Being out of the chipset market, the bump-gate scandal, the wood-screw puppy, multiple rebadged cards, the miserable execution of the 6-month-late Fermi, their general attitude problem, and the fact that they have pretty much alienated everyone in the industry all confirm one thing: Nvidia is way too full of itself and needs an ego / reality check, fast.

    The gaming side of the Fermi equation is an afterthought at best. It's a GPGPU with a bolted-on gaming chunk. I really hope for their sake that the double-precision floating-point power, ECC RAM and CUDA SDK with C++ capability will be enough to save this monster of a chip. Putting 2 GF100 chips on a single card is out of the question; the clocks are already at their maximum. Short of introducing a 512sp version very quickly or relying solely on driver optimization, there is no way up for them.

    What you'll see on the 26th is pretty much what you'll get for the next year or so. Nvidia won't be a force to be reckoned with in the gaming market until Fermi is die-shrunk to 32nm. Problem is, they will have to face the HD 6000 way before that. Nvidia won't be able to absorb blow after blow like that for many more quarters. There's a limit to how long you can keep surfing on the wave of your dominant market position without sinking.


    Conclusion:

    It's all sad because, based on the benchmarks already available, it's clear where we are going. I would have liked Nvidia to be not only competitive but dominant, in order to keep prices low. I think they need to restructure and rebuild their bridges and corporate image. The first thing Nvidia needs to do is fire Jen-Hsun Huang before it's too late.


    Ramon

  18. #1243
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Cairo
    Posts
    2,366
    Quote Originally Posted by Olivon View Post
    Believe me or not, but that's the truth, and W1zzard is not a noob either ;p

    The HD 4870 ran really hot too, especially the VRMs. Just have a look:

    http://www.hardware.fr/articles/751-...n-hd-4870.html
    Yeap, my HD4850 went up to 110C shader and 116C MemIO while running the OCCT stress test for a few minutes, and it's still running today with no problem at all


    Quote Originally Posted by jaredpace View Post



    Is that part of the metal heatsink showing on the surface? If so, adding a 120mm fan will get some nice temps, with no need to run the stock fan at high speed
    Last edited by kemo; 03-23-2010 at 11:08 PM.
    Intel Core I7 920 @ 3.8GHZ 1.28V (Core Contact Freezer)
    Asus X58 P6T
    6GB OCZ Gold DDR3-1600MHZ 8-8-8-24
    XFX HD5870
    WD 1TB Black HD
    Corsair 850TX
    Cooler Master HAF 922

  19. #1244
    Banned
    Join Date
    Jan 2010
    Posts
    73
    Quote Originally Posted by Ramon Zarat View Post
    .
    Charlie, is that you? Nah, that's over the top even for him. I think we should all at least wait until we see legitimate tests before we start tossing dirt on Fermi. Edit - the small connector is most likely for the fan. Or the glowing logo.
    Last edited by ToxAvenger; 03-23-2010 at 11:18 PM.

  20. #1245
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by ethomaz View Post
    And this... a GTX 480 sample with 512 SPs and possibly 600MHz (1200MHz shader).

    This looks about right, I guess... Not impressive, but decent enough.

    Quote Originally Posted by Marios View Post
    No potential at all. I only see an inefficient, premature design intended to be a scalable cloud server product, disguised as the desktop GeForce Fermi.
    Inefficient, yes. But it has potential. There could be quite a few interesting desktop applications, heavy tessellation environments being one of them.
    Quote Originally Posted by Marios View Post
    The GTX480 has 3 billion transistors doing what? 2.812 billion are enabled and supposedly working, yet the originally designed capabilities are disabled on the GeForce Fermi:

    * NVIDIA OptiX engine for real-time ray tracing
    * NVIDIA SceniX engine for managing 3D data and scenes
    * NVIDIA CompleX engine for scaling performance across multiple GPUs
    * NVIDIA PhysX 64-bit engine for real-time, hyper-realistic physical and environmental effects

    What's left? The power consumption.
    There is no real-time ray tracing.
    The other stuff you mentioned is pure marketing...
    Quote Originally Posted by Olivon View Post
    Believe me or not, but that's the truth, and W1zzard is not a noob either ;p

    The HD 4870 ran really hot too, especially the VRMs. Just have a look:

    http://www.hardware.fr/articles/751-...n-hd-4870.html
    I never said he was a noob.
    And do not confuse VRMs with GPU chips.
    VRMs have a different structure and can operate at higher temperatures. Most GPU VRMs are rated for 120C, but the GPU chips are definitely not.
    Last edited by zalbard; 03-23-2010 at 11:26 PM.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  21. #1246
    Banned
    Join Date
    Sep 2009
    Location
    Face Clinic, Harley Street
    Posts
    282
    Days To Go

  22. #1247
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by highoctane View Post
    Quote Originally Posted by LordEC911 View Post
    Lower binned GDDR5? It is clocked slower but I don't think it is a lower bin.
    I can "think" its binned lower and is cheaper, same logic being used...

    Until somebody posts what VRAM is used, what basis of comparison is there, without at least a spec sheet?

    We can both assume the other is wrong but that doesn't make either of us right.
    I try not to be wrong... I don't assume things often.
    Also, your logic ≠ my logic. So no, the same logic isn't being used.
    Is this good enough for you?





    Samsung GDDR product sheet
    K4G10325FE HC04
    K - Samsung
    4 - DRAM
    G - GDDR5 SGRAM
    10 - 1G 8k/32ms
    32 - x32
    5 - 8Banks
    F - 7th Gen
    E - 6th Gen

    H - GDDR 170FBGA
    C - Commercial Normal
    04 - .4ns (5Gbps)

    So these are the same memory ICs as the 5870; if the MCs aren't limiting frequency, ~250GB/s is possible with a 5.2Gbps effective memory clock.
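    The bandwidth arithmetic behind that figure, as a small sketch (the 384-bit bus is the GTX 480's widely reported width; the data rates come from the part decode above):

    [CODE]
    # Peak GDDR5 bandwidth: per-pin data rate (Gbps) times bus width, in bytes.
    def gddr5_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        return data_rate_gbps * bus_width_bits / 8

    print(gddr5_bandwidth_gbs(5.0, 384))  # 240.0 -> the rated 0.4ns (5Gbps) ICs
    print(gddr5_bandwidth_gbs(5.2, 384))  # 249.6 -> the ~250GB/s mentioned above
    print(gddr5_bandwidth_gbs(4.8, 256))  # 153.6 -> HD 5870, for comparison
    [/CODE]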

    Quote Originally Posted by Sushi Warrior View Post
    And lower-clocked GDDR5 is cheaper, obviously... and I don't know why they would use fast GDDR5 and downclock it (if that is what you are implying).
    Maybe they purchased the high-binned GDDR5 before they knew they messed up the MC (Charlie's theory), or they couldn't run the full clocks due to a power/TDP wall and decided to leave the faster memory on there to let users/AIBs overclock.
    Last edited by LordEC911; 03-24-2010 at 12:55 AM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  23. #1248
    Registered User
    Join Date
    Mar 2010
    Location
    Canada
    Posts
    19
    Quote Originally Posted by ToxAvenger View Post
    Charlie is that you? Na, that's over the top even for him. I think we should all at least wait until we see legitimate tests before we start tossing dirt on Fermi. Edit - the small connector is for the fan most likely. Or the glow logo
    LOL.... No, it's not "Charlie"!

    I'll admit, I *HATE* corporate arrogance. From that perspective, Nvidia is very easy to hate, and it shows in my "editorial" position. Another example: have you followed Apple's business practices lately? I just can't stand it. In Apple's case, on top of the ego-maniacal, self-appointed emperor Steve Jobs's tyranny, you get overpriced and only somewhat palatable products. I'm not saying ATI/AMD is all white. It's just hard to deny that they're simply not in the same league when it comes to business practices and corporate attitude.

    But this thread is about the new and upcoming Fermi, isn't it? Then let's put my logic to the test; give me arguments that disprove my theory:

    FACT: Out of the chipset market
    FACT: Bump-gate scandal
    FACT: Wood-screw puppy
    FACT: Multiple rebadged cards
    FACT: Miserable execution of the 6-month-late Fermi
    FACT: General attitude problem
    FACT: Alienated everyone in the industry
    FACT: Putting 2 GF100 chips on a single card is technically impossible at 40nm
    FACT: At 700MHz, the clocks are already at their maximum because of heat
    FACT: Driver optimization and a 512sp version are the only ways to gain speed on Fermi. What you see on the 26th, except for a ~14% bump (7% sp, 7% drivers; see the quick check below), is what you'll get until 32nm.
    FACT: Unless a VERY dramatic turn of events happens, ATI will launch the HD 6000 series in Q2 2010, WAY before Nvidia can shrink its Fermi to 32nm.

    Do you deny any of those previous statements? If so, please explain.
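    A quick check of that claimed 14% ceiling, as a sketch; treating the two gains as multiplicative is my assumption, not something stated in the list above:

    [CODE]
    # "7% sp": enabling the disabled unit takes 480 shaders to 512.
    sp_gain = 512 / 480        # ~1.067, i.e. roughly the claimed 7%
    driver_gain = 1.07         # the claimed 7% driver optimization headroom
    print(f"combined: +{(sp_gain * driver_gain - 1) * 100:.1f}%")  # ~ +14.1%
    [/CODE]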

    My opinion: Fermi's gaming capability is an afterthought.
    My opinion: Fermi is first and foremost a GPGPU with a bolted-on gaming section.
    My opinion: DPFP, ECC RAM, C++ and a superior SDK are the real differentiators that will make or kill Fermi in the GPGPU market.
    My opinion: If Fermi fails to sell significantly as a GPGPU, and to a lesser extent as a gaming card, that can only be bad news for Nvidia and for all of us.
    My opinion: Nvidia can't afford to miss the boat like that too many times in a row without seriously impacting its long-term viability.

    I would like to hear your opinion on those specific points too. I'm fully aware that no official benchmarks are available, and announcing Fermi's death right now would be premature. But you have to admit, based on what we already know, it's gonna be FAR from the 5870 killer it was supposed to be. In fact, it's clearly going to be an attainable target for a 5870 refresh.

    Let's put it this way: a cheaper 5870 that consumes 62 watts less under load (and much less at idle), runs cooler in your case with less fan noise, and runs your games at 85-90% of the speed of the 480; or a 5890 that runs the same or faster, with more RAM than the 480, for the same price as a 480. Which one do you choose: the 5870, the 5890 or the 480? I would take the 5870 and overclock the hell out of it if and when I needed to. That is, unless Fermi ends up 40% faster than the 5870 instead of just 10-15% *AND* they fire Jen-Hsun Huang. Then I'll buy TWO 480s and put them in SLI. LOL.


    Ramon
    Last edited by Ramon Zarat; 03-24-2010 at 12:55 AM.

  24. #1249
    NooB MOD
    Join Date
    Jan 2006
    Location
    South Africa
    Posts
    5,799
    Quote Originally Posted by ElSel10 View Post
    It's OK because it's ATI, not Nvidia. It's also OK that eventually the VRMs on that card will pop when running Furmark. Again, because it isn't Nvidia.
    QFT. 99% of NVidia haters aren't even sure why they hate the company.
    Xtreme SUPERCOMPUTER
    Nov 1 - Nov 8 Join Now!


    Quote Originally Posted by Jowy Atreides View Post
    Intel is about to get athlon'd
    Athlon64 3700+ KACAE 0605APAW @ 3455MHz 314x11 1.92v/Vapochill || Core 2 Duo E8500 Q807 @ 6060MHz 638x9.5 1.95v LN2 @ -120'c || Athlon64 FX-55 CABCE 0516WPMW @ 3916MHz 261x15 1.802v/LN2 @ -40c || DFI LP UT CFX3200-DR || DFI LP UT NF4 SLI-DR || DFI LP UT NF4 Ultra D || Sapphire X1950XT || 2x256MB Kingston HyperX BH-5 @ 290MHz 2-2-2-5 3.94v || 2x256MB G.Skill TCCD @ 350MHz 3-4-4-8 3.1v || 2x256MB Kingston HyperX BH-5 @ 294MHz 2-2-2-5 3.94v

  25. #1250
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Disassembling? Is that what reviewers do all day with Fermi? I wonder what they want to find inside the card, gold or something?
    Quote Originally Posted by LesGrossman View Post
    So for the last 3 months Nvidia talked about Unigine, and then Unigine, and more Unigine, and finally Unigine. And then they take the best 5 seconds from the whole benchmark run, make a graph, and proudly show it everywhere.

