
Thread: The GT300/Fermi Thread - Part 2!

  1. #301
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla, India
    Posts
    2,631
    Quote Originally Posted by NapalmV5 View Post
    new final/performance numbers are in:

    gtx470 battles 57xx radeon series
    gtx480 battles 58xx radeon series
    gtx490 battles 59xx radeon series

    and this is the best nvidia can do for 2010

    you guys were right all along nvidia sux!!
    hahah first you broke ATI fanboys' ballz, now you are after Nvidia fanboys...

    Geeez, you are on a ballz-breaking spree
    Coming Soon

  2. #302
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by ajaidev View Post
    hahah first you broke ATI fanboys' ballz, now you are after Nvidia fanboys...

    Geeez, you are on a ballz-breaking spree
    my loyalty lies with neither


  3. #303
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by SKYMTL View Post
    Several of my contacts there (marketing, development, etc.) have risen through the ranks over the years I have been talking to them, so I don't know where you got your info from.
    forgot the name of the website... its the no. 1 site where employees rate and review the companies they work or worked for, plus they give their ceos approval ratings.

    Quote Originally Posted by SKYMTL View Post
    As you increase the performance of a GPU, the CPU naturally needs to be faster to feed it information at a quick pace. However, as these GPUs quickly outpace game development, the CPU will continue to be a bottleneck all the way into high levels of AA. Luckily, it seems like DX11 has moved some emphasis off of the CPU, which bodes well for the future.
    yeah i know... so what do you think? will fermi need a fast cpu or not? they showed it off with a 960...

    Quote Originally Posted by SKYMTL View Post
    This really depends on how good of a heatsink NVIDIA sticks on it. I have a feeling though that the combination of 8-pin / 6-pin power connectors and the PCI-E 2.0 slot will be able to provide more power than even an overclocked GF100 can ask for. Naturally, when you get into areas like voltage tweaks the consumption of any component will skyrocket.
    well if the tdp numbers i heard are true, then its 50W more than a 285... and thats really a lot... i cant imagine what kind of 2-slot heatsink could keep that cool... i just wondered if that was only early silicon and if the newer stuff is running cooler...
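    btw the connector math works out like this (a minimal sketch, assuming the PCI-SIG spec limits of 75W slot / 75W 6-pin / 150W 8-pin, nvidia's published 183W tdp for the 285, and taking the +50W rumor at face value):
    Code:
    # pci-sig power limits (per spec) vs the rumored tdp above
    PCIE_SLOT_W = 75      # a pcie x16 slot may supply up to 75 W
    SIX_PIN_W = 75        # 6-pin peg connector: up to 75 W
    EIGHT_PIN_W = 150     # 8-pin peg connector: up to 150 W

    board_budget = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W   # 300 W total

    GTX285_TDP = 183                  # nvidia's published figure
    rumored_tdp = GTX285_TDP + 50     # the "50W more than a 285" rumor

    print(board_budget, rumored_tdp)  # 300 233 -> headroom on paper, before volt mods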

    Quote Originally Posted by ElSel10 View Post
    No it isn't. With AFR each frame is handled completely by either GPU1 or GPUn.

    And since the memory is mirrored for both GPUs, you effectively have the same amount, speed, bus width and bandwidth of the single GPU counterpart.
    thats not true, while a frame gets rendered data is constantly written to and read from the mem... and that is NOT mirrored between the two gpus... otherwise both frames would end up identical...

    both gpus get the same raw data, i guess, but they then use their memory and memory bandwidth independently... if they really mirrored each other's memory then you would have to split the memory into 2 partitions and the effective memory per gpu would actually drop by half
    but why would you do that? why does gpu1 need to know what the other gpu is doing with the data and what its frame will look like?
    Quote Originally Posted by zalbard View Post
    It's not just the consumers; the shop managers often have no clue about various brands and models of hardware either. Whenever I see one of them 'helping' one of the customers... I really wish I couldn't hear what they were suggesting.
    So no wonder renaming works for Nvidia so well.
    Most people only look at the numbers... model number, price, VRAM amount, that's it. And a lot assume that the more they pay, the better the product they get... So there is a lot of room for ridiculous pricing, and people are more than happy to buy such stuff.
    its even worse, ive seen several sales people in shops telling people marketing nonsense i could SEE they knew was not true... but they dont care, they want to sell their stuff... i can understand it, but i wouldnt do it...

    Quote Originally Posted by Tim View Post
    There are few to no constructive posts in this thread, hence my post, in my opinion, is valid. It's just the same old same old rehashed over and over again. Now it's even more of a load of rubbish with even less info.

    Forums would still exist, and they would be a lot more interesting, if people posted more 'useful' things.

    Every day I check this thread and it's bla bla this, bla bla that. It's probably because Fermi is so late that I just can't bear this stuff anymore; usually it isn't as bad and is actually quite entertaining sometimes, because there is actually stuff to discuss. Now there is nothing much to discuss anymore, and people still keep yapping.
    idk, i consider this an offline chat... as soon as something interesting is discussed i go back a page or two to catch up on what happened... i prefer too much info over not enough info that somebody thought was not important... and besides, even if there is no or little info, its fun to talk to others about tech, the companies that make them, their products...

  4. #304
    Xtreme Member
    Join Date
    Dec 2009
    Posts
    435
    Quote Originally Posted by saaya View Post
    thats not true, while a frame gets rendered data is constantly written to and read from the mem... and that is NOT mirrored between the two gpus... otherwise both frames would end up identical...
    Every frame that is rendered using AFR can only use the amount of memory on one card.
    The graphics memory is NOT doubled in SLI mode. If you have two 128MB graphics cards, you do NOT have an effective 256MB. Most of the games operate in AFR (alternate frame rendering). The first card renders one full frame and then the next card renders the next frame and so on. If you picture SLI working in this way it is easy to see that each frame only has 128MB of memory to work with.
    Quoted from Mad Mod Mike on SLIzone.

  5. #305
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by ElSel10 View Post
    Every frame that is rendered using AFR can only use the amount of memory on one card. Quoted from Mad Mod Mike on SLIzone.
    what does that have to do with memory bandwidth?
    i never said that you end up with double the memory, but you do end up with double the bandwidth, from my understanding...

    at the same time, a dual gpu card is working on two frames, and each gpu can read and write independently to its own memory while working on those frames. as a result, in the same period of time, you end up with (up to) double the frames being rendered, and (up to) double the reads and writes to memory. just think about it... you cant produce additional frames without additional reads/writes to memory...
    and think about the real world performance of dualgpu cards... if you just doubled the shaders i dont think we would get as much of a performance boost as we see going from single to dualgpu cards

    what youre saying is that both gpus only use the memory of one of the two cards... which makes no sense... rendering a frame takes several steps, you read from memory, manipulate the values and write back to memory... as far as i know its impossible to render two different frames if you force the memory of both gpus to be 100% identical at all times... if you did that then youd end up with 2 identical frames...
    Last edited by saaya; 02-08-2010 at 09:24 AM.

  6. #306
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by ElSel10 View Post
    Every frame that is rendered using AFR can only use the amount of memory on one card.
    saaya is correct. Capacity is not doubled (both cards hold the same data). However, bandwidth is doubled (both cards work on different frames in parallel).
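    To make that concrete, here is a minimal toy model of AFR (all numbers invented for illustration, not from any vendor documentation):
    Code:
    # toy model of alternate frame rendering (AFR) on a dual-GPU setup
    GPU_MEM_GB = 1.0     # local memory per GPU (invented figure)
    GPU_BW_GBS = 100.0   # local memory bandwidth per GPU (invented figure)

    # capacity: assets are mirrored on both GPUs, so the pair only holds
    # one copy's worth of usable data -> capacity is NOT doubled
    effective_capacity_gb = GPU_MEM_GB

    # bandwidth: GPU0 reads/writes its own memory for frame N while GPU1
    # does the same for frame N+1, in parallel -> aggregate bandwidth
    # roughly doubles (minus sync overhead, hence "up to")
    effective_bandwidth_gbs = 2 * GPU_BW_GBS

    def assign_gpu(frame_id):
        # AFR in its simplest form: alternate whole frames between GPUs
        return frame_id % 2

    for frame in range(4):
        print(f"frame {frame} -> GPU{assign_gpu(frame)}")
    print(effective_capacity_gb, effective_bandwidth_gbs)  # 1.0 200.0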

  7. #307
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by saaya View Post
    idk, i consider this an offline chat... as soon as something interesting is discussed i go back a page or two to catch up on what happened... i prefer too much info over not enough info that somebody thought was not important... and besides, even if there is no or little info, its fun to talk to others about tech, the companies that make them, their products...
    I suppose one can see it like that as well.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  8. #308
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by annihilat0r View Post
    someone whom I trust told me that the clocks would be GTX 280-like, if a bit higher. So no 700 for you, prolly around 650 like we first thought
    Correct, that is what I heard quite a while ago. B1 is whole new territory though.

    Quote Originally Posted by zerazax View Post
    I'm not sure what to say about triangles. Over at B3D, they brought up the fact that 2 x 5770s beat 1 x 5870 in triangle-intensive games like HAWX, where GF100 was benched as being pretty fast. Supposedly 2 cards means twice the tri/clock?
    Correct. Each card has its own setup engine and can effectively double the tri/clock.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  9. #309
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by trinibwoy View Post
    saaya is correct. Capacity is not doubled (both cards hold the same data). However bandwidth is doubled (both cards work on different frames in parallel).
    so for dualgpu solutions the shader power is doubled, the triangle setup is doubled, texturing power is doubled, memory bandwidth is doubled... but you need double the memory compared to a single gpu to have the same effective memory capacity. and another downside is that you need more cpu power, and you lose some efficiency getting both gpus to work on the same scene...

    what i wonder about is that the cpu requirements for dual gpu setups are not double what a single gpu setup requires. how come?
    its definitely higher, but not double, at least not in most scenarios... does anybody know why?

    no news on fermi tdps?

  10. #310
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by saaya View Post
    what i wonder about is that the cpu requirements for dual gpu setups are not double what a single gpu setup requires. how come?
    its definitely higher, but not double, at least not in most scenarios... does anybody know why?

    Why should it be double? If the CPU was already a bottleneck with, say, a 5870, then it would require double the CPU power when you add a second 5870. In this age of console games a 5870 can be bottlenecked by the CPU quite often, I accept that, but when you're bound by the CPU you are at 100 FPS levels, so you wouldn't plug in a second card anyway.

    As long as the limiting factor is the graphics card, I don't think a second card would require double the CPU power.
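    A simple way to see it is a min() model: the frame rate is capped by whichever side is slower, so a second GPU only demands more CPU once the GPU pool stops being the limit. A toy sketch with invented numbers (not benchmark data):
    Code:
    # toy bottleneck model: fps ~= min(CPU prep rate, total GPU draw rate)
    def fps(cpu_fps, gpu_fps, num_gpus, scaling=0.9):
        # one GPU has no AFR overhead; extra GPUs scale imperfectly
        gpu_total = gpu_fps * (1 + (num_gpus - 1) * scaling)
        return min(cpu_fps, gpu_total)

    print(fps(120, 60, 1))  # 60.0  -> GPU-bound, CPU has headroom
    print(fps(120, 60, 2))  # 114.0 -> almost 2x, CPU ceiling getting close
    print(fps(120, 60, 3))  # 120.0 -> CPU-bound, the third GPU is wasted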
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  11. #311
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  12. #312
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by annihilat0r View Post
    Why should it be double? If the CPU was already a bottleneck with, say, a 5870, then it would require double the CPU power when you add a second 5870. In this age of console games a 5870 can be bottlenecked by the CPU quite often, I accept that, but when you're bound by the CPU you are at 100 FPS levels, so you wouldn't plug in a second card anyway.

    As long as the limiting factor is the graphics card, I don't think a second card would require double the CPU power.
    mhhh good point, cpu limitation shows up slowly: first youre only losing 1% of performance, then more and more the slower the cpu is relative to the gpu...

    Quote Originally Posted by highoctane View Post
    cool, ill be there

  13. #313
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by saaya View Post
    cool, ill be there
    Will you make cool videos and photos again?!
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  14. #314
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by zalbard View Post
    Will you make cool videos and photos again?!
    if i feel like it... maybe
    im going there on my own, no sponsor... too bad actually :P

  15. #315
    Xtreme Member
    Join Date
    May 2009
    Location
    between my house and my buddy next door
    Posts
    440
    i broke, i ended up with a 5870, couldnt wait any longer
    My build is taking way too long

    Quotes

    dasickninja
    Sweet merciful God... and it survived? Those parts are either Jesus or Juggernaut.


  16. #316
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176



    More G92!
    The 9600gso 55nm cards with a new name.

    8800gt = 9800gt = GT 240 = GT 340

    identical, equal performance, different names, 2 of which imply there is a performance upgrade
    Last edited by Jowy Atreides; 02-10-2010 at 02:49 PM.

  17. #317
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Nice. Rise of the Undead Part IV.

  18. #318
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    die g92 dieeeeeeeeeeeeeeeeee

  19. #319
    Banned
    Join Date
    Sep 2009
    Posts
    97
    gtx 580 g92 edition, thank god for the tags.

  20. #320
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    lol, i don't remember any g92 cards with 768mb of memory... and gpu-z says it's a gt330, not a gt340.

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  21. #321
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,970
    Quote Originally Posted by 570091D View Post
    lol, i don't remember any g92 cards with 768mb of memory... and gpu-z says it's a gt330, not a gt340.
    The original 9600GSOs had 96 stream processors and 768MB or 384MB RAM configurations on a 192-bit bus, G92-based. They later gimped them to 48sp with 512MB RAM and a 128-bit bus.
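    The bus width alone accounts for much of the gap; a quick sanity check (the 1800 MT/s effective GDDR3 rate here is an assumed, typical figure, not a quoted spec):
    Code:
    # peak memory bandwidth = bus width (bytes) * effective transfer rate
    def peak_bw_gbs(bus_bits, effective_mts):
        return bus_bits / 8 * effective_mts * 1e6 / 1e9

    print(peak_bw_gbs(192, 1800))  # original 96sp GSO: 43.2 GB/s
    print(peak_bw_gbs(128, 1800))  # later 48sp GSO:    28.8 GB/s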

  22. #322
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by GoldenTiger View Post
    The original 9600GSOs had 96 stream processors and 768MB or 384MB RAM configurations on a 192-bit bus, G92-based. They later gimped them to 48sp with 512MB RAM and a 128-bit bus.
    a lot of gimping going on at nVidia. It's like a Frankenstein GPU, a PCB garage sale.

    Quote Originally Posted by eric66 View Post
    die g92 dieeeeeeeeeeeeeeeeee
    look at my quote in Jowy Atreides' sig.

    A history lesson in complacency.
    Like no other industry in the history of the world, computers ushered in dramatic increases in performance and functionality and unheard-of price reductions. Competition is fierce. Those who take a break and fail to push the boundaries are doomed to be among the forgotten has-beens: Cyrix, 3Dfx, VIA, S3, Abit.

    Four years of milking Athlon64/Opteron sales, and a delayed Barcelona with the TLB bug, almost crushed AMD.

    That's why nVidia's 2007-2010 rebrandfest is concerning. Sure, way back before the 8800GT, you could argue that DX10.1 was a novelty. But time goes by fast. A hush-hush DX10.1 GT240 rollout, 2 MONTHS AFTER AMD launched DX11 cards... pathetic. Just because you were making money yesterday doesn't guarantee future revenue.

    It's a mystery why nVidia alone has taken it upon themselves to sabotage graphics progress. It's time to get their act together. Optimus is a great start and should be in EVERY notebook.

    No more 5-7 month delays for the launch of DX11 Fermi mainstream and value derivatives. Bad Company 2 is coming out in 20 days. Hup-too-hup-too, double time, soldier!
    Last edited by ***Deimos***; 02-10-2010 at 03:55 PM.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  23. #323
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176
    Quote Originally Posted by 570091D View Post
    lol, i don't remember any g92 cards with 768mb of memory.... and gpu-z says it's a gt330 not a gt340.

    9600GSO

    And the 240 is an 8800gt, the 230 is a gso

  24. #324
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    yeah, they just magically rebranded DirectX 10.1 into their chips too. 40nm doesnt count for anything either? i guess the definition of rebrand has changed. if thats the case then many chips are just rebrands and no one should buy those. in my opinion if it still uses silicon its a rebrand.

  25. #325
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by Jowy Atreides View Post

    8800gt = 9800gt = GT 240 = GT 340

    identical, equal performance, different names, 2 of which imply there is performance upgrade
    Quote Originally Posted by Jowy Atreides View Post
    9600GSO

    And the 240 is an 8800gt, 230 is gso
    you're contradicting yourself... and how exactly is this GF100 news?

    Quote Originally Posted by Chumbucket843 View Post
    yeah, they just magically rebranded directX 10.1 into their chips too. 40nm doesnt count for anything either? i guess the definition of rebrand has changed. if thats the case then many chips are just rebrands and no one should buy those. in my opinion if it still uses silicon its a rebrand.


    Jowy A, shouldn't you be mad at ati for rebranding the 4870 as a 5770? just because it's built on 40nm and has dx11 doesn't mean it's new!

    I am an artist (EDM producer/DJ), pls check out mah stuff.
