Page 8 of 9
Results 176 to 200 of 210

Thread: Intel Core i5 Performance

  1. #176
    Xtreme X.I.P.
    Join Date
    Apr 2005
    Posts
    4,475
    Quote Originally Posted by Drwho? View Post
    So, here is some detail:

    On games, the Phenom II will do better than the Phenom just because the L3 cache is large enough, but it is not going to be any better than a Conroe with 4 MB cache at the same frequency. Of course, you can find corner cases, but those are rare.
    It looks like the 2nd load port of the Phenom I or II is not helping it, due to the fact that its decoding bandwidth becomes a problem in 64-bit mode (the extra prefix byte increases the required bandwidth by 25%). The lack of a wider decode engine means the back end of the processor is not fed properly. The next step for AMD is to copy Hyper-Threading; if they don't, they will never get back to a competitive position, and if they do, they will have to pay attention to power efficiency while doing it. Doubling the number of decoders would be a power catastrophe.

    The next Core mainstream will have no problem there

    This is my personal opinion, my employer is not responsible for this posting.

    enough quality for you? If you ask for details, you get them; don't complain about it.
    I'm sorry, but how does that answer this:

    Have you seen Phenom II performance numbers with SLI/CF?

    Based on what AMD is showing, you'd be crazy to spend a whole crapload of money for the small bump in frame rate (if any) that you'd get with a Core i7.
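The "25%" figure in the quoted post can be sanity-checked with a line of arithmetic. The sketch below assumes an average 32-bit x86 instruction length of about 4 bytes (a commonly quoted ballpark, not a measurement) and one extra REX prefix byte per instruction in 64-bit code:

```python
# Sanity check of the "extra byte adds ~25% decode bandwidth" claim.
# Both numbers are illustrative assumptions, not measurements.

AVG_X86_INSN_BYTES = 4.0   # assumed average 32-bit instruction length
REX_PREFIX_BYTES = 1.0     # REX prefix carried by most 64-bit instructions

increase = REX_PREFIX_BYTES / AVG_X86_INSN_BYTES
print(f"extra fetch/decode bandwidth: {increase:.0%}")  # 25%
```

In reality not every 64-bit instruction needs a REX prefix and average lengths vary by workload, so the 25% is an upper-end estimate rather than a measured figure.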

  2. #177
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816
    Quote Originally Posted by Cooper View Post
    I'm sorry, but how does that answer this:

    The answer was already posted:

    http://www.guru3d.com/article/core-i...ance-review/19

    DrWho, The last of the time lords, setting up the Clock.

  3. #178
    Xtreme X.I.P.
    Join Date
    Apr 2005
    Posts
    4,475
    Quote Originally Posted by Shintai View Post
    Maybe due to the available bandwidth for the GPU(s) to both memory and CPU.
    I don't see how dual-channel would cripple performance of a desktop system. But castrated PCI-e bandwidth is definitely a good point. Guess Intel wants those spending $250 on a CF or SLI setup to spend another $700-800 on an i7 rig instead of $300-400 on a Ph2/i5.
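For rough context on the PCI-e point, here is a quick sketch of PCIe 2.0 bandwidth per card at x16 versus x8, assuming the standard 5 GT/s lane rate and 8b/10b encoding (X58 runs two cards at x16/x16, while the mainstream platform was expected to split its 16 CPU lanes into x8/x8):

```python
# Usable one-direction PCIe 2.0 bandwidth per card, x16 vs. x8.
GT_PER_S = 5.0           # PCIe 2.0 raw rate per lane (GT/s)
EFFICIENCY = 8.0 / 10.0  # 8b/10b line encoding overhead

def lane_bw_gb_s(lanes: int) -> float:
    """Theoretical usable bandwidth in GB/s for a given lane count."""
    return lanes * GT_PER_S * EFFICIENCY / 8.0  # bits -> bytes

print(f"x16: {lane_bw_gb_s(16):.1f} GB/s")  # 8.0 GB/s
print(f"x8 : {lane_bw_gb_s(8):.1f} GB/s")   # 4.0 GB/s
```

Whether halving each card's link from 8 GB/s to 4 GB/s actually hurts frame rates depends on the game and resolution; this only quantifies the headroom difference being argued about.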

  4. #179
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Cooper View Post
    I don't see how dual-channel would cripple performance of a desktop system. But castrated PCI-e bandwidth is definitely a good point. Guess Intel wants those spending $250 on a CF or SLI setup to spend another $700-800 on an i7 rig instead of $300-400 on a Ph2/i5.
    $250 on CF/SLI? That would be very, very low-end, wouldn't it? And most likely there'd be a faster single-card option that's better. Plus you wouldn't get much i5/Ph2 for $300-400. In that case the i7 should be $500-600.

    But basically it's an issue of speed, since the GPUs need to fetch textures from main memory.

    However, one thing you've got to realize: i5 is just another tiny step in where AMD/Intel are going. The IMC before that was yet another.

    The average consumer will get less and less flexibility as time passes, simply because we are moving towards SoC designs: GPUs moving onto the CPU, later on southbridge functions, etc.
    Last edited by Shintai; 12-14-2008 at 09:50 AM.
    Crunching for Comrades and the Common good of the People.

  5. #180
    Xtreme X.I.P.
    Join Date
    Apr 2005
    Posts
    4,475
    I don't see how HD4850 CF is very low-end, but you are right - the price tag is a bit low. 4850 CF can be had for $300 and 260GTX SLI for $400 at Newegg.

  6. #181
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816
    Quote Originally Posted by Cooper View Post
    I don't see how HD4850 CF is very low-end, but you are right - the price tag is a bit low. 4850 CF can be had for $300 and 260GTX SLI for $400 at Newegg.

    if you look here: http://www.guru3d.com/article/core-i...ance-review/19

    a Core i7, even the 920, will give you a big boost in Crysis ... it is more than adding one additional card on a Core 2 ... the Core i7 920 is around $300 ... it should be the beginning of a new rig if you plan SLI or CrossFire; the data is obvious.
    Last edited by Drwho?; 12-14-2008 at 10:57 AM.
    DrWho, The last of the time lords, setting up the Clock.

  7. #182
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,356
    Quote Originally Posted by Drwho? View Post
    if you look here: http://www.guru3d.com/article/core-i...ance-review/19

    a Core i7, even the 920, will give you a big boost in Crysis ... it is more than adding one additional card on a Core 2 ... the Core i7 920 is around $300 ... it should be the beginning of a new rig if you plan SLI or CrossFire; the data is obvious.
    "Core 2 Quad QX 9770 versus Core i7 965"

    Seems benchmarks of the 920 would be more relevant.

  8. #183
    Xtreme X.I.P.
    Join Date
    Apr 2005
    Posts
    4,475
    Quote Originally Posted by Drwho? View Post
    if you look here: http://www.guru3d.com/article/core-i...ance-review/19

    a Core i7, even the 920, will give you a big boost in Crysis ... it is more than adding one additional card on a Core 2 ... the Core i7 920 is around $300 ... it should be the beginning of a new rig if you plan SLI or CrossFire; the data is obvious.
    Well, I think we already went through the "CPU is more important for gaming than graphics" phase. Secondly, that's a 3.2GHz i7 on that chart.

  9. #184
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,366
    Quote Originally Posted by Cooper View Post
    Well, I think we already went through the "CPU is more important for gaming than graphics" phase.
    Not so fast, please.
    If AMD can't optimize their drivers to use multicore processors, that doesn't mean the CPU isn't important for gaming at all.



  10. #185
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    The difference between a Phenom and a Core i7 at 1920x1200 with a single card is over 10 FPS? Furthermore, why does this effect diminish as more cards are added? Shouldn't the trend be reversed?
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  11. #186
    Registered User
    Join Date
    Nov 2005
    Posts
    70
    Quote Originally Posted by Drwho? View Post
    So, here is some detail:

    On games, the Phenom II will do better than the Phenom just because the L3 cache is large enough, but it is not going to be any better than a Conroe with 4 MB cache at the same frequency. Of course, you can find corner cases, but those are rare.
    It looks like the 2nd load port of the Phenom I or II is not helping it, due to the fact that its decoding bandwidth becomes a problem in 64-bit mode (the extra prefix byte increases the required bandwidth by 25%). The lack of a wider decode engine means the back end of the processor is not fed properly. The next step for AMD is to copy Hyper-Threading; if they don't, they will never get back to a competitive position, and if they do, they will have to pay attention to power efficiency while doing it. Doubling the number of decoders would be a power catastrophe.

    The next Core mainstream will have no problem there

    This is my personal opinion, my employer is not responsible for this posting.

    enough quality for you? If you ask for details, you get them; don't complain about it. (We are in a Core thread, I did notice.)
    Wow, you know so much that you know how AMD designed their yet-unreleased processor. How is that?

    And yeah, maybe AMD needs to copy Hyper-Threading, if only to even out the copying going on lately. The Core i7 looks a lot like a Phenom, except more than a year later: native quad core, integrated memory controller, QPI (just like HT), same L3 cache structure. Hmmm, maybe dreamland is at AMD HQ. Did you go and see, then smile?

  12. #187
    Xtreme Addict
    Join Date
    Aug 2008
    Posts
    2,036
    Ummm, hang on....

    OK, had to check to be sure this was an i5 thread. For a second there I thought I was seeing posts about AMD stuff again...wait a minute! I was!

    AMD has nothing to do with i5. It wasn't designed by AMD, so the "copying" BS is just that...pure 100% grade A farm fresh BS. Some may not be aware of this, but Intel isn't in a position where they have to "copy" anything. They are doing extremely well on their own.

    Please take the Fanboi BS elsewhere. Preferably another forum where they allow that kind of thing. This isn't one of them.

  13. #188
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by taurus_sel View Post
    Dude, do you understand English? Your responses make no sense.
    Just stop trolling already....

    Final warning, fwiw.

  14. #189
    Xtreme Addict
    Join Date
    Aug 2004
    Location
    Austin, TX
    Posts
    1,346
    Quote Originally Posted by Drwho? View Post
    So, here is some detail:

    On games, the Phenom II will do better than the Phenom just because the L3 cache is large enough, but it is not going to be any better than a Conroe with 4 MB cache at the same frequency. Of course, you can find corner cases, but those are rare.
    It looks like the 2nd load port of the Phenom I or II is not helping it, due to the fact that its decoding bandwidth becomes a problem in 64-bit mode (the extra prefix byte increases the required bandwidth by 25%). The lack of a wider decode engine means the back end of the processor is not fed properly. The next step for AMD is to copy Hyper-Threading; if they don't, they will never get back to a competitive position, and if they do, they will have to pay attention to power efficiency while doing it. Doubling the number of decoders would be a power catastrophe.
    That's an interesting point about the decode bandwidth, especially since AMD increased the I-cache bandwidth to 256 bits. Why doesn't Intel have a similar problem? You seem to be implying that AMD is bottlenecked by the front end. That seems like low-hanging fruit though: increasing the number of decoders is simple. They don't need to double the number of decoders: why not just add one more? Both AMD and Intel chips are heavily optimized, so I doubt that the bottleneck is huge.

    Also, although I doubt that they will need to double the number of decoders, let's assume for now that this is the best method for performance and area. Why would this be a "power catastrophe"? First of all, let me acknowledge that decoders use up tons of power in the CPU (~20%, last time I checked). However, decoders are highly parallel, unlike the back end of the CPU. They can also easily be gated when not in use. In addition, designers can optimize them for low power by removing dynamic logic and using high-Vt transistors, and keep high clock speeds by adding another pipeline stage (since macro-ops are independent of each other, there is no slowdown other than branch mispredictions due to the longer pipeline).

    In summary, two main points:
    1. Decoders will be gated when not in use.
    2. Decoders can be made to be power efficient.

    I agree that AMD needs to add SMT though, or use some sort of clustering or shared resource technique.

    http://www.realworldtech.com/page.cf...1607033728&p=3
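As a back-of-the-envelope illustration of points 1 and 2 above, here is a toy model built on the post's rough "~20% of core power" figure. Every number here is an illustrative assumption, not a measurement; the point is only that with ideal clock gating, decoder power scales with how often the decoders are actually busy:

```python
# Toy decoder-power model: extra decoders cost little if they idle gated.
CORE_POWER_W = 100.0    # assumed total core power
DECODE_FRACTION = 0.20  # the post's rough ~20% estimate for decode

def decoder_power(n_decoders: int, baseline: int, activity: float) -> float:
    """Decoder power with ideal clock gating at a given activity factor."""
    per_decoder = CORE_POWER_W * DECODE_FRACTION / baseline
    return n_decoders * per_decoder * activity

# 3 decoders busy 100% of the time vs. 4 decoders busy 60% of the time:
print(f"{decoder_power(3, 3, 1.0):.1f} W")  # 20.0 W
print(f"{decoder_power(4, 3, 0.6):.1f} W")  # 16.0 W
```

Real gating is never ideal (there is static leakage and gating overhead), so this is the optimistic end of the argument the post is making.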

  15. #190
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by taurus_sel View Post
    Wow, you know so much that you know how AMD designed their yet-unreleased processor. How is that?

    And yeah, maybe AMD needs to copy Hyper-Threading, if only to even out the copying going on lately. The Core i7 looks a lot like a Phenom, except more than a year later: native quad core, integrated memory controller, QPI (just like HT), same L3 cache structure. Hmmm, maybe dreamland is at AMD HQ. Did you go and see, then smile?
    Because it's basically 100% identical to the already-released Shanghai...

    And if the i7 looks like a Phenom, then a cow looks like a monkey. They are both animals, they both fart, and they both have bad breath. How can they not be 100% identical?
    Last edited by Shintai; 12-14-2008 at 04:18 PM.
    Crunching for Comrades and the Common good of the People.

  16. #191
    Registered User
    Join Date
    Nov 2005
    Posts
    70
    Quote Originally Posted by T_Flight View Post
    Ummm, hang on....

    OK, had to check to be sure this was an i5 thread. For a second there I thought I was seeing posts about AMD stuff again...wait a minute! I was!

    AMD has nothing to do with i5. It wasn't designed by AMD, so the "copying" BS is just that...pure 100% grade A farm fresh BS. Some may not be aware of this, but Intel isn't in a position where they have to "copy" anything. They are doing extremely well on their own.

    Please take the Fanboi BS elsewhere. Preferably another forum where they allow that kind of thing. This isn't one of them.
    It is an i5 thread...so why is the Intel rep talking about (sorry talking) a Phenom II? And also, why is the Intel rep saying that a Core i7 is best for Xfire/SLI? I wonder how many people are complaining about that. Exactly....

    Or is it that there are different rules for different people? Exactly...

  17. #192
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Quote Originally Posted by taurus_sel View Post
    It is an i5 thread...so why is the Intel rep talking about (sorry talking) a Phenom II? And also, why is the Intel rep saying that a Core i7 is best for Xfire/SLI? I wonder how many people are complaining about that. Exactly....

    Or is it that there are different rules for different people? Exactly...
    Because at the moment, the i7 is the best for SLI and CF. No one is talking about price/performance ratio; we are talking about what is best for multi-GPU scenarios, and that is the i7. Did you look at the review I posted? Or is your fanboyism blinding you?

    And you were the one who brought up the issue of Phenom II. Why don't you check your own posts? DrWho isn't wrong when he says the i7 is the best for SLI and CF.

  18. #193
    Registered User
    Join Date
    Nov 2005
    Posts
    70
    Quote Originally Posted by Clairvoyant129 View Post
    Because at the moment, the i7 is the best for SLI and CF. No one is talking about price/performance ratio; we are talking about what is best for multi-GPU scenarios, and that is the i7. Did you look at the review I posted? Or is your fanboyism blinding you?

    And you were the one who brought up the issue of Phenom II. Why don't you check your own posts? DrWho isn't wrong when he says the i7 is the best for SLI and CF.
    Yeah, but I thought it was an i5 thread? So he can mention the i7 and I can't mention the Phenom II? And I question whether the i7 really is better, based on what AMD showed and what people from this forum who were there saw. But I already said that in my original posts.

  19. #194
    Registered User
    Join Date
    Dec 2008
    Location
    Beaverton, OR
    Posts
    19
    Sorry, this has probably already been discussed, but I don't really understand the point of the i5. The i7 offers a decent performance boost over the C2D and C2Q processors, but from my understanding the i5 series will offer smaller gains than the i7 series. Won't that put the performance of the i5 in or around the C2D/C2Q range? I don't see why anyone with a decent system right now would upgrade to an i5-series chip.

  20. #195
    Xtreme Enthusiast
    Join Date
    Apr 2008
    Posts
    912
    Quote Originally Posted by scarywoody View Post
    Sorry, this has probably already been discussed, but I don't really understand the point of the i5. The i7 offers a decent performance boost over the C2D and C2Q processors, but from my understanding the i5 series will offer smaller gains than the i7 series. Won't that put the performance of the i5 in or around the C2D/C2Q range? I don't see why anyone with a decent system right now would upgrade to an i5-series chip.
    For basically every task this chip will be just as good as current Nehalems. As for memory bandwidth, a dual-channel DDR3 IMC is still going to be great for desktop applications, and that's basically the only practical difference.

    The only reason I bought this thing was that I couldn't stand the old setup a second longer. As long as it overclocks and the socket is maintained in the future, Lynnfield should be a better buy.
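The bandwidth gap behind the dual-channel point above can be estimated quickly. The sketch below assumes DDR3-1333 and 64-bit (8-byte) channels, with Bloomfield's triple-channel IMC included for comparison:

```python
# Theoretical peak DDR3 bandwidth: dual-channel (Lynnfield-style IMC)
# vs. triple-channel (Bloomfield-style IMC). DDR3-1333 assumed.

def ddr3_peak_gb_s(channels: int, mt_per_s: int = 1333) -> float:
    """Peak bandwidth in GB/s: 8 bytes per transfer per 64-bit channel."""
    return channels * 8 * mt_per_s / 1000.0

print(f"dual channel  : {ddr3_peak_gb_s(2):.1f} GB/s")  # 21.3 GB/s
print(f"triple channel: {ddr3_peak_gb_s(3):.1f} GB/s")  # 32.0 GB/s
```

These are theoretical peaks; as the post argues, desktop applications rarely come close to saturating even the dual-channel figure.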

  21. #196
    Registered User
    Join Date
    Dec 2008
    Location
    Beaverton, OR
    Posts
    19
    OK, I guess I'll have to wait for more i5-series details before I start judging and loading this thread up with already-answered questions. Cheers.

  22. #197
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816


    Quote Originally Posted by Shadowmage View Post
    That's an interesting point about the decode bandwidth, especially since AMD increased the I-cache bandwidth to 256 bits. Why doesn't Intel have a similar problem? You seem to be implying that AMD is bottlenecked by the front end. That seems like low-hanging fruit though: increasing the number of decoders is simple. They don't need to double the number of decoders: why not just add one more? Both AMD and Intel chips are heavily optimized, so I doubt that the bottleneck is huge.

    Also, although I doubt that they will need to double the number of decoders, let's assume for now that this is the best method for performance and area. Why would this be a "power catastrophe"? First of all, let me acknowledge that decoders use up tons of power in the CPU (~20%, last time I checked). However, decoders are highly parallel, unlike the back end of the CPU. They can also easily be gated when not in use. In addition, designers can optimize them for low power by removing dynamic logic and using high-Vt transistors, and keep high clock speeds by adding another pipeline stage (since macro-ops are independent of each other, there is no slowdown other than branch mispredictions due to the longer pipeline).

    In summary, two main points:
    1. Decoders will be gated when not in use.
    2. Decoders can be made to be power efficient.

    I agree that AMD needs to add SMT though, or use some sort of clustering or shared resource technique.

    http://www.realworldtech.com/page.cf...1607033728&p=3
    Decoders are not highly parallel if you try to extract some code fusion early, as has been done since Conroe. Phenom I/II is limited to its 3 large decoders; Conroe/Penryn and Nehalem are up to 5 effectively ... with code fusion. That is a severe difference that they pay for.

    Decoders are not so cold, even if highly efficient. The problem is feeding your out-of-order buffers early enough to extract parallelism. At this, AMD is really late. They did catch up when they acquired the design of the Athlon, but they now need to get into a serious improvement rebuild, and that is not easy; it takes years.

    I keep thinking that with threading taking off in the software community, Hyper-Threading is a must for everybody now, which is why I am convinced they will implement it too. Doing it the way the Intel guys did is very complex; it took many steppings and trial and error to figure out, from the P4 to Nehalem. I think AMD will try a more brutal approach and duplicate the decoders, because of the lack of time to design it. They should have started in the P4 time frame, when it showed a promising improvement for 5% more transistors in the core.

    Again, this is my personal opinion. It may be biased, but I try to keep it honest, as I have to keep it honest for my own understanding of the industry.

    DrWho, The last of the time lords, setting up the Clock.
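The 3-wide versus "up to 5 with fusion" comparison in the post above can be illustrated with a toy decode model, where a cmp/jcc pair occupies a single decode slot when fusion is enabled. This is a sketch of the idea only, not a model of either real pipeline:

```python
# Toy decode-throughput model: how macro-op fusion lets a 4-wide decoder
# consume 5 source instructions in one cycle. Purely illustrative.

def decoded_per_cycle(stream, width, fuse=False):
    """Count source instructions consumed in one decode cycle."""
    consumed = slots = i = 0
    while i < len(stream) and slots < width:
        if fuse and stream[i] == "cmp" and i + 1 < len(stream) and stream[i + 1] == "jcc":
            consumed += 2  # cmp+jcc pair fused into one macro-op (one slot)
            i += 2
        else:
            consumed += 1
            i += 1
        slots += 1
    return consumed

loop = ["cmp", "jcc", "add", "load", "store"]
print(decoded_per_cycle(loop, width=3))             # 3 (3-wide, no fusion)
print(decoded_per_cycle(loop, width=4, fuse=True))  # 5 (4-wide + fusion)
```

The gain obviously depends on how often fusible pairs appear in real code; branchy integer code benefits most, which is consistent with the gaming discussion earlier in the thread.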

  23. #198
    Xtreme Addict
    Join Date
    Aug 2004
    Location
    Austin, TX
    Posts
    1,346
    Quote Originally Posted by Drwho? View Post
    Decoders are not highly parallel if you try to extract some code fusion early, as has been done since Conroe. Phenom I/II is limited to its 3 large decoders; Conroe/Penryn and Nehalem are up to 5 effectively ... with code fusion. That is a severe difference that they pay for.

    Decoders are not so cold, even if highly efficient. The problem is feeding your out-of-order buffers early enough to extract parallelism. At this, AMD is really late. They did catch up when they acquired the design of the Athlon, but they now need to get into a serious improvement rebuild, and that is not easy; it takes years.
    I'm wondering what the difference is between an AMD decode unit and an Intel "simple decoder" unit. It seems from the RWT link in my previous post that the AMD decoder unit is more complex than its Intel counterpart (1-2 uops instead of just 1). Also, AMD does have some code fusion, although I don't think it's as heavy as Intel's version.

    As for the "serious improvement rebuild", I have it on good word that Bulldozer is a complete redesign which should "put AMD back into the lead". Until then, Shanghai and its derivatives are band-aids to stem the bleeding until it arrives.

    Sidenote: the necessity of uop fusion just proves how out of date x86 has become... yes, I know that x86 is Intel's biggest asset and will never die out...


    I keep thinking that with threading taking off in the software community, Hyper-Threading is a must for everybody now, which is why I am convinced they will implement it too.
    My personal theory is that they'll double the issue width to 6-way with parallel 3-instruction packets (instead of the current single-issue "packet"). Each packet has a single thread-ID for multithreading. I think that this will put AMD in the lead while keeping it a logical evolution of their back-end.

    Doing it the way the Intel guys did is very complex; it took many steppings and trial and error to figure out, from the P4 to Nehalem. I think AMD will try a more brutal approach and duplicate the decoders, because of the lack of time to design it. They should have started in the P4 time frame, when it showed a promising improvement for 5% more transistors in the core.
    Pardon me for saying so, but AMD's architecture has always been much more aggressive than Intel's, especially after Intel's P4 "mistake". This is because AMD needs to make up for their 20% clock speed deficit due to manufacturing. IIRC AMD's K8 had a similar FO4 delay to Northwood (about 10-ish), despite its obvious lead in IPC. Currently Intel has the more evolved architecture, so to speak, but that's probably the fault of AMD's execution lately rather than their architects' design aggressiveness. I'm not trying to downplay the awesome work done by Ronak and the rest of the guys in ORCA, but as far as their general architecture is concerned, it's pretty conservative, especially when compared to academia or even the DEC Alphas from the 1990s: same Tomasulo algorithms, not even a physical register file (although with a new matrix scheduler, very nice).
    Last edited by Shadowmage; 12-15-2008 at 09:25 AM.

  24. #199
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816
    Quote Originally Posted by Shadowmage View Post
    I'm wondering what the difference is between an AMD decode unit and an Intel "simple decoder" unit. It seems from the RWT link in my previous post that the AMD decoder unit is more complex than its Intel counterpart (1-2 uops instead of just 1). Also, AMD does have some code fusion, although I don't think it's as heavy as Intel's version.

    As for the "serious improvement rebuild", I have it on good word that Bulldozer is a complete redesign which should "put AMD back into the lead". Until then, Shanghai and its derivatives are band-aids to stem the bleeding until it arrives.

    Sidenote: the necessity of uop fusion just proves how out of date x86 has become... yes, I know that x86 is Intel's biggest asset and will never die out...




    My personal theory is that they'll double the issue width to 6-way with parallel 3-instruction packets (instead of the current single-issue "packet"). Each packet has a single thread-ID for multithreading. I think that this will put AMD in the lead while keeping it a logical evolution of their back-end.



    Pardon me for saying so, but AMD's architecture has always been much more aggressive than Intel's, especially after Intel's P4 "mistake". This is because AMD needs to make up for their 20% clock speed deficit due to manufacturing. IIRC AMD's K8 had a similar FO4 delay to Northwood (about 10-ish), despite its obvious lead in IPC. Currently Intel has the more evolved architecture, so to speak, but that's probably the fault of AMD's execution lately rather than their architects' design aggressiveness. I'm not trying to downplay the awesome work done by Ronak and the rest of the guys in ORCA, but as far as their general architecture is concerned, it's pretty conservative, especially when compared to academia or even the DEC Alphas from the 1990s: same Tomasulo algorithms, not even a physical register file (although with a new matrix scheduler, very nice).
    Sometimes I don't follow you ... For example, why say that x86 is out of date? It is designed to use the legacy of the code; you can boot DOS 3.1 on your Core i7. That is the power of it: you never have to worry about backward compatibility. Look at the cellphone business, where the lack of compatibility makes the market so fragmented that when you buy a phone, you are hostage to the brand you bought it from... I am not going to point out that Opera is not released on the iPhone ... oh! I just did ...
    x86 and its legacy is what makes sure this does not happen. Imagine if every PC ran its own manufacturer's version ... a Dell version, an HP version ... it would be a nightmare.
    Fortunately, Intel and AMD are smart enough to agree together every few years; sometimes Intel takes it from AMD, sometimes the other way around. (Fanboys on both sides stupidly argue about this all the time; the reality is that the engineers behind it deal with it in a very elegant manner, and with respect for each other. I am in this pool; I have buddies working in Austin with a green badge.)

    The strength of x86 is what you describe as its weakness.

    For the rest, you have to understand that making a decoder "larger" introduces a lot of issues in the speed paths; it is not so easy to do without slowing down the frequency of the CPU. Barcelona was a very good demonstration of this.

    We will see what our buddies in green show up with. I like competition; it allows me to ask my management for more toys, so let's see.

    Today, I fixed my Game & Watch Nintendo from 1981.

    My mom gave it to me when I was 12 ... dude! I am having just as much fun as I did back then!!!!
    Last edited by Drwho?; 12-18-2008 at 10:32 PM.
    DrWho, The last of the time lords, setting up the Clock.

  25. #200
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Drwho? View Post
    Today, I fixed my Game & Watch Nintendo from 1981.

    My mom gave it to me when I was 12 ... dude! I am having just as much fun as I did back then!!!!
    I played that exact game too :o

    It was my favourite, and I broke records on it on a train between Denmark and Germany.

    I also had the one with Snoopy and tennis.
    Crunching for Comrades and the Common good of the People.

