OK, and yes, you're right! Well, I'm just comparing L2 and L3 against each other. L1 has considerably lower latency than L2 or L3 on both Intel and AMD; there's no way L1 should be anywhere near a latency of 10 cycles, otherwise it'd be pointless. Intel currently doesn't use L3 in their consumer products, but they have refined their L3 to almost L2 standards (if you've seen Intel's Itanium server products like Montecito, you can't deny that Intel has been working on L3). That's a big benefit for Intel processors: they can load more cache onto the chip, placed farther from the core, while suffering very little performance loss from the greater latency. I've noticed AMD has been saying a lot about their L3 cache, but with the latencies I've seen, it doesn't look like it helps processor performance much. The technologies you mention may help there, but like I've said, I don't know much about them, so it'd be fruitless (and quite foolish) for me to argue with you on that point (or with anyone else, for that matter).

That does make sense, though. I was saying that regardless of architecture differences, processors benefit from lower-latency cache. It's kind of like processor frequency: whatever the architecture, the higher the clock, the better the performance it will output (though how much it improves varies from chip to chip).
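Actually, instead of arguing from spec sheets, you can measure these latencies yourself. Here's a rough sketch (my own, not from anything posted above) of the classic pointer-chasing microbenchmark in C. The working-set sizes are my guesses at typical L1/L2/L3 capacities, not numbers from any particular chip, and it assumes a POSIX system with clock_gettime():

[code]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS 10000000UL

/* Average load-to-load latency over a working set of `bytes`. */
static double chase_ns(size_t bytes)
{
    size_t n = bytes / sizeof(size_t);
    size_t *ring = malloc(n * sizeof(size_t));

    /* Build a random single-cycle permutation (Sattolo's algorithm) so
       every load depends on the previous one and the hardware prefetcher
       can't guess the next address. */
    for (size_t i = 0; i < n; i++)
        ring[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j in [0, i) */
        size_t tmp = ring[i]; ring[i] = ring[j]; ring[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < ITERS; i++)
        idx = ring[idx];                        /* serialized dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;                 /* keep the loop from being optimized away */
    (void)sink;
    free(ring);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)ITERS;
}

int main(void)
{
    /* Rough guesses: 16 KB sits in L1, 256 KB in L2, 8 MB exercises
       L3 (or main memory on a chip without L3), 128 MB is mostly RAM. */
    size_t sizes[] = { 16u << 10, 256u << 10, 8u << 20, 128u << 20 };
    for (int i = 0; i < 4; i++)
        printf("%7zu KB working set: %6.2f ns per load\n",
               sizes[i] >> 10, chase_ns(sizes[i]));
    return 0;
}
[/code]

Build it with something like gcc -O2 and you should see the ns-per-load jump each time the working set spills out of a cache level; divide by your clock period to get cycles. On an L3-less chip, the 8 MB point should jump straight toward memory latency.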
But I think before we continue this discussion, more research would have to be done. If we're going to base most of our information on assumptions, we might as well be a bunch of [h]ardforum newbies having a dumb flamewar.