answer was already posted:
http://www.guru3d.com/article/core-i...ance-review/19
DrWho, The last of the time lords, setting up the Clock.
$250 on CF/SLI? That would be very, very low-end, wouldn't it? And most likely there would be a faster single-card option that's better. Plus, you wouldn't get much i5/Ph2 for $300-400. In that case the i7 should be $500-600.
But basically it's an issue of speed, since GPUs need to fetch textures from main memory.
However, one thing you've got to realize about the rest: i5 is just another tiny step in where AMD/Intel are going. The IMC before that was yet another.
The average consumer will get less and less flexibility as time passes, simply because we are moving toward SoC designs: GPUs moving onto the CPU, later on southbridge functions, etc.
Last edited by Shintai; 12-14-2008 at 09:50 AM.
Crunching for Comrades and the Common good of the People.
I don't see how HD4850 CF is very low-end, but you are right - the price tag is a bit low. 4850 CF can be had for $300 and GTX 260 SLI for $400 at Newegg.
if you look here: http://www.guru3d.com/article/core-i...ance-review/19
a Core i7, even the 920, will give you a big boost in Crysis ... it is more than adding one additional card on a Core 2 ... the Core i7 920 is around $300 ... it should be the beginning of a new rig if you plan SLI or CrossFire; the data is obvious.
Last edited by Drwho?; 12-14-2008 at 10:57 AM.
The difference between a Phenom and a Core i7 at 1920x1200 with a single card is over 10 FPS? Furthermore, why does this effect diminish as more cards are added? Shouldn't the trend be reversed?
E7200 @ 3.4 ; 7870 GHz 2 GB
Intel's atom is a terrible chip.
Wow, you know so much that you know how AMD designed their yet unreleased processor. How is that?
And yeah, maybe AMD needs to copy hyperthreading, if not to even out the copying going on lately. The core i7 looks a lot like a Phenom, except more than a year later. Native quad core, integrated memory controller, QPI (just like HT), same L3 cache structure. Hmmm, maybe dreamland is at AMD HQ. Did you go and see, then smile?
Ummm, hang on....
OK, had to check to be sure this was an i5 thread. For a second there I thought I was seeing posts about AMD stuff again...wait a minute! I was!
AMD has nothing to do with i5. It wasn't designed by AMD, so the "copying" BS is just that...pure 100% grade A farm fresh BS. Some may not be aware of this, but Intel isn't in a position where they have to "copy" anything. They are doing extremely well on their own.
Please take the Fanboi BS elsewhere. Preferably another forum where they allow that kind of thing. This isn't one of them.
That's an interesting point about the decode bandwidth, especially since AMD increased the I-cache bandwidth to 256bits. Why doesn't Intel have a similar problem? You seem to be implying that AMD is bottlenecked by the front-end. That seems like some low-hanging fruit though: increasing the number of decoders is simple. They don't need to double the number of decoders: why not just add one more? Both AMD and Intel chips are heavily optimized, so I doubt that the bottleneck is huge.
Also, although I doubt that they will need to double the number of decoders, let's assume for now that this is the best method for performance and area. Why would this be a "power catastrophe"? First of all, let me acknowledge that decoders use up tons of power in the CPU (~20%, last time I checked). However, decoders are highly parallel, unlike the back-end of the CPU. They can also easily be gated when not in use. In addition, designers can optimize them for low power by removing dynamic logic and using high-Vt transistors, and keep clock speeds high by adding another pipeline stage (since macro-ops are independent of each other, there's no slowdown other than branch mispredictions due to a longer pipeline).
In summary, two main points:
1. Decoders will be gated when not in use.
2. Decoders can be made to be power efficient.
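The two points above can be put in rough numbers. Here's a back-of-envelope model: the ~20% decoder power share comes from the post; the idle fraction and residual leakage when gated are made-up illustrative assumptions, not measured figures.

```python
# Toy model of core power when decoders are clock-gated while idle.
# decoder_share (~20%) is from the discussion above; idle_frac and
# gated_leak are illustrative assumptions.

def core_power_with_gating(total_w, decoder_share, idle_frac, gated_leak=0.1):
    """Effective core power with idle decoders gated.

    gated_leak: fraction of decoder power still burned while gated
    (residual leakage, assumed 10% here).
    """
    decoder_w = total_w * decoder_share
    rest_w = total_w - decoder_w
    # Decoders burn full power while active, only leakage while gated.
    effective_decoder_w = decoder_w * ((1 - idle_frac) + idle_frac * gated_leak)
    return rest_w + effective_decoder_w

# 100 W core, decoders at 20% of power, gated half the time:
print(core_power_with_gating(100.0, 0.20, 0.50))  # → 91.0
```

Even with these generous assumptions the saving is single-digit watts, which is why gating alone doesn't settle the "power catastrophe" question either way.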
I agree that AMD needs to add SMT though, or use some sort of clustering or shared resource technique.
http://www.realworldtech.com/page.cf...1607033728&p=3
Last edited by Shintai; 12-14-2008 at 04:18 PM.
It is an i5 thread... so why is the Intel rep talking about a Phenom II? And also, why is the Intel rep saying that a Core i7 is best for Xfire/SLI? I wonder how many people are complaining about that. Exactly....
Or is it that there's different rules for different people. Exactly...
Because at the moment, i7 is the best for SLI and CF. No one is talking about price/performance ratio, we are talking what is the best for multi GPU scenarios and it is i7. Did you look at the review I posted? Or is your fanboyism blinding you?
And you were the one who brought up the issue about Phenom II. Why don't you check your own posts? DrWho isn't wrong when he said i7 is the best for SLI and CF.
Sorry, this has probably already been discussed, but I don't really understand the point of the i5. The i7 offers a decent performance boost over the C2D and C2Q processors, but from my understanding the i5 series will offer smaller performance gains than the i7 series. Won't that put the performance of the i5 in or around the C2D/C2Q range? I don't see why anyone with a decent system right now would upgrade to an i5 series chip.
For basically every task this chip will be just as good as current Nehalems. As for memory bandwidth, a dual-channel DDR3 IMC is still going to be great for desktop applications, and that's basically the only practical difference.
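For a sense of scale, the peak theoretical bandwidth gap is easy to compute. DDR3-1333 is an assumption for illustration; parts of that era shipped at various speed grades.

```python
# Peak theoretical DDR3 bandwidth: dual channel (Lynnfield-style)
# vs triple channel (Bloomfield-style). DDR3-1333 is assumed here.

def peak_bw_gb_s(mt_per_s, channels, bus_bytes=8):
    # Each channel is 64 bits (8 bytes) wide, one transfer per MT.
    return mt_per_s * bus_bytes * channels / 1e9

print(peak_bw_gb_s(1333e6, 2))  # dual channel  → ~21.3 GB/s
print(peak_bw_gb_s(1333e6, 3))  # triple channel → ~32.0 GB/s
```

A 50% peak gap on paper, but desktop workloads rarely come close to saturating either, which is the point being made above.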
The only reason I bought this thing was because I couldn't stand the old setup a second longer. As long as it'll overclock and the socket is maintained in the future, Lynnfield should be a better buy.
Ok, I guess I'll have to wait for more i5 series details before I start judging and loading this thread up with already answered questions. cheers
Decoders are not highly parallel if you try to extract some code fusion early, as has been done since Conroe. Phenom I/II is limited by its 3 large decoders; Conroe/Penryn and Nehalem go up to 5 large ... with code fusion. That is a severe difference that they pay for.
Decoders are not so cold, even if highly efficient. The problem is feeding your out-of-order buffers early enough to extract parallelism. At this, AMD is really late. They did catch up when they acquired the Athlon design, but now they need a serious improvement rebuild, and that is not easy; it takes years.
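The decode-width point can be sketched with a toy cycle count: a 3-wide decoder with no fusion versus a 4-wide decoder where a fusible pair (e.g. cmp+jcc) collapses into one decode slot. The widths and the fusion rule are simplified assumptions, not a real pipeline model.

```python
# Toy decode throughput model: cycles to decode a block of
# instructions at a given width, where each fused pair (e.g. a
# cmp+jcc macro-fusion candidate) occupies a single decode slot.
import math

def decode_cycles(n_insts, width, fused_pairs=0):
    # Every fused pair turns two instructions into one slot.
    slots = n_insts - fused_pairs
    return math.ceil(slots / width)

# 1200 instructions, of which 10% form fusible pairs (120 pairs):
print(decode_cycles(1200, 3))                    # 3-wide, no fusion → 400
print(decode_cycles(1200, 4, fused_pairs=120))   # 4-wide + fusion   → 270
```

Under these made-up numbers the wider front end with fusion feeds the out-of-order buffers roughly a third faster, which illustrates the gap being described, nothing more.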
I keep thinking that with threading taking off in the software community, Hyper-Threading is a must for everybody now; this is why I am convinced they will implement it too. Doing it the way the Intel guys did is very complex; it took many steppings and trial and error to figure out, from the P4 to Nehalem. I think AMD will try a more brutal approach and duplicate the decoders, because of the lack of time to design it. They should have started in the P4 time frame, when it showed some promising improvement for 5% more transistors in the core.
Again, this is my personal opinion. It may be biased; if you think so, know that I try to keep it honest, as I have to keep it honest for my own understanding of the industry.
I'm wondering what's the difference between an AMD decode unit and an Intel "simple decoder" unit. It seems from the RWT link from my previous post that the AMD decoder unit is more complex than the Intel counterpart (1-2uops instead of just 1). Also, AMD does have some code fusion, although I don't think it's as heavy as Intel's version.
As for the "serious improvement rebuild", I have on good word that Bulldozer is a complete redesign which should "put AMD back into the lead". Until then, Shanghai and its derivatives are band-aids to stem off the bleeding until it arrives.
Sidenote: the necessity of uop fusion just proves how out-of-date x86 has become... yes I know that x86 is Intel's biggest asset and will never die out...
My personal theory is that they'll double the issue width to 6-way with parallel 3-instruction packets (instead of the current single-issue "packet"). Each packet has a single thread-ID for multithreading. I think that this will put AMD in the lead while keeping it a logical evolution of their back-end.
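The packet idea above can be sketched as a toy scheduler: two 3-instruction packets issue per cycle, each packet tagged with one thread-ID. This is entirely hypothetical; it models my speculation, not any shipping or announced AMD design.

```python
# Hypothetical "two 3-instruction packets per cycle, one thread-ID
# per packet" issue model. Purely illustrative.
from collections import deque

PACKET = 3           # instructions per packet
PACKETS_PER_CYCLE = 2

def issue(threads):
    """threads: dict thread_id -> deque of pending instructions.
    Returns a list of (cycle, thread_id, packet) issue events."""
    events, cycle = [], 0
    while any(threads.values()):
        for _ in range(PACKETS_PER_CYCLE):
            ready = [t for t, q in threads.items() if q]
            if not ready:
                break
            # Greedy: pick the runnable thread with the most work left.
            t = max(ready, key=lambda t: len(threads[t]))
            packet = [threads[t].popleft()
                      for _ in range(min(PACKET, len(threads[t])))]
            events.append((cycle, t, packet))
        cycle += 1
    return events

t0 = deque(f"a{i}" for i in range(6))
t1 = deque(f"b{i}" for i in range(3))
for ev in issue({0: t0, 1: t1}):
    print(ev)
```

With 6 instructions on thread 0 and 3 on thread 1, everything drains in two cycles; a single 3-wide front end would need three. That is the whole appeal of the scheme, if it were real.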
Pardon me for saying so, but AMD's architecture has always been much more aggressive than Intel's, especially after Intel's P4 "mistake". This is because AMD needs to make up for their 20% clock speed deficit due to manufacturing. IIRC AMD's K8 had a similar FO4 delay to Northwood (about 10-ish), despite its obvious lead in IPC. Currently Intel has the more evolved architecture, so to speak, but that's probably the fault of AMD's execution lately rather than their architects' design aggressiveness. I'm not trying to downplay the awesome work done by Ronak and the rest of the guys in ORCA, but as far as their general architecture is concerned, it's pretty conservative, especially when compared to academia or even the DEC Alphas from the 1990s: same Tomasulo algorithms, not even a physical register file (although with a new matrix scheduler, very nice).
Last edited by Shadowmage; 12-15-2008 at 09:25 AM.
Sometimes I don't follow you ... For example, why say that x86 is out of date? It is designed to carry the legacy of the code; you can boot DOS 3.1 on your Core i7. That is the power of it: you never have to worry about backward compatibility. Look at the cellphone business, where the lack of compatibility makes the market so fragmented that when you buy a phone, you are hostage to the brand you bought it from... I am not going to point out that Opera has not been released on the iPhone ... oh! I just did ...
x86 and its legacy are what make sure this does not happen. Imagine if every PC ran its own manufacturer's version ... a Dell version, an HP version ... it would be a nightmare.
Fortunately, Intel and AMD are smart enough to agree every few years; sometimes Intel takes it from AMD, sometimes the other way around. (Fanboys on both sides stupidly argue about this all the time; the reality is that the engineers behind it deal with this in a very elegant manner, and with respect for each other. I am in this pool; I have buddies working in Austin with a green badge.)
The strength of x86 is what you describe as its weakness.
For the rest, you have got to understand that making a decoder "larger" introduces a lot of issues in the speed paths; it is not so easy to do without slowing down the frequency of the CPU. Barcelona was a very good demonstration of this.
We will see what our buddies in green will show up with. I like competition; it allows me to ask my management for more toys, so let's see.
Today, I fixed my Game & Watch Nintendo from 1981.
My mom gave it to me when I was 12 ... dude! I am having just as much fun as I did back then!!!!
Last edited by Drwho?; 12-18-2008 at 10:32 PM.