time to save $$$ for a new system : p
Could anyone confirm whether Lynnfield and Havendale are LGA1156 or LGA1160? :shrug:
Thank you. :)
I'm too lazy to count the number of pins on the Lynnfield CPU in this picture. :D
http://i245.photobucket.com/albums/g...ehalem/L_5.jpg
Lynnfield is 1156 actually.
Things look very promising. Especially for us 775 users... :D
I'm curious, can you upload the i5 "MultiCore Benchmark" file/score to the site?
http://www.xtremesystems.org/forums/...d.php?t=210021
I saw where Tweaktown.com stole the OP's content and put it up on their site. LOL
One year from now, that's like 2 years of technology development.
And besides, good luck overclocking i5 to 4GHz+.
I'll get a new video card before then, which will enhance my fps more than any new CPU coming out in the next 2 years.
;)
Unless AMD pulls a rabbit out of the hat with their new P2.
Hmm, wonder if i5 will use the newer low-voltage DDR3? Wonder if it's safe to use the higher-voltage DDR3 2x2GB sticks in dual channel. :shrug:
[QUOTE=Donnie27;3491860]Thanks +2! When or if the NDA is lifted please fill us in.[/QUOTE]
Sure, I guess that happens only when I die :ROTF:
not correct!
The "problem" with i7 is that QPI voltage is tied to DRAM voltage... if you raise DRAM voltage you raise QPI voltage also... i5 doesn't have QPI!
The good thing with i5 is that it'll bring a true successor to the LGA775 platform... LGA13xx will stay only for the server/WS market, and LGA1156 (previously LGA1160) will be the mainstream platform... the current Bloomfield platform is a dead end for mainstream desktop, just as Socket 940 was for AMD... LGA1156 is the way of the future for Intel desktop users...
Maximises the appeal for multi-card users, that's for sure (1366).
Pff, Lynnfield's integrated PCIe bridge supports an 8x/8x PEG 2.0 split and that's sufficient for CFX and SLI! (Effectively that's the same bandwidth as 16x/16x PEG 1.1, and that's enough for CrossFire and SLI)...
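A quick back-of-envelope check of that bandwidth claim, as a sketch using the commonly quoted per-lane spec figures (post-8b/10b-encoding numbers, not measurements):

```python
# Rough per-lane PCIe bandwidth in GB/s, after 8b/10b encoding overhead.
PCIE_1_1_PER_LANE = 0.25   # 2.5 GT/s * 8/10 / 8 bits per byte
PCIE_2_0_PER_LANE = 0.5    # 5.0 GT/s * 8/10 / 8 bits per byte

gen1_16x = 16 * PCIE_1_1_PER_LANE   # full x16 slot, Gen 1.1
gen2_8x = 8 * PCIE_2_0_PER_LANE     # x8 slot, Gen 2.0

print(gen1_16x, gen2_8x)  # both 4.0 GB/s per card
```

So an 8x/8x Gen 2.0 split really does match 16x/16x Gen 1.1 per card, on paper at least.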
And what benchmarking of i5 will prove is that clock-for-clock it gives the same performance as Bloomfield for less money, and since LGA1156 mobos will be much more affordable, it raises the question of the reasoning behind purchasing any future Bloomfield CPU and LGA1366 mobo...
The P43 is just lacking the CrossFire controller on the motherboard, not on the chip. The chip is the exact same chip as the P45. Possibly lower binned, though.
And you think it makes sense to segment on the basis of enabling/disabling PEG lane splitting, and not other features, like HT on/off, turbo on/off... etc.? Is the PEG controller such a critical and complex part of the CPU that Intel needs to create differentiation in the CPU lineup on the basis of PEG capabilities? Yeah, right!
As a reminder, AMD, NVIDIA and INTEL do differentiation on the basis of harvesting good transistors, not by artificially enabling/disabling features... only Apple does that, with their DVDRWs capable of writing only 8x on cheaper machines :)
Segmentation should be done on the mobo. Otherwise it would be a complete messup. :/
Intel's smarter than this.
As if 1366 vs 1156 wasn't enough, now you got 1366 with or without SLi, and 1156 with or without SLi, or without MGPU. So no.
Sorry, but didn't AMD and NVIDIA use to "laserlock" parts of their GPUs, thus producing the low-performance mainstream parts (I cba to look up which series they did it in)?? What about NVIDIA having special drivers with OpenGL overlay optimizations just for the Quadro series, when it works perfectly on the GeForce series? I bet there are more examples :P.
PS I think I might have read your post wrong :p You probably meant "unlike AMD and NVIDIA, INTEL are doing differentiation on the base of harvesting good transistors, and not artificial enabling / disabling features..." :P. Oh well, figures :up:
LGA1156, LGA1160.. What a cluster:banana::banana::banana::banana:, can't they decide?
http://www.fudzilla.com/index.php?op...10915&Itemid=1
http://smartmanufacturing.net/Gen3VRTestToolParts.html
Apparently Intel design and development tools use the 'LGA1160' socket; now I'm not going to count those pins, but still. Is this some outdated site, or is Fuad stumbling again?
First off, Dr., "Enthusiast" doesn't mean folks with more money than brains. Some of us aren't rich, and the economy affects many of us here as well. Most geeks aren't 14-to-18-year-olds still living at home with Mom and Dad on unlimited budgets.
What if I want a power system with the fastest processor in the world and I DON'T game? Current X58 boards are fine for power gamers, but not for the many others who don't make games their number one priority.
So you're saying that an i5 @ say 2.6 GHz can't serve two, let's say, HD 4850s in the same fashion as an i7 can at the same frequency?! How on Earth can that be possible when everything we know so far about i5 says it's basically the same CPU as i7 but w/o QPI and with the PEG controller inside?
I don't buy it!
8x/8x CrossFire works just fine until you get to extreme resolutions like 2560x1600.
"Non benching" people will enjoy i5 regardless of the "huge benefits" to get a i7 for multi gpu.
http://www.xtremesystems.org/forums/...d.php?t=202249
My only fear is that in order to get SLI support, mobo makers will have to use the nForce 200, since there will be so many different boards that NVIDIA won't do the SLI licensing like on X58 boards...
Still, I think the second non-8x/8x-capable Lynnfield is actually Havendale, which will be dual core only, with an MCM northbridge (with integrated graphics).
Someone who has the cash to get multiple video cards is assumed to be able to afford the price difference and get an i7.
If that's not your case, the product is not aimed at you.
It's for enthusiasts who pay hundreds of dollars for ~15% performance increases. There is enough demand for these products... low volume compared to mainstream platforms, but it's their flagship :)
Actually, depending on my next car purchase (leftover funds), I'm not waiting until late summer or early fall. I do fully understand what you're saying here. I waited for Nehalem and Deneb so far; i5 can wait until late 2009 or early 2010. Unless it really sucks, I'm getting a Phenom II LOL! In fact, the order I'd place them in for my next upgrade is:
Phenom II
Q9550
i5
i7
Until 35 days ago, i7 was at the top and nothing else was really considered.
There is a huge advantage when using SLI and CF with Core i7.
http://www.guru3d.com/article/core-i...ance-review/19
:rolleyes: a small bump huh? While Phenom II will get a good boost over Phenom, it will still be lagging behind Core 2 Quads.
If you want the best, you need to spend the money for it. This is Xtremesystems, not Lamesystems or Bangforbucksystems.
What if I want i7 for something other than games? What if I want a real power-user system, with triple channel, two 1.5TB HDDs (they didn't follow motherboard trends) and maybe a Raptor or two that's reasonable compared to the motherboard. X-Fi, multi-camera security card, analog capture card? I don't want CrossFire or SLI, and if I did, the Green guys aren't that much slower :rofl:. Hehehe, I already got 6GB of RAM.
Let me put this a different way. I want a power USER board, not a power gamer board or even a cheap board. WTF is on this board to make the cheapest one $244?
Thank goodness the rest of the market hasn't followed motherboard trends. I'm NOT bad-mouthing the processor here. I'm saying the board is overpriced for what you folks will be getting. Namely, last year's tech at cutting-edge prices.
To think of it I'd buy one @ $200 if it had scantily clad women on it (has to be life-sized)
But I agree. Isn't X58 SUPPOSED to be cheaper, given that the chipset itself is massively cheaper and requires less cooling? "More PCB layers" is not an excuse; the extra layers are probably cheap, definitely less than $10 for the additions.
This is what market segmentation brings you, I guess. S754 and S939 also had this problem to an extent, so it's not just Intel.
It won't. I'm sure under SLI/CF Phenom II will lag behind current Core 2 Quads and will scale similarly with current Phenoms.
Let's be realistic: Phenom II, while an improvement, is the same as Kentsfield -> Yorkfield, a die shrink with a cache increase and some core optimizations. Core i7, on the other hand, is a monster.
What are you expecting? A miracle?
Not to mention, most people who will be spending tons of money on high-end SLI and CF systems will not be going for the "bang for buck" systems. They will want to exploit every bit of performance out of their SLI/CF systems.
better change that to 3-way sli....
your post quality is going down :rofl:
The performance was always heavily CPU-limited; this has changed a lot now with Phenom II. Also, the 890 chipset adds something additional to the CF part.
Check the i7 and K10 bus architectures; except for a bit faster bus, the difference is not that big...
So, here is some detail:
Quote:
Originally Posted by DrWho?
On games, the Phenom II will do better than Phenom just because the L3 cache is large enough; it is not going to be any better than a Conroe with 4MB cache at the same frequency. Of course, you can find corner cases, but those are rare.
It looks like the 2nd load port of the Phenom I or II is not helping it, due to the fact that its decode bandwidth is becoming a problem on 64 bits (the extra byte increases the bandwidth required by 25%). The lack of a wider decode engine does not properly feed the back end of the processor. The next step for AMD is to copy Hyper-Threading; if they don't, they will never come back to a competitive position, and if they do, they will have to pay attention to being power efficient when doing it. Doubling the number of decoders would be a power catastrophe.
The next Core mainstream will have no problem there :up:
This is my personal opinion, my employer is not responsible for this posting. :clap:
Enough quality for you? If you ask for details, you get them; don't complain about it. :up: :rofl: (We are in a Core thread, I did notice :) )
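For what it's worth, the "extra byte = 25%" remark above is simple arithmetic. Here's a sketch, assuming a hypothetical ~4-byte average x86 instruction length (the real average varies by workload, so treat the input as an assumption):

```python
# Back-of-envelope check of the "+25% decode bandwidth" remark.
# The 4-byte average instruction length is an assumed round number;
# the REX prefix adds one extra byte per instruction in 64-bit mode.
avg_len_32bit = 4.0     # assumed average instruction bytes (32-bit code)
rex_prefix = 1.0        # extra prefix byte in 64-bit mode

increase = rex_prefix / avg_len_32bit
print(f"{increase:.0%}")  # 25%
```

With a longer real-world average instruction length, the relative hit would be smaller, which is why the figure is a rough upper-end estimate rather than a measured one.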
answer was already posted:
http://www.guru3d.com/article/core-i...ance-review/19
:)
$250 on CF/SLI? That would be very, very low-end, wouldn't it? And most likely there'd be a faster single-card option that's better :D Plus you wouldn't get much i5/Ph2 for $300-400. In that case the i7 should be $500-600.
But basically it's an issue of speed, since GPUs need to fetch the textures from main memory.
However, one thing you gotta realize along with the rest: i5 is just another tiny step in where AMD/Intel are going. The IMC before that was yet another.
The average consumer will get less and less flexibility as time passes, simply because we are moving toward SoC designs: GPUs moving to the CPU, later on southbridge functions, etc.
I don't see how HD4850 CF is very low-end, but you are right - the price tag is a bit low. 4850 CF can be had for $300 and 260GTX SLI for $400 at Newegg.
if you look here: http://www.guru3d.com/article/core-i...ance-review/19
a Core i7, even the 920, will give you a big boost in Crysis... it is more than adding one additional card on Core 2... the Core i7 920 is around $300... it should be the beginning of a new rig if you plan SLI or CrossFire, the data is obvious. :up:
Not so fast pls. :D
If AMD can't optimize their drivers to use multicore processors, that doesn't mean the CPU isn't important for gaming at all.
http://media.bestofmicro.com/1/X/165...l/image021.png
http://media.bestofmicro.com/1/T/165...l/image017.png
http://media.bestofmicro.com/1/R/165...l/image015.png
The difference between a phenom and a Core i7 at 1920x1200 with a single card is over 10 FPS? Furthermore, how does this effect diminish with more cards added? Shouldn't the trend be reversed?
Wow, you know so much that you know how AMD designed their yet unreleased processor. How is that?
And yeah, maybe AMD needs to copy hyperthreading, if not to even out the copying going on lately. The core i7 looks a lot like a Phenom, except more than a year later. Native quad core, integrated memory controller, QPI (just like HT), same L3 cache structure. Hmmm, maybe dreamland is at AMD HQ. Did you go and see, then smile?
Ummm, hang on....
OK, had to check to be sure this was an i5 thread. For a second there I thought I was seeing posts about AMD stuff again...wait a minute! I was!
AMD has nothing to do with i5. It wasn't designed by AMD, so the "copying" BS is just that...pure 100% grade A farm fresh BS. Some may not be aware of this, but Intel isn't in a position where they have to "copy" anything. They are doing extremely well on their own.
Please take the Fanboi BS elsewhere. Preferably another forum where they allow that kind of thing. This isn't one of them.
That's an interesting point about the decode bandwidth, especially since AMD increased the I-cache bandwidth to 256bits. Why doesn't Intel have a similar problem? You seem to be implying that AMD is bottlenecked by the front-end. That seems like some low-hanging fruit though: increasing the number of decoders is simple. They don't need to double the number of decoders: why not just add one more? Both AMD and Intel chips are heavily optimized, so I doubt that the bottleneck is huge.
Also, although I doubt that they will need to double the number of decoders, let's assume that this is the best method for performance and area for now. Why would this be a "power catastrophe"? First of all, let me acknowledge that decoders use up tons of power in the CPU (~20%, last time I checked). However, decoders are highly parallel, unlike the back-end of the CPU. They can also easily be gated when not in use. In addition, designers can optimize them for low power by removing dynamic logic and using high-Vt transistors, and keep high clock speeds by adding another pipeline stage (since each macro-op is independent of the others, there's no slowdown other than branch mispredictions due to a longer pipeline).
In summary, two main points:
1. Decoders will be gated when not in use.
2. Decoders can be made to be power efficient.
I agree that AMD needs to add SMT though, or use some sort of clustering or shared resource technique.
http://www.realworldtech.com/page.cf...1607033728&p=3
It is an i5 thread...so why is the Intel rep talking about (sorry :banana::banana::banana::banana: talking) a Phenom II? And also why is the Intel rep saying that a core i7 is best for Xfire/SLI? I wonder how many people are complaining about that. Exactly....
Or is it that there's different rules for different people. Exactly...
Because at the moment, i7 is the best for SLI and CF. No one is talking about price/performance ratio, we are talking what is the best for multi GPU scenarios and it is i7. Did you look at the review I posted? Or is your fanboyism blinding you? :rolleyes:
And you were the one who brought up the issue about Phenom II. Why don't you check your own posts? DrWho isn't wrong when he said i7 is the best for SLI and CF.
Yeah but I thought it was an i5 thread? So he can mention i7 and I can't mention Phenom II? And I question if i7 really is better based on what AMD showed and what people who were there and saw (from this forum). But I already said that in my original posts.
Sorry this has probably already been discussed, but I don't really understand the point of the i5. The i7 offers a decent performance boost over the C2D and C2Q processors, but from my understanding the i5 series will offer less in performance gains than the i7 series. Won't that be putting the performance of the i5 in or around the C2D/C2Q range? I don't see why anyone with a decent system right now would upgrade to an i5 series chip. :shrug:
For basically every task this chip will be just as good as current Nehalems. As for memory bandwidth, a dual-channel DDR3 IMC is still gonna be great for desktop applications, and that's basically the only practical difference.
The only reason I bought this thing was because I couldn't stand the old setup a second longer. :p: As long as it'll overclock and the socket is maintained in the future, Lynnfield should be a better buy.
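To put rough numbers on the dual-channel point: theoretical peak DDR3 bandwidth is channels × transfer rate × 8 bytes per transfer. A sketch, assuming DDR3-1333 as an example speed grade (the actual supported speeds aren't confirmed in this thread):

```python
# Theoretical peak DDR3 bandwidth in GB/s.
# DDR3-1333 is just an assumed example speed grade for comparison.
def peak_gbps(channels, mega_transfers_per_sec):
    # 64-bit channel = 8 bytes per transfer
    return channels * mega_transfers_per_sec * 8 / 1000

dual = peak_gbps(2, 1333)    # Lynnfield-style dual channel
triple = peak_gbps(3, 1333)  # Bloomfield-style triple channel

print(round(dual, 1), round(triple, 1))
```

Even the dual-channel figure is far beyond what typical desktop workloads of the time could saturate, which is the point being made above.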
Ok, I guess I'll have to wait for more i5 series details before I start judging and loading this thread up with already answered questions. cheers
Decoders are not highly parallel if you try to extract some code fusion early, like has been done since Conroe. Phenom I/II is limited by its 3 large decoders; Conroe/Penryn and Nehalem go up to 5 large... with code fusion. That is a severe difference that they pay for.
Decoders are not so cold, even if highly efficient. The problem is to feed your out-of-order buffers early enough to extract parallelism. At this, AMD is really late. They did catch up when they acquired the design of the Athlon, but they now need to get into a serious improvement rebuild, and that is not easy; it takes years.
I keep thinking that with threading taking off in the software community, Hyper-Threading is a must for everybody now; this is why I am convinced they will implement it too. Doing it the way the Intel guys did is very complex; it took many steppings and trial and error to figure out, from the P4 to Nehalem. I think AMD will try a more brutal approach and duplicate the decoders, because of the lack of time to design it. They should have started in the P4 time frame, when it showed some promising improvement for 5% more transistors in the core.
Again, this is my personal opinion. It may be biased; if you think so, I try to keep it honest, as I have to keep it honest for my own understanding of the industry.
:up:
I'm wondering what's the difference between an AMD decode unit and an Intel "simple decoder" unit. It seems from the RWT link from my previous post that the AMD decoder unit is more complex than the Intel counterpart (1-2uops instead of just 1). Also, AMD does have some code fusion, although I don't think it's as heavy as Intel's version.
As for the "serious improvement rebuild", I have on good word that Bulldozer is a complete redesign which should "put AMD back into the lead". Until then, Shanghai and its derivatives are band-aids to stem off the bleeding until it arrives.
Sidenote: the necessity of uop fusion just proves how out-of-date x86 has become... yes I know that x86 is Intel's biggest asset and will never die out... :shakes:
My personal theory is that they'll double the issue width to 6-way with parallel 3-instruction packets (instead of the current single-issue "packet"). Each packet has a single thread-ID for multithreading. I think that this will put AMD in the lead while keeping it a logical evolution of their back-end.
Quote:
I keep thinking that with the threading taking off in the software community, Hyperthreading is a must for everbody now, this is why i am convinced they will implement it too.
Pardon me for saying so, but AMD's architecture has always been much more aggressive than Intel's, especially after Intel's P4 "mistake". This is because AMD needs to make up for their 20% clock speed deficiency due to manufacturing. IIRC AMD's K8 had a similar FO4 delay to Northwood (about 10-ish), despite its obvious lead in IPC. Currently Intel has the more evolved architecture, so to speak, but that's probably the fault of AMD's execution lately rather than their architects' design aggressiveness. I'm not trying to downplay the awesome work done by Ronak and the rest of the guys in ORCA, but as far as their general architecture is concerned, it's pretty conservative, especially when compared to academia or even the DEC Alphas from the 1990s: same Tomasulo algorithms, not even a physical register file (although with a new matrix scheduler, very nice) :)
Quote:
Doing it the way the Intel guys did it is very complexe, it toke many stepping and try error to figure out from the P4 to Nehalem. I think AMD will try a more brutal approche, and duplicate the decoders, because the lack of time to design it. They should have started at P4 time frame, when it showed some promissing improvement for 5% transistor in the core.
Sometimes I don't follow you... For example, why say that x86 is out of date? It is designed to use the legacy of the code; you can boot DOS 3.1 on your Core i7. That is the power of it: you never have to worry about backward compatibility. Look at the cellphone business, where the lack of compatibility makes the market so fragmented that when you buy a phone, you are hostage to the brand you are buying it from... I am not going to point at Opera not being released on iPhone... oh! I just did...
x86 and its legacy is what makes sure this does not happen. Imagine if every PC ran its own manufacturer's version... a Dell version, an HP version... it would be a nightmare.
Fortunately, Intel and AMD are smart enough to agree every few years; sometimes Intel takes it from AMD, sometimes the other way around. (Fanboys on both sides stupidly argue all the time about this; the reality is that the engineers behind it deal with this in a very elegant manner, and with respect for each other. I am in this pool; I have buddies working in Austin with a green badge.)
The strength of x86 is what you describe as its weakness. :shrug:
For the rest, you've got to understand that making a decoder "larger" introduces a lot of issues in the speed paths; it is not so easy to do without slowing down the frequency of the CPU. Barcelona was a very good demonstration of this.
We will see what our buddies in Green show up with. I like competition; it allows me to ask my management for more toys, so let's see :up:
Today, I fixed my Game & Watch Nintendo from 1981.
http://www.nooperation.com/games/bab...watch-fire.jpg
My mom gave it to me when I was 12... dude! I am having just as much fun as I did back then!!!!