Yes it's possible, and seeing your SATA HDDs' data get corrupted is possible too :rolleyes:
You are posting outdated info that is nine months old. The info is also wrong in quite a few ways: 2-3MB of L3 after we have 8MB of L3 on Nehalem? And most likely some 12MB or so on Westmere? That would be pretty darn odd. Also, it will be able to do at least 12 DP FLOPs per cycle with AVX.
This is true, AMD often makes radical changes to their process ...
Nonetheless, it jogged my memory: AMD actually thickened the gate for Barcelona
http://www.eetimes.com/news/design/s...leID=202100946
They are 25% thicker than Intel's, in an attempt to reduce leakage. I remember because I emailed John about this revelation from his reverse-engineering work; here is his response:
However, when I see a 4.7 GHz AMD processor at 65 nm I will remember your post.
Quote:
Hi, Jack,
Yes, AMD has opted to reduce gate leakage by increasing the thickness (for both the Athlon and Barcelona), and compensate for the performance deficit in other ways.
cheers!
- john
No, not at all ... I guess I am being more general in my assertion (no offense intended). My point is that the overall clock that you, I, and everyone else can choose from is not just a function of process technology, but a strong interaction between circuit design, transistor design, and process variation.
I see all too often people point to the 4.7 GHz Power6 and claim that since IBM can do it, so can AMD. This is a false assumption, since Power6 was stripped down and based on an in-order design specifically to achieve high clock speed. Add to that that IBM purposely altered the process technology to increase transistor performance by sacrificing leakage, and it is now clear how they achieved this remarkable clock.
This approach works for them because they are not trying to market Power6 as anything other than enterprise-class servers and clusters. Those workloads are application-specific enough that the throughput and performance produce a perf/Watt that makes such a beast effective overall, even when it needs to dissipate an ungodly amount of power.
The journal I referenced above is a very good series of articles that illustrates the balancing act companies perform to produce the final product, from power constraints, to target performance goals, to the economics of scale (i.e. die size). What IBM was able to achieve is inconceivable for AMD to achieve. The x86 architecture has evolved into a RISC-like core with a heavily reworked CISC front end; the OoOE method produces such a complex logic and functional set that, in the worst case, the rate-limiting circuit has a much larger depth in FO4 delays than what IBM has in Power6.
You can even see it in some of the empirical data in the posts in the AMD section, where you see a platform perfectly stable at a particular clock in 32-bit mode but crapping out when trying 64-bit ... simply because you have now added the extra logic for the 64-bit extensions, the total delay through the device is now greater than the period of the clock, and hence you get some instability.
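That critical-path argument can be sketched numerically. The FO4 depths (26 for the 32-bit path, 29 with the 64-bit extension logic in the path) and the 15 ps per-FO4 delay below are hypothetical numbers chosen purely for illustration, not measurements of any real chip:

```python
# Sketch of the critical-path argument above.
# All numbers are hypothetical, for illustration only.

FO4_DELAY_PS = 15.0  # assumed per-FO4 delay for the process, in picoseconds

def max_stable_ghz(fo4_depth: int) -> float:
    """Max clock at which the critical path still fits in one clock period."""
    period_ps = fo4_depth * FO4_DELAY_PS
    return 1000.0 / period_ps  # 1000 ps per ns -> GHz

f32 = max_stable_ghz(26)  # assumed 32-bit critical-path depth
f64 = max_stable_ghz(29)  # same path plus 64-bit extension logic

# A clock that is stable in 32-bit mode can exceed the 64-bit limit:
clock = 2.45  # GHz, chosen to sit between the two limits
print(f"32-bit limit: {f32:.2f} GHz, 64-bit limit: {f64:.2f} GHz")
print("stable in 32-bit:", clock <= f32, "| stable in 64-bit:", clock <= f64)
```

With these made-up numbers the 32-bit limit comes out near 2.56 GHz and the 64-bit limit near 2.30 GHz, so a 2.45 GHz overclock passes in 32-bit mode and fails in 64-bit, which is exactly the behavior described above.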
I have generally looked at these two players as Intel having the edge in process manufacturing talent and AMD having the edge in design talent. One of AMD's philosophies is to 'load it down', so to speak: they throw a lot of great technology (architecturally) into their designs ... consequently, their designs come in much more complex, and as a rule of thumb, complexity is inversely proportional to Fmax.
Jack
If you have references, I would really love to see those; I have not been able to find that type of data readily. Now, true ... neither the P3 nor Willamette got to these clocks (your FO4s are probably correct, I have not seen the data myself), but they were also not built on a 65 nm process with a 1.09 nm gate.
...
For those interested, FO4 delay is short for fan-out-of-4 delay, which is basically the time it takes a single inverter to drive four identical inverters. A more interesting and technical explanation is here: http://www.realworldtech.com/page.cf...1502231107&p=1 The overall clocking, though, depends not only upon the depth of the FO4 delays, but on the actual speed of those delays.
Here are some historical physical times I have been able to find (googled) for a single FO4 delay for Intel, which is process driven (from a blurb in the ITRS):
Intel, 1998: FO4 delay = 33 ps, CV/I = 2.57 ps, ratio = 12.84
Intel, 1999: FO4 delay = 31.5 ps, CV/I = 2.00 ps, ratio = 15.75
Intel, 2000: FO4 delay = 21.3 ps, CV/I = 1.64 ps, ratio = 12.99
Intel, 2001: FO4 delay = 18 ps, CV/I = 1.34 ps, ratio = 13.33
The average ratio between FO4 delay and CV/I device delay found experimentally is then 13.73.
http://www.itrs.net/Links/2003ITRS/L...FO4Writeup.pdf
In today's CPUs, the balance between pipeline depth and total FO4 delays per stage is a critical balance determining clocking capabilities.
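As a sanity check, the ratio column from the table above can be recomputed, and a per-FO4 delay turned into a rough Fmax estimate. The 16-FO4-per-stage pipeline depth used below is an assumed example, not a figure from the table:

```python
# Recompute the FO4/(CV/I) ratios from the table above and the quoted
# average, then turn an FO4 delay into a rough Fmax estimate.

table = {  # year: (FO4 delay in ps, CV/I device delay in ps)
    1998: (33.0, 2.57),
    1999: (31.5, 2.00),
    2000: (21.3, 1.64),
    2001: (18.0, 1.34),
}

ratios = {year: fo4 / cvi for year, (fo4, cvi) in table.items()}
avg = sum(ratios.values()) / len(ratios)
print(f"average FO4/(CV/I) ratio: {avg:.2f}")  # ~13.7, matching the text

def fmax_ghz(fo4_ps: float, fo4_per_stage: int) -> float:
    """Rough Fmax when each pipeline stage holds fo4_per_stage FO4 delays."""
    return 1000.0 / (fo4_ps * fo4_per_stage)

# Hypothetical example: the 18 ps FO4 from 2001 with an assumed deep,
# 16-FO4-per-stage pipeline
print(f"estimated Fmax: {fmax_ghz(18.0, 16):.2f} GHz")
```

With 18 ps FO4 and 16 FO4 per stage, the stage period is 288 ps, giving roughly 3.5 GHz; shallower stages (more FO4 per stage) lower Fmax, which is the pipeline-depth balancing act being described.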
That's very optimistic, but Intel are comfortably ahead now, will still be comfortably ahead at the end of '08 and in '09, and there will be nothing that can compete with Nehalem. Intel will still have a wide spread of products after Nehalem launches, because LGA775 will still be going strong.
So I'd bet the pricing will be like the pricing of the initial X6800, or the first Extreme Edition quads, or the 8800 Ultra when it launched. And even if the cheapest Nehalem CPUs are affordable, the total system cost will make many people weep ... DDR3 will still be expensive, and a decent Nehalem overclocking mobo will cost :mad: :eek: :shocked: :shakes: ...
Not only that, but IBM doesn't care about the yields (not that they were great in the first place).
Out of 200 CPUs on a wafer, IBM can discard 190 too-leaky/low-clock parts and still end up with a massive profit (when you sell a CPU for $20k or more). In the end, they hardly sell 50k CPUs a year.
And some compare that to Intel or AMD, which sell hundreds of millions at ~$100. Obviously, x86 CPUs are binned for maximum yield.
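The two business models above can be made concrete with some back-of-the-envelope arithmetic. The die counts and prices below are the rough, illustrative figures from the post, not real yield data:

```python
# Back-of-the-envelope comparison of the two binning strategies described
# above. All figures are illustrative assumptions, not real data.

def wafer_revenue(dies_per_wafer: int, good_dies: int, price: float) -> float:
    """Revenue per wafer when only good_dies out of the total are sold."""
    assert good_dies <= dies_per_wafer
    return good_dies * price

# IBM-style: 200 dies, discard 190, sell the 10 best bins at $20k each
ibm = wafer_revenue(200, 10, 20_000)

# x86-style: bin nearly everything for yield and sell at ~$100 each
x86 = wafer_revenue(200, 190, 100)

print(f"IBM-style: ${ibm:,.0f}/wafer, x86-style: ${x86:,.0f}/wafer")
```

Even after throwing away 95% of the wafer, the IBM-style wafer brings in roughly ten times the revenue of the fully-binned commodity wafer, which is why the yield comparison between the two markets is misleading.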
I said PCI-E, not SATA. The two are typically on different controllers too, if I'm not mistaken: SATA/RAID on the ICH/southbridge and PCI-E on the NB.
I've never really had any experience with increased PCI-E speed corrupting my SATA drives, though. I mean, I do know the data would inevitably have to go across PCI-E, but ... ehh, it's 3:30 in the morning and I have a midterm tomorrow. Good night.
^^ yep
SATA is handled by Ibexpeak on the parts that have the PCIe controller integrated on-die.
Right now SATA is handled by the ICHs and PCIe by the northbridges, and guess what? When you push the PCIe frequency beyond 120 or so you get massive corruption of your SATA hard drives. The exact reason? I don't know, but it happens (maybe upping the PCIe freq also increases the DMI freq). Ask all the Xtreme Legends in the 3DMark section: will they run 120+ PCIe and a SATA drive? NO WAY ... they always use IDE drives because SATA ones go nuts.
So we will see, but even if the PCIe controller is on die in Nehalem and SATA handled by Ibexpeak, I bet upping PCIe beyond a certain limit will give you SATA corruption.
Ever used that auto-overclocking feature for the PCI-E bus? It can still interfere with data coming to and from the south bridge, meaning it can mess with DMI transfers both ways. The north bridge is a hub; data doesn't go in and out independently the way it would through a switch (if it were a networking device). The NB being a hub is a larger problem than the FSB that many here complain about. :rolleyes:
K8 and Pentium dual-cores are a commodity
expecting a nehalem CPU/mobo/ram purchase to be affordable or good value for money in the absence of competition is like expecting the qx9650 to be <$500, or expecting nvidia not to release an overclocked 8800 GTS as the 9800 GTX. it ain't gonna happen :yepp:
Cheaper Nehalem CPUs and motherboards will still be turning up years after the launch. Look at the Pentium dual-core range, launched a year after Core 2.
So if Intel don't sell Nehalem in volume at relatively low prices, what will they make money on when people ain't upgrading? Or will they earn all the money by selling a very, very low number of CPUs and chipsets?
Competition ain't doing much, really. It is giving us some 20% price cuts and superheated GPUs/CPUs. It's a joke to think competition is the key factor in everything.
No, it's simply about volume and the right price, where the lower price and higher volume end up producing the highest overall profit. They also need innovation and new reasons to sell you a new product, to keep the existing consumer base in continual replacement; otherwise it would end up like the gadget segment. And that's AMD's, nVidia's, and Intel's biggest fear ever.
From what I gather of the Nehalem launch ... it'll be similar to the Conroe launch.
First out are standard desktop processors in the $250-$1100 range. A quarter later out are standard server processors in the $300-$1500 range. After a quarter or two, double-die processors start to ship in the $500-$1100 and $600-$1500 price range respectively. At about this time the line extends down to $100 and $200 respectively. Later on is a refresh and die-shrink, and availability of new sub-$80 processors.
I'll personally be holding off on Nehalem until we can get Gainestown or Beckton--or possibly even after the 32nm shrink for whatever replaces Beckton.
you're dreaming :rolleyes:
If I were an Intel shareholder and Intel did what you just described, I would go ape:banana::banana::banana::banana:
Indeed, Intel's biggest competitor isn't AMD but rather their own installed base of products. They have to give people a reason to upgrade, be it increased performance for enthusiasts, increased portability and battery life for laptops/MIDs/mobile phones, lower power consumption to cut power bills, or smaller, quieter form factors.
It's easy to get people to buy Penryn over Phenom, but getting someone to buy Nehalem when they have a Conroe at home is an altogether more difficult task.
After all, for most people a Pentium 4 system can still do most of what they want, and they'll likely only upgrade when their computer can't do a task they deem vital, or when the hardware fails.
Basically, it's not about selling new computers to people who have Conroe; it's about selling new computers to people who have K8 or are still using NetBurst. In reality, Nehalem and Conroe aren't competing, since Conroe was basically aimed at upgrades from sub-2.8GHz P4s (PGA478, Northwood, Willamette) while Nehalem is for all other P4s and PDs (Prescott, Smithfield, and Presler) and all K8s. It won't be until late Nehalem and after Nehalem that Intel aims to replace computers with Conroe, at least in the desktop space. Remember, average consumers expect desktops to last 4 years and laptops 2-3 years. Most people don't have money to throw at new computers; they use them for specific tasks and don't need much. Then again, a P4 is more than enough to check email and browse the internet, as well as play Flash games and whatever else most people want to run.
Quote:
Originally Posted by Donnie27
I don't think any of us here really know what is going on in the minds of Intel corporate folks. However, I would guess that Intel would actually prefer to stop or at least hinder overclocking if they could, at least in the lower bins, as long as it doesn't cost them too much to do so. I don't remember anyone taking them seriously when they claimed that the reason they introduced multiplier locking was due to concerns about chip remarking. That is what they said, but I believe they were a lot more concerned about losing money to people who would otherwise pay more for a higher-clocked chip. What did you expect them to say? Would you expect them to actually admit that it was also an inexpensive way to stop overclockers? That would be bad PR. If they choose to take a few extra steps to make it difficult or impossible to overclock Nehalem, you can bet that they are going to have some PR-approved reason. The question is, are you going to believe that as well?
I don't think Intel is going to make any kind of huge effort to prevent overclocking, but if it is easy and inexpensive for them to do so I think they might give it a try. 'Good will' means very little when you have the fastest chips by a large margin. They are well aware that that is exactly the sort of offer that 'enthusiasts' like us simply can't refuse. As a percentage of the market I cannot imagine that overclockers represent a large portion of their income anyway. Which is why it is not worth it to them to spend much in either preventing or allowing it. Still, I would guess that it would slightly enhance their bottom line if overclocking became impossible due to an architectural change.
Personally I would have paid up to $500 or so for my new CPU, but I only had to pay $200 for my E8400. It clocks about as high as anything else and my apps are mostly single threaded. Why should I pay more? Also keep in mind that overclocking has become a lot more popular than it was when they introduced the multiplier locks. I am sure they are not unaware of that.
Intel did complain about shady VARs selling overclocked systems. No, WE can't read their minds, but I never tried to. You should do the same. It would be VERY SILLY for Intel to sponsor Fugger's demo and then do as you suggest;) Believe what you like, no matter how far-fetched it is. This small market can't influence Intel or AMD's bottom line =P The so-called Black Editions are marketing and nothing else. They still don't overclock worth a danged.
It would be a waste of time for Intel or anyone else to worry about legit overclockers compared to some jerk selling a 2.4GHz as a 3GHz. There were plenty of bogus companies selling counterfeit everything: fake MS mice, re-badged RAM, overclocked processors, Windows all the way back to 3.11, and even DOS LOL!
Contradicting your own statements, huh? But you're dead wrong about goodwill. Intel spent too much time and money gaining that BACK from AMD. Even as the A64 was barely better and the X2 was CLEARLY better, Intel kept goodwill right up until Prescott. Many folks loved their Northwood Cs.
If Intel or AMD expect to see any real profits up front* in this market, they're in trouble IMHO.
Then you're unaware of how much higher the higher-multiplier processors can go:D They hit the wall much later than the cheaper models. The problem with what you're saying here is that Nehalem *should* start out faster clock for clock, meaning it doesn't have to be overclocked as hard. Then we'll see what the dual-core version does.
Last but not least, as was proven at IDF, Intel and most of the folks there are VERY AWARE of overclocking and this site.