We'll see about that when the benchmarks are out. In particular, AMD, although falling short in absolute performance, has positioned their products nicely to offer competitive, alternative solutions to Intel.
Funny, we talk to a lot of cloud customers and the 6000 is just as interesting as the 4000 to them.
Here's the problem with ARM. It's really low power, but it is also really low performance.
So, while you can get really low power, you end up adding more and more servers. Suddenly, physical infrastructure becomes the problem.
Wouldn't you rather have a 35W (TDP) 6-core in a 2P than 12 individual 1P ARM servers?
The real issue is that ARM does not support 64-bit and it is limited to small micro servers. That becomes a management nightmare.
Some time, in the future, I can tell you a really funny story about this.
Thankfully, your exchange remains:
Quote:
Turbo (improved) in a server chip. Yeah!!!
That feature was totally unexpected and is a welcome one. I wonder if it accounts in any way for the 50% improvement in performance. I hope not!!!!
F.
Last edited by franzius; 08-23-2010 at 11:02 PM. Reason: just clarifying my thoughts, man
http://www.semiaccurate.com/forums/s...75&postcount=4
Quote:
Originally Posted by JF-AMD
If that is FUD, well, it came from you...
wow this thread has been derailed for some time now... apologist really brings hilarity in this thread haha
@JF
will it be possible or feasible for a company to say that they want a Bobcat solution for their server/cloud needs?
Ok this is how i see the sandy bridge vs Bulldozer thing:
Sandybridge - Core + SMT "2T"
Bulldozer - Module "2T"
Technically I find Bulldozer quite a refresh and very well made. But I had access to Sandy Bridge some time ago and, well, it's no slouch either. The main feature in Sandy Bridge is the flexibility, which I have been saying for months; relatively speaking, Bulldozer has less flexibility (though I have to see proper documentation before being 100% sure, but it seems so).
Now, if you look at how the module works with its power requirements, that is the reason why I say it's equal to a single core + SMT of Sandy Bridge; this is not in terms of performance but in design. The module implements turbo, power gates, etc. with respect to a single module and not a single core, whereas SNB implements them at the core level.
I know the die size of an SNB core (+L2) but don't know the die size of a Bulldozer module (+L2); I hope it's close, because the SNB die size is not huge and it may be possible for Intel to make even higher core count chips (this is the flexibility I was talking about). Now if we take the example of an 8c/16t Sandy Bridge vs an 8m/16t Bulldozer, it is apparent that all 16 threads of the Bulldozer need to be fed for good gains, but in situations where the thread count is <=8, the Sandy Bridge is much better placed for good performance.
Turbo-wise I am very interested in knowing how much the Bulldozer can do (it's a module-based mechanism, meaning the whole module gets clocked up, not just a single core); Sandy Bridge is very good at turbo, and you can expect some chips to come very near the 4GHz mark (with turbo). Also, you will be interested to know that Intel's turbo implementation can be pushed further with a minor increase in TDP. If the turbo were per-core, I do think AMD would win, because most likely the AMD cores are smaller than the SNB core.
I have talked to all of the major OEMs about their cloud needs and Bobcat is "interesting" in the "I don't want to disregard any potential technology" way. But nobody is seriously making plans around it.
The problem is that with low power, those solutions bring along a lot more servers. So you save on power, but then you have to add proportionally more servers, and managing the physical assets becomes more of a problem than power consumption.
AMD confirms the AM3+ as a necessity for a Zambezi product:
http://www.planet3dnow.de/cgi-bin/ne...&id=1282840508
It looks like they had a difficult choice to make and they went with more performance, which is, in the end, the right thing to do. Present AM3 system owners can use their AM3 CPUs in the new boards, so not everything is broken compatibility-wise. Use the old chip with the new board when AMD launches it, and then just slide in the new 8-core Bulldozer when it comes next year. Not the perfect solution for present owners, but if it is for performance reasons then it's understandable.
Quote:
"The existing G34 and C32 server infrastructure will support the new Bulldozer-based server products. In order for AMD’s desktop offering to fully leverage the capabilities of Bulldozer, an enhanced AM3+ socket will be introduced that supports Bulldozer and is backward-compatible with our existing AM3 CPU offerings."
"When we initially set out on the path to Bulldozer we were hoping for AM3 compatibility, but further along the process we realized that we had a choice to make based on some of the features that we wanted to bring with Bulldozer. We could either provide AM3 support and lose some of the capabilities of the new Bulldozer architecture or, we could choose the AM3+ socket which would allow the Bulldozer-base Zambezi to have greater performance and capability.
The majority of the computer buying public will not upgrade their processors, but enthusiasts do. When we did the analysis it was clear that the customers who were most likely to upgrade an AM3 motherboard to a Bulldozer would want the features and capability that would only be delivered in the new AM3+ sockets. A classic Catch-22.
Why not do both you ask? Just make a second model that only works in AM3? First, because that would greatly increase the cost and infrastructure of bringing the product to market, which would drive up the cost of the product (for both AMD and its partners). Secondly, adding an additional product would double the time involved in many of the development steps.
So in the end, delivering an AM3 capability would bring you a less featured product that was more expensive and later to market. Instead we chose the path of the AM3+ socket, which is a path that we hope will bring you a better priced product, with greater performance and more features - on time.
When we looked at the market for AM3 upgrades, it was clear that the folks most interested in an AM3-based product were the enthusiasts. This is one set of customers that we know are not willing to settle for second best when it comes to performance, so we definitely needed to ensure that our new architecture would meet their demanding needs, for both high performance and overclockability. We believe they will see that in AM3+."
I know I've said this before, it's just that it makes me so mad...
Yeah, but this doesn't explain why they didn't launch AM3+ with this year's 870/880G/890GX/890FX chipsets.:down:
This would have been the only way to make some decent use of the remaining compatibility: You buy an AM3+ board for your AM3 CPU in 2010, and will be able to upgrade to BD in 2011.
I don't even own an AM3 board, I just feel sorry for all people who bought a new board this year and thought they would be able to use it with BD.
You need a new board for SB, but you also need a new board for BD which also happens to show up later. Tough choice.
What pisses me off is that the 8-series isn't that big a step up from the 7-series as it is, so why launch a minor upgrade over the 7-series if you're not going to support the next chip that comes in a few months?
What AMD could have done was make a single chip compatible with both sockets (AM3 and AM3+) but disable the extra abilities when used with AM3, just like the case of AM2+ and AM2, eh.
This move by AMD means that anyone who thinks of getting a bulldozer will also take a look at high end sandy bridge because he will have to buy a mobo and a cpu in any case.
EDIT: Forgot to add that G34/C32 are supposed to get the added abilities too. They were also launched before AM3, which means AMD could have done something to retrofit AM3; hell, they could have delayed AM3 and released an AM3+-type version, and I don't think most people would have cared. :(
Come on, READ MY POST!
I said AMD could've launched AM3+ with this year's 870/880G/890GX/890FX boards; they showed up the same month as G34.
http://www.anandtech.com/show/2952
I'm not talking about the AM3 launch, you're making things up.:shakes:
Is this an official confirmation or what? There is no official AMD link on that page.
Perhaps they didn't want to unveil any Bulldozer secrets through the chipset & BIOS.
(I remember an ASUS AM3 motherboard BIOS update from several months back; inside the BIOS (source code or something else) there was suspicious 8-core CPU support information.)
It's possible, but in this situation they will lose customers to Intel instead, since now you need a new board no matter which CPU you choose. People expected that it would work with AM3, and now they're pissed off.
Besides, compatible BIOSes wouldn't need to show up until next year, and they would still be in this situation if the original AM3 had been used with BD.
It's the worst of both worlds:
- BD doesn't work in AM3 boards, because they made a new socket, but
- It's still not a new socket, no LGA, no PCIe, no added memory channels, nothing. I'm not saying that AMD would add all these features in BD from the start, but they could've added the pins for future use.
AM3+ is mechanically the same as the seven year old original K8/Sledgehammer/Opteron/Athlon FX-51 Socket 940; you can't fit any of those features into it electrically, since you need more pins.
Now we're stuck with yet another AM iteration, which obviously will need a replacement quite soon anyway, because it will be outdated before it gets launched.
as a platform AMD has done an incredible job. simple upgrade paths, and low cost boards with good quality.
when i bought my 790FX board 3 years ago, i had NO idea i was going to put a 4ghz hex into it (i was expecting just a 3.6ghz quad)
and i honestly forget what the 800-series boards brought anyway, just some USB3 and SATA 6Gb/s in a few higher end models?
also there are almost always combo sales for buying a cpu+mobo on newegg. im expecting you can have one you like for 30-40$ less, if you wait for the cpus to come out before upgrading to AM3+
if they want to make us really happy, tell us what AM3+ will offer that we couldnt get otherwise
and is it possible to build a socket that has more pins, but the same pin layout to be compatible with older?
Still not a new socket? Well, they've already said they had to do AM3+ and make Bulldozer incompatible with AM3 to ensure a performance boost (EDIT: more of a performance boost compared to sticking with AM3, I mean). No LGA, no PCI-E, is that something to complain about? Do tell me if there's a performance difference beyond the margin of error in putting the PCI-E lanes on the chipset versus on the processor. Memory channels, yes, but that's only one thing.
Replacement probably comes when the next generation comes. Heck, do we even know if AM3+ is the same socket as AM3? We know that AM2+ and AM2 are the same socket. Nothing is stopping them from adding a few more pins around the corners here and there and still maintaining compatibility, like how AM3 processors can fit into AM2+.
At least AM3 can fit into AM3+. Kinda like AM2+ and AM3, only backwards lol, though people prefer to change processors rather than mobo.
Mats, how can you know which pins are used in current AM3 processors? Maybe there are some pins already reserved for future use.
PS. You have an awesome avatar... I know it's a building in front of the sun, but at first glance it always looks like something different :D
Mech Man: Honestly, I don't know. I'm sure there's some doc about that at amd.com
Yeah, it's an awesome building! :D The best thing is that it was seriously made for an Asian language institution at some university in Brazil (I think).
I don't think there are unused pins anymore.
The first K8 was Socket 940 with dual channel DDR1. AM3 is 941 pins with dual channel DDR3 and advanced power management.
So I think there haven't been any spare pins for a long time ^^.
It might very well be that there are no free pins anymore. But there is some space for pins on the current processor, so additional pins could be added without losing compatibility with older processors. It is still a shame that they did not introduce it with the 800 series chipsets.
I guess you missed the reviews that showed how irrelevant hyperthreading is in the desktop world, more often hurting than helping, or else you wouldn't be praising SB's 16 threads.
And what will stop AMD from bringing 16 threads to the desktop, other than the lack of need to do so?
Yeah, I remember those old days, when we were overclocking the Athlon XP 2600+ Mobile @ 2.9GHz and when AMD released the first Hammer chip. It was insane :D
Please point out such reviews. Secondly, I suppose you realize HT improves throughput. Testing only the amount of time needed to complete a job, and not the actual work done, will give a false impression of HT.
For example: task 1 takes 30s to complete and processes 100 work units. I activate HT and now I get 34s. From your POV, HT is hurting. In reality you have another task, task 2, which did some actual work in that timeframe besides task 1. Let's say task 2 did 30 work units.
Overall we have: 13% more time with respect to task 1, but 30% more work done overall.
Reviews fail miserably at pointing this out. They only show you: in game X with HT enabled you go from 180 fps to 170 fps. Conclusion: HT sucks. Maybe their brain sucks, not HT.
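To make that arithmetic concrete, here's a throwaway Python sketch using the same made-up numbers from the example above (30 s / 100 units without HT, 34 s / 130 units with it); nothing in it is measured data.
Code:
# Redoing the hypothetical HT example above. The inputs are the post's
# made-up numbers, not benchmark results.
base_time, base_work = 30.0, 100        # task 1 alone, HT off
ht_time, ht_work = 34.0, 100 + 30       # task 1 + task 2 in the same window, HT on

extra_time = ht_time / base_time - 1                            # ~13% longer for task 1
extra_work = ht_work / base_work - 1                            # 30% more total work done
extra_rate = (ht_work / ht_time) / (base_work / base_time) - 1  # ~15% more work per second

print(f"task 1 runtime: {extra_time:+.0%}, total work: {extra_work:+.0%}, "
      f"work per second: {extra_rate:+.0%}")
The work-per-second line is the apples-to-apples number; the window is longer, so the raw +30% figure overstates the gain a little.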
A socket the size of your palm?
Quote:
And what will stop AMD from bringing 16 threads to the desktop, other than the lack of need to do so?
1. SB has significantly improved hyperthreading.
2. Reread the exchange. I didn't mention HT, the other guy objected that HT made it unfair to compare Zambezi (8 core) with desktop Sandy (8 core).
3. Interlagos on the desktop... there would be some concerns with cost, the socket / mem bandwidth, low power devices vs turbo frequency, size of that segment. I doubt it, but not impossible.
It goes both ways really. It depends on if throughput or latency is more important to your workload. Game engines aren't going to benefit much from more parallel throughput at the cost of latency. They need a lot of serial tasks done in sequence on the CPU side of things. A video encoder on the other hand probably couldn't care less since its workload can be broken up nicely into independent blocks of data and will probably see an improvement overall even if each work unit takes longer than it otherwise would.
if the review is about gaming, then the conclusion could be for people making a gaming rig. sure an i7 might just flat out rock, but what if you wanted a 2c/4t cpu, in some games its fine, in others its not going to be as good as 4c/4t cpus.
im simply replying to the example and how its possible that the conclusion is relevant.
my personal feelings, HT is a bandaid that is well used, it should not be around forever, but long enough to have a purpose.
Don't get me wrong, I don't particularly care for hyperthreading either. I'm just trying to keep things objective.
there are many ways to look at and evaluate a problem that is constantly evolving.
first we had single or dual cores. it was a simple black and white difference. now we have some programs still begging for more mhz, and some that have been optimized for every core available, so the answer isn't so clear as to what will provide the best performance for every situation. and "every" means more than just single threaded and unlimited threaded.
i say bandaid because i honestly believe that a better solution will be found within a few years and intel will no longer rely on HT (like reverse HT, which sounds pretty awesome if they can get it working). i did say it has a point and a purpose in the current scheme of things. so don't think i hate it.
lulz just about every Nehalem review where the comparison was made. You asking for them like they didn't exist is hilarious and pathetic at the same time
Can I ask what other imaginary task you are referring to? And what imaginary 30% more work done? And if it did so much more work, how come it ends up encoding in more time in a multithreaded app?
roflocopter...
So what has HT done for the game besides dropping the playable rating? 30% more work in an imaginary task that still led to less fps? HT is adorable.
This doesn't make any sense as both HT and more cores require the software to do precisely the same thing in order for either to be used. One could say Intel was applying a bandaid hoping the extra threads would be used with the same argument you've supplied for more cores if the assertion were true in the way you've put forth.
ok lets start over, your first post says:
i believe you have the idea way too generalized for an adequate comparison.
2 cores were added because the power efficiency improvements AMD has made allow it to add so much more. when we first got a 125W cpu, it was 3ghz. soon we will have 3.5ghz in the same envelope. and btw we now have hexies at 3.2ghz, which is faster than the original fastest quads.
if you wanted something more than a quad, then you're right, reduce mhz, add cores. but it's a purchase choice, not a bios toggle that says whether you're maxed out with a quad or a hex
however we can start to argue about what the turbo does, turning 6 slower cores into 3 faster ones, which still relies on the tdp being the same. what if instead of re-releasing the 1055T at 95W, they left it at 125W, but gave it 4 turbo cores at 3.5ghz?
owners of the 1090T have full control over turbo, they can set the number of cores, and even what the max multi is for each core at turbo, and at stock. if one of them decided to do such a thing and convert their 3.2x6/3.6x3 into 3.2x6/3.5x4, and benchmarked it, i bet it would still fall into the same tdp range.
AMD did try to prevent apps from becoming slower with the hexies by adding turbo, but since it wasn't built from the ground up that way, they were a little limited. dual core apps, like most games, should notice an increase
or we can also just argue that every Thuban clocks better than Deneb and there is no excuse if your cpu ends up being slower than Deneb.
basically im trying to point out that such a generalized statement really doesn't make it easy to prove whether it's true, since there are so many different ways one can solve such an issue.
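For the 1090T turbo idea above (3.2x6/3.6x3 vs. a hypothetical 3.2x6/3.5x4), here's a very crude napkin model in Python that assumes dynamic power scales roughly with active cores times clock, with parked cores idling around 800MHz, and ignores the voltage bump turbo actually applies; it's a sanity check of the bet, not AMD's real power management.
Code:
# Napkin model: relative dynamic power ~ sum of per-core clocks, parked cores
# idle at ~0.8 GHz. Voltage is ignored, which is a big simplification since
# turbo normally raises it.
def relative_power(active, turbo_ghz, total_cores=6, idle_ghz=0.8):
    parked = total_cores - active
    return active * turbo_ghz + parked * idle_ghz

stock_all_core = relative_power(6, 3.2)  # 6 x 3.2 GHz, what the 125W TDP is rated for
amd_turbo      = relative_power(3, 3.6)  # stock 1090T turbo: 3 cores at 3.6 GHz
alt_turbo      = relative_power(4, 3.5)  # the hypothetical 4-core 3.5 GHz turbo

for name, p in [("6x3.2 stock", stock_all_core),
                ("3x3.6 turbo", amd_turbo),
                ("4x3.5 turbo", alt_turbo)]:
    print(f"{name}: {p / stock_all_core:.2f}x of the all-core budget")
Both turbo configurations land under the all-core budget in this toy model, which is at least consistent with the bet above, though the real headroom depends heavily on how much extra voltage the turbo state needs.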
Lame excuse. HT hurts past, present and with 99% probability future games too, period. HT is good for certain things, and bad for others, this is one of them. Don't try to hide this fact. We still don't know how HT is done in SB, so I'll not comment on that.
You can hear all the bullcrap that X game supports Y number threads blablabla. The truth is that most games only load 1, 2 or maybe 3 threads to the max, the others are just for minor things that are done in a short period of time.
One way to think of hyperthreading is that it is about as far down the "spectrum of sharing" that you see in a BD module as you can go. That is, imagine that the 2 integer halves of each BD module have even more of their resources shared.
As a (HT sharing-level) consequence, the second thread does not add as much additional performance.
But also as a (HT sharing-level) consequence, when not running a second thread, the first thread gets even more of the "module's" resources to itself, and performs better.
In the BD-level sharing, the non-shared resources are made less wide, to conserve power, and allow for more overall cores and throughput in a heavily loaded situation, at the cost of some single-thread performance.
So, really, the two concepts are about different levels of resource sharing per pair of threads.
In SB, the core (or "module", if you want) has been given more resources (e.g. load added to the store port, store added to the load port: meaning you can also do 2 loads or 2 stores instead of just 1 load & 1 store) which should particularly help HT performance.
Edit:
To expand on this, going the other direction from BD-level sharing, you arrive at 2 completely separate cores per "module".
So there are a number of knobs to turn:
1. How many resources are shared?
2. How wide/narrow are the shared resources?
3. How wide/narrow are the UNshared resources?
4. How many modules on the part?
And then you have largely power-related (some die size) constraints, and you want to optimize for various workloads.
So you can argue about what the best settings are for the knobs, but really, a BD "module" sits on the same design spectrum, just in between separate cores and an Intel HT core. Meaning, I suppose, that it is not some qualitatively different thing, so rather than "OMG Hypertheading sucks!11 BD modules rock!!!!!1", perhaps more refined arguments about the choice of design knob settings would make sense?
In this context, my new sig can be expressed as: Turning down knob 1 means that the constraints & optimization require knob 3 to be turned down, as well, unless you want to mess with 4, which you don't. :)
I'm just referring to the stock factory situation. So while Thuban's extra cores doesn't hurt much in gaming performance, in reality neither does Hyperthreading. Looking at this review by IXBT, enabling Hyperthreading has essentially no impact on gaming performance:
http://ixbtlabs.com/articles3/cpu/ar...2009-5-p8.html
I got bored here at work and put together a little table/chart of what I expect to see for core scaling with BD, relative to that of the known Deneb and Thuban. Keep in mind I did not try to compare IPC between any of the chips; only the differences within each chip should be noticed. The idea is the relative efficiency between the old and new stuff.
Left to Right is # of threads running
Top to Bottom is the core # (in the second column) and their performance relative to a single core
and at the bottom is the sum of all cores, followed by a chart.
for BD
my assumption is that for every "80% core" (the second core of a module) turned off, the remaining core gets about 5-10% more IPC (extra L2) plus 15-20% turbo, so roughly a 25% total increase. And then, when fewer than 4 threads are in use, I added 5% more turbo per dropped thread.
For Thuban I put in 15% gains for 3 or fewer threads, since it's close enough; some chips go up to 19%, others 12%.
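Since the table/chart itself didn't make it into the thread, here's a rough Python version of the same guesswork; the 80% second core, the ~25% lone-module bonus, the 5% extra turbo per dropped thread and Thuban's 15% are just the assumptions stated above, not measurements of any real silicon.
Code:
# Toy core-scaling model built from the assumptions above (nothing measured).
# Throughput is relative to one plain core at stock clocks.
def bd_throughput(threads, modules=4, second_core=0.80,
                  lone_module_bonus=0.25, light_load_turbo=0.05):
    """Guessed BD scaling: threads spread one per module first; a module running
    a single thread gets the ~25% IPC+turbo bump; below 4 threads add 5% more
    turbo per dropped thread."""
    total = 0.0
    for m in range(modules):
        here = (threads > m) + (threads > modules + m)  # 0, 1 or 2 threads on module m
        if here == 1:
            total += 1.0 + lone_module_bonus
        elif here == 2:
            total += 1.0 + second_core                  # second core counts as ~0.8
    if threads < 4:
        total *= 1.0 + light_load_turbo * (4 - threads)
    return total

def thuban_throughput(threads, cores=6, turbo_gain=0.15):
    """Guessed Thuban scaling: ~15% turbo when 3 or fewer cores are loaded."""
    per_core = 1.0 + (turbo_gain if threads <= 3 else 0.0)
    return min(threads, cores) * per_core

for n in range(1, 9):
    print(f"{n} threads: BD ~{bd_throughput(n):.2f}, Thuban ~{thuban_throughput(n):.2f}")
Deneb would just be the Thuban line with cores=4 and whatever turbo assumption fits it.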
I compared the HT-disabled 4-core scores with the HT-enabled 8-thread scores. I considered Turbo to be a small factor given that it's an i7 950, which will only Turbo up at most 2 multipliers, and the games are generally multi-threaded, which lessens the chance of Turbo activating.
I don't understand this whole discussion of pro-Intel folks trying to tear down BD modules and pro-AMD folks trying to tear down Hyperthreading.
It all comes down to how many threads can be processed in a given die size and power envelope.
If SB can run 16 threads within 300mm2 @ 100W power consumption, while AMD can do exactly the same with 8 BD modules, they both look equally attractive to me from a MARKETING point of view, which ignores all the very fine architectural differences.
Also, when it comes to performance per core:
For Intel, if they can do HT in 1 core without sacrificing die size, then WHY NOT?
If AMD can do 50% better than Hyperthreading by adding only 20% more die space, then WHY NOT?
Both look like very elegant solutions to me regardless of which way you look at it. What I'm really looking forward to seeing in the future is REVERSING this solution = using more than 1 core to run 1 thread. THAT is an interesting and EFFICIENT method to boost your overall performance and utility.
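To put the two "WHY NOT?" questions above on the same scale, the useful metric is throughput gained per unit of extra die area spent on the second thread/core. The percentages below are placeholders in the spirit of the figures being tossed around in this thread (a modest HT gain for a few percent of die, a much bigger gain for ~20% more area on a BD module's second core); neither company has published numbers in exactly this form.
Code:
# Placeholder inputs only: the point is the metric, not the exact values.
def gain_per_area(extra_throughput, extra_area):
    """How much throughput each unit of extra die area buys."""
    return extra_throughput / extra_area

ht_payoff     = gain_per_area(extra_throughput=0.25, extra_area=0.05)  # ~25% for ~5% die (rough HT figure)
module_payoff = gain_per_area(extra_throughput=0.80, extra_area=0.20)  # ~80% for ~20% die (rough BD figure)

print(f"HT: ~{ht_payoff:.0f}x payoff per unit of area, BD second core: ~{module_payoff:.0f}x")
With numbers in that ballpark both approaches pay for their extra area several times over, which is really all the "WHY NOT?" argument needs.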
Reverse hyperthreading: it stays a myth for now :D
Well, I would take back my stress if thats the case.
I hardly ever make any technical statements in this site as I am not any qualified engineer or specialist. So I understood he was mocking my opinion.
apologies to the readers for any drama.
Back on the topic: if reverse hyperthreading is so hard to achieve, was all the news about Intel officially announcing its development just FUD?
i would not consider that reverse hyperthreading. anaphase is a thread that looks at memory access patterns from the real thread and prefetches data based on whatever algorithm it uses. most applications have a high cache hit rate, and most of the latency is actually waiting on cache (L3 isn't that much faster than a good DDR3 burst read or write).
Apart from "Reverse HT" being extremely difficult to solve, there is an easier way of upping single thread performance: dynamically increasing clocks based on load levels, the road both Intel and AMD are taking. Of course, there is continuous work on core-level improvements with each new generation of chips; those help too, but are not as cheap and easy as clocks.
Bulldozer and HT are both ways of looking at supporting larger numbers of threads. People are getting caught up in semantics and dicing/slicing the architectures instead of focusing on what is the performance, what is the power consumption and what is the price.
Those are the 3 that matter to customers.
that's already been done on itanium. it has 64 clock speeds. the gain in efficiency was ~25%.
the 4 things that will bring single thread perf up are faster and smarter caches, faster and smarter branch prediction, faster main memory and increased clock speed. note that two of those are speculative. it is likely that working on interconnects and parallelism will gain more performance.
Why slice up the architectures? We look into how they work, to see whether performance is going to be great or not.
OK, we can't really know what the process is like.
For the moment AMD doesn't give real performance numbers; Intel has done it with SB, with a preview already on AnandTech...
We are customers: if performance, power consumption and price are not good, we won't buy it.
For the moment we are slicing up architectures to get an idea of what's next on AMD's side, that's all.
If AMD needs to wait before doing some previews, or showing some real benchmarks to the world, we don't have any choice; we just wait and try to figure out what our best next upgrade is.
:rolleyes:
The problem with your logic is that you're forgetting Bulldozer will not actually compete against the SB shown by Anand; it will have to take on an even faster and better-equipped processor and platform. The previewed SB is meant to take on Llano.
So, in order to avoid giving Intel info on the competition to their unreleased SB chips, AMD is keeping quiet.
IMO they could and should give some benchies on Ontario/Bobcat, as well as Llano. I don't see any reason to hold those back, as they are due to be released sooner and their competition is already laid out.
There are low power graphics cards released every year that are a lot faster than their predecessors.
But it's always possible to gain lots of performance while using more power. So if they can double the performance by doubling the heat output of that low power card, they will. You can still buy those low power cards; it's up to you.
What if (then) they come up with an idea that gives us twice the performance of a GTX480 while only using 90 watts? They will use the same technology to make a card six times faster while consuming 270 watts. And you will still complain that they should fit this monster in the same 90W envelope.
true, but looking deeper will provide insight into why it is faster. people are curious about what your company makes! all we do is buy a piece of etched silicon and it just magically runs faster?
another potential reason for these discussions is to predict what these companies will do next. hardware geeks think its cool to be an insider for tech products. what needs to improve? performance? duh. how does one do that? use a dual rail self resetting domino bypass multiplexer.:p: it gets technical and nerds love it!
fyi, i buy hardware that i think is interesting and cool. performance matters but i'd really like to know what's going on in there, maybe a story about the design too. there are many failure and success stories behind chips. it's pretty cool.
I find that hard to believe. That midrange 2400 they tested is very close to the 980X, and that's very fast (although it was tested with HT, which the retail chip won't have, but without turbo). While I do think BD will be fast, I can't imagine Llano being close to the 980X, which is what you're indirectly saying.
Remember that the LGA1366 i7 950 will be replaced by a LGA1155 i7 2600, according to the article.
I don't think we'll see any nice price i7 920 successors for the LGA 2011.
http://images.anandtech.com/reviews/...ew/roadmap.png
Something's not right with that chart ^^^^^
These perf numbers suggest an i7 2600K will smash an i7 880..
@JF-AMD: would it be possible to get us access to the lead client side marketing guru ???? since xs is mostly about client side cpu's it would make more sense ... or could we get some info from time to time that is relayed by the client side
You obviously haven't read the preview, right?:yepp:
You're missing my point. I'm not the one who's missing LGA and PCIe, I'm talking about AMD. They're going to add it, so why not add the pins now?
AMD wouldn't upset many people by doing this since the most anticipated compatibility is gone anyway.
Exactly! If BD's first socket had been made the right way, it could continue to work with the next gen BDs, but now we've got AM3+, which is neither compatible enough with old boards nor modern enough to stay around for several generations.
Been there.. . :D http://www.xtremesystems.org/forums/...8&postcount=43
the next socket would most likely arrive with the 22nm node ....
Which would make AM3+ relatively short-lived, just because AMD won't make a completely new socket this time, and that's my point.
It's not the end of the world, upgrading CPU doesn't always work well for various reasons. You can put a X6 in a four year old 690G motherboard, but you probably won't..;)
But when will the details of Bulldozer's decode finally be revealed?
The design of those parts is fixed. Silicon samples are out there already. AMD is keeping quiet because BD silicon is not yet in a state to be sent out for review. They just taped the thing out last quarter... you can't expect them to be sending out A0 silicon for review..
It is typically 9 months to a year after first tape-out that products finalize and have enough volume to launch. My guess is that we will see the first leaked BD numbers around the March-ish timeframe of next year.
For example, Barcelona was demonstrated in December-ish of 2006 (taped out around August/September-ish of '06) and we saw first products around November-ish of '07...
I don't think it matters really, regardless of when it comes it is going to be fun to dissect it.
Both Intel & AMD need a new board???
I am sure they read this site, but they really don't comment on things in public.
I live in a different world. Server people need to know about technology 12-18 months before they deploy. There are decisions that they need to make about the technology in order to put their plans in place. And when they buy, they buy thousands of servers on a very regular cadence. Telling a server customer what is coming a year or two from now causes them to say "great, let me put that into my plan, I will start buying them 90-120 days after they are available because I have to run them through qual and testing first."
Giving a consumer benchmarks today can stall the market. It is a different world.
lol they are the apple of the silicon world.....though i do not care much for apple.....
Intel sent out about 500 samples of Sandy Bridge 8C in March-April and had no leaks, but about a month or so back another 1100-1200 samples were sent out along with documentation, and there were leaks.
A whole tray of 32nm 6-cores was stolen from Intel's R&D center just months ago, after which everyone had an inspection done.
The thing is that when small quantities are used, security is better and overall accountable, but when the quantity increases, security decreases. Also, ES CPUs fetch high prices on the market; people who want them are willing to pay heavily, not only in terms of money but also in system stability and performance, because of the ES nature of the CPU.
Most samples I get from Intel are given under NDA and under a strict one-piece-per-load policy; I can't have more than one CPU at a time. This makes leaks very hard in the early stages of things.
http://blogs.amd.com/work/2010/08/23...ge-4/#comments
AMD guy promised bulldozer review next week (see comments)
Quote:
when is round 2?
John Fruehe August 26, 2010
In legal review now, probably early next week.
Review? Far from it, just more answers to questions, akin to this: http://blogs.amd.com/work/2010/08/23...ons-round-one/
Info can get twisted so easily on the internet. And next AMD would be blamed for not giving "a full review as promised". :P
It is round 2. They are actually through legal and will be posted on monday. I post them in the "AMD" forum because I see it as information, not "news".