I hope the B3 stepping proves to be more overclockable. If all they did was fix the TLB errata, and didn't do anything to resolve the horrible stability issues with 4 memory modules or Vista 64-bit booting...
Work in progress... Koolance PC4-1026SL Case, ASUS M3A79-T Deluxe BIOS 0602 - Video, CPU, Memory - Water Cooled
Phenom II X4 940 @ 3.6GHz with Koolance CPU-340 (o/c limited by 4 memory sticks maybe?)
4x mushkin 2gb 991593 in OCZ FlexXLC Water Blocks at 5-5-5-12-2T 2.2V DDR2-960
3x ATI Radeon HD 3870 @ 960/1250 @ 1.675V, custom capacitor mods, with Koolance VID-387, Catalyst 8.10
2x WD VelociRaptor SATA 10k 300GB RAID 0 on SB750 (200MB/s throughput!)
Silverstone Zeus 1200W (single rail mode) Power Supply
Dell 3007WFP-HC, LG DVD, Razer Keyboard + Mouse, Vista Ultimate 64-Bit
FSB 240, CPU 1.4V 15x, Mem 2.2V, NB 1.3V 10x
System Showcase - Koolance PC4-1026SL Case, MSI K9A2 Platinum with 1.6B3 BIOS
Northbridge, Southbridge, VRM, CPU, Video, Memory - Water Cooled (picture)
Phenom 9850 CPU @ 2888MHz with Koolance CPU-340
4x mushkin 2gb 991593 in OCZ FlexXLC Water Blocks at 5-5-5-12-2T 2.1V DDR2-960 (Sandra XII SP2c Mem Bandwidth 11.68GB/s)
4x ATI Radeon HD 3870 @ 975/1250 @ 1.675V, custom capacitor mods, with Koolance VID-387, Catalyst 8.6 HF1 | 16156 3dMark06 | screenshot
2x Fujitsu MAX3147RC SAS 15k 146GB RAID 0 (156MB/s throughput!)
SilverStone OP1200 (90A Single Rail) Power Supply
Dell 3007WFP-HC, LG DVD, Razer Keyboard + Mouse, Vista Ultimate 64-Bit
FSB 240, CPU 1.45V, Mem 2.1V, NB 1.3V, HT 1.3V
Thanks d412k5t412
Problems... it's probably just you Vista guys and the poor driver pickup
Vista needs to pick up the driver properly or it will throw that error, unless you haven't rebooted yet.
MagnumMan, your issues are probably just with Vista 64-bit and not the CPU itself. A known issue. Heck, Vista has a known issue installing with 4GB RAM: it gives a BSoD on a perfectly stable system. As for clocking, don't expect it to improve; I doubt it will, and that was never the point of the stepping anyway (the main reason is to fix the errata in hardware).
Here's one BSoD Vista is known to give if you have more than 3GB RAM upon installing and the fix: http://support.microsoft.com/kb/929777/en-us
Anyway, me thinks I'm picking up another 5000 BE now for personal use in the 780G, since the cheapest prices compared are as follows:
AMD 5000+ BE 2.6G £50
AMD BE-2350 2.1G +£7
Phenom 9500 2.2G +£48
Intel E8200 2.66G +£63 (not yet available)
Intel E6750 2.67G +£74
This isn't free, yes, UK is always more expensive
Quite a cool purchase methinks; 3.3G should be doable at stock volts, and at today's exchange rate from the US, that sets me back $101 compared to $251 for an E6750.
That's similar to what I had asked, but earlier they said yes, then no (they can't, because it's a limitation), and now yes again. Hence why I mentioned it earlier and then withdrew it.
Ah well, you never know. I was using a TT120 before, as well as a Zalman 9700, when I had this error with the 9500/9600BE; it didn't make any difference. And c'mon, stay with the dark side... you know you wanna do it ;P. Only thing I can recommend is to not use the stock cooler this time. A Nirvana 120 premium, TRUE, or something of that caliber should easily handle the heat.
I believe anyone sincere, honest and technologically literate can understand this much. But I've known these facts from internal sources since June, so I have no reason to conjecture other scenarios. The IMC-CPU delta, in volts and speeds, is a major issue for K10h, as is the actual bugginess of it. The A1 step had a major bug where it did not boot because the IMC failed to initialize, and IIRC B0 had the same bug. The IMC is also a power/temp hog and a major limit on final clock speeds (very low yields at higher speeds; pushing higher caused much inconsistent bugginess). You can drop CPU speed/volts from stock to as low as 1000MHz/1V and still won't get as big a power saving as you get by dropping IMC volts alone. On the other hand, the multiple-CPU-death thing that seems to revolve around the NB/IMC gives fairly hard evidence of a hotspot on the die, probably causing the TLB erratum and part of the reason the Phenoms were released with the IMC/L3 clocked so low.
Some onliners think AMD K10h is really crap. Most that I've seen do not know what they are talking about when they say this. None of them could even build an 8086; they have no room to talk until they understand what they're talking about, and I mean practical experience, not dabbling as a toy amateur or sentimental chat. That's ignoring the obvious haywire trolls, whom I skip over reading. Onliners need to keep a level head; they seem to be too wound up in speed numbers. In actual fact, the AMD K10h design is far ahead of its time. Go ask a processor engineer, don't take my word for it, no really, go and ask one. This is where AMD lost out: it is a tomorrow's thing, and the fabrication node and even software are just not optimized and ready for such an architecture yet. For any design, you essentially need good speeds.
AMD K8 always started at low speeds: early Q4 2003 130nm Sledgehammer, followed in Q2 2004 by Clawhammer. Being the most expensive line, their top speeds were the FX51 2.2G, FX53 2.4G and FX55 2.6G at high TDPs. Later, by mid 2005, the 90nm San Diego top bin was the FX57 2.8G. By mid-Q2 2005 came the release of the Manchester and Toledo X2 cores at 90nm, their top bin being the FX60 2.6G... by then Intel already had the Pentium 670 at 3.8G retail, a lineup that already overclocked over 8GHz if cooled properly [1]. Then in mid-Q2 2006 you saw 90nm Orleans and Windsor for AM2 with DDR2 support, and by then Intel had released their dual-core Pentium D 960 3.6G. Intel also had many Tejas CPU samples running stable at 5.8G on air at over 230W TDP using a 40-stage pipeline [1]. AMD's top bin was the FX-62 2.8G by 2006, with the 90nm 6000+ offering 3.0G by Q1 2007 and the 6400+ being the highest speed bin at 3.2G by late 2007. Brisbane 65nm K8 topped out at retail in Q2 2008 with the 3.0G 5800+.
[1] Meaning, the only thing holding Intel CPUs back was TDP. With a lower node and TDP tweaks for much cooler operation, they could have retailed 6GHz air CPUs since 2003.
AMD has always started at around 2GHz, whilst Intel started at around 2.8GHz.
Why should this change now?
- As you can see, for 5 years now AMD has never had high speeds with their CPU design and material choices. Always sub-2.4G with a new arch, taking 2 years to reach only 3.2G.
- Intel on the other hand has always had high speeds, up to 8GHz possible with their design and material choices, limited only by current leakage and TDP.
Difference?
Netburst was fast at low performance per clock.
K8 was slow with fast performance per clock [compared to Netburst].
Overall Desktop: K8 won.
Why? Netburst was poor per clock.
Weak links?
If AMD improved their arch to achieve high MHz, they'd surpass Intel by a long margin.
The day Intel improves their arch to achieve better per-clock perf., they will annihilate AMD by a long margin.
5 years on, what happened?
Intel patched their weakest link, clock-per-clock perf., with Core. Core 2 did not overclock like Netburst; it is a hotter core per MHz compared to single-core Netburst, but per clock it is much faster. My Core 2s needed more than 1.4V for 3.7G stable, but my Pentium only needed 1.2V for 3.8G stable. By the G0 step, the only thing holding retail 4.0GHz C2Q/D back was TDP, not yields, nothing else. 9 months on, 45nm Core 2.5 releases and improves this by the ~35% a node shrink should bring by nature, letting you get higher frequencies with a tad more perf, coming closer to Netburst speeds again, but still far off, especially at the higher end. My Netburst 2.8G chip did 1.2V 3.8G air stable; Core 2 could not do that, and neither can Penryn yet. They have been trying to reach those 4G retail speeds to this day; 45nm Penryn still can't. They are once again stuck at high TDP with their top bins. They can go 3.5G dual core quite easily, but no more than that, and no more than 3.4G for quad while staying under 150W TDP, and that's with their highest bins; the Core 2 arch is too hot for it even at 45nm. Unless they can do something magical, native Nehalem will be a worse oc, hotter and higher TDP for lower clocks than Penryn, simply due to the design. It depends on them, and that is why they will try to push it to 32nm and not 45nm.
For Intel to get 5GHz retail is no achievement; it is an old Netburst replay. They are still playing catch-up to Netburst speeds even at less than half the 130nm node. Intel Penryn is still Core 2; I would name it Core 2.5 in performance, as it adds little more than a die shrink + cache + instructions + tweaks would. It was there to make way for Nehalem and to achieve the higher clock speeds they struggled with at 65nm. It's the per-clock perf. that matters.
For years AMD built up slowly to reach only 3.2G retail... this is where they lost out. Whilst AMD did not improve their per-clock perf., Intel did, and now Intel has both strengths combined to leave AMD in the dust, since AMD could never compete with Intel on oc/speeds anyway. 2.0-3.2G retail has always been AMD's forte, while 2.8-3.8G retail has been Intel's, plus 8G overclockability. AMD can in no way beat Intel unless: a) they improve oc, or b) they improve per-clock perf. drastically. Both of these will now require a major change in the core design and materials, not tidbits.
AMD's IC material/design choice is not for high speeds/oc's; it never has been, but Intel's always has. K8 could never even come close to P4 speeds. Nothing yet comes close to P4 speeds, and things are far off the 5.8G Tejas chips Intel had for a while in its labs. Do I care about speeds? No I don't; only those inexperienced with high speeds, or not understanding them, might. Give me a 10GHz chip 200% faster clock-for-clock than Penryn or Nehalem, and I'd still feel the same as I do on a 1.3GHz K7 doing common desktop usage. Our workplace 16-core 3G X7350 systems would destroy SkullTrail in Cinebench 10 by far, even if you oc'd the latter. Exactly, so what. I expect this of a Netburst replacement by nature, but clock speed doesn't matter at all to me; per-clock performance does. Intel has not advanced since Core 2, 2 years on; the limitations of Core 2 still exist, and it's looking like what AMD did since 2003 again: becoming complacent. AMD, on the other hand, took a step back from even K8 oc with K10h.

Sure, I think users are forgetting AMD has never been Intel; they could never oc high. Nearly all the oc's were only 200-800MHz, and those were considered very good. Phenom does not change this, and when users start to act weird, well, they need to come back to reality. The best oc'ers I've seen on air are the Opty 165 and the 5000+ BE by AMD, and even those, after months to mature and tweak, only reached 3.3-3.5G on air from 2.6G and 1.8G-2.8G. A clock difference of +700-1000MHz. Heck, at launch many of the K8s up until the Optys could not do even 200MHz oc's. At launch, most Phenoms can do +300MHz oc's, and many 9500/9600BEs have done +400-700MHz, with a native quad. Again, is this a poor oc compared to AMD chips since 2003? No it's not for a 1st step; compared to their own chips, it's very good. It is only poor if you compare it to the competition, Core 2 and Penryn, and that's because of per-clock perf. People want Phenom to be Core 2, but it isn't. It never will be, in my opinion, for long.
The path AMD chooses is always low power/higher per-clock perf., as opposed to Intel, which focuses on highest clocks/OK per-clock perf. With Phenom, AMD did not get high per-clock perf. compared to Core 2. With Phenom, the AMD chip also lost the little oc ability it had, at least under cold.
Merit has to go where it's due. At 65nm, their chip was too high in current leakage to get the required speeds... they are the best chip tweakers at every node, and they can surely get much out of these with time too, but for the next 9 months they won't be able to match even their 90nm and 65nm K8 speeds. This is a bad sign: it means the chip is actually limited in architecture and/or materials, so it won't scale in speed with further die shrinks unless they implement drastic, favorable changes. Intel, on the other hand, will not have to worry about getting higher and higher clock speeds as fabrication shrinks; it will be second nature.
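The voltage and leakage argument above follows from the standard CMOS dynamic-power relation P ≈ C·V²·f. A minimal sketch of why a Vcore bump costs so much power; the effective capacitance used here is a made-up placeholder, not a real chip figure:

```python
# Dynamic CMOS switching power scales roughly as P = C * V^2 * f.
# c_eff below is an illustrative placeholder, not a chip spec.

def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic switching power in watts."""
    return c_eff * volts ** 2 * freq_hz

base = dynamic_power(1e-9, 1.2, 3.8e9)    # e.g. a 1.2V 3.8GHz part
pushed = dynamic_power(1e-9, 1.4, 3.8e9)  # same clock pushed to 1.4V

# Raising Vcore from 1.2V to 1.4V alone adds ~36% dynamic power,
# before any extra leakage current is counted.
print(round(pushed / base, 2))  # -> 1.36
```

The quadratic voltage term is why the 1.2V and 1.4V examples earlier in the thread differ so much in heat at similar clocks, and why leakage-limited chips hit a TDP wall.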
The native approach was always risky, but it bought AMD time and much experience to resolve everything for the future and to be far better experienced and prepared for 45nm. Intel never did this for the exact same reason: they knew it was the best approach, but they dismissed it as impossible because their best engineers couldn't do it at 65nm. They were not even able to get a B3 2.0GHz native Kentsfield out; the problems in their samples were so severe (heat, very low clocks and poor yields with high wafer defect density) that it was a no-go. AMD pulled it off, props. Intel even skipped a native quad at 45nm over it; that's how much of a problem they had with it, and now they're trying to jump to 32nm for the same reason, in case you hadn't noticed why. So far, we have Core 2 and Core 2.5 out, with K8 and K9 out there competing. Nehalem, regardless of perf., has the right to be named Core 4, whilst Shanghai might very well merit the name K10, finally.
Do I think you'll get Netburst oc's with it? No
What am I looking for in it? Good clock-per-clock perf. (more than 10%), major IMC perf. and core-to-core perf. improvements, no major bugs, retail minimum of 2.8G at launch and 3.2G within 6 months, oc's to 3.4G tops on air, good yields, availability, a timely release, no cold bug until at least -100C, good affordable prices, not like the FX57 or QX series = win.
Do I think 45nm K10h will level Penryn perf.? No, not by what I've seen.
I also want Nehalem not to have the problems I'm pretty sure it will have if released at 45nm for desktop, mainly due to heat, cost, price and not being suited to the software code available. I'm hoping they compete well with each other, because I'm sick of stupid 500% markup prices which make them very wealthy at our loss.
I think your chip did 1.24V 2.7G stable, right? Mine only did 1.26V 2.656G stable and 1.28V 2.7G stable maximum before it started. A GBT PSU costs half again more than my 5000+ BE here, yup.

Quote:
And as far as all that can be tweaked, that's why I enjoy mine so much as well; granted, *knock on wood*, I haven't had the processor death problem yet. Granted, I've also been able to OC at lower voltages than you have as far as I can tell, which probably helps. I kinda wish those ODIN PSUs didn't cost so much.
I like a balanced life, the middle path. But I've been programming and oc'ing since before; I've been a member here 3 times since March 2002, hence I remember this place, online IT, and its happenings very well over time. Same with many other places, like the once-upon-a-time, now-ruined THG.

Quote:
I think I'm actually about the same way when it comes to showing interest in computing in reality. Now IF I'm talking to another person that's interested, then yeah, I'll talk about it. But same here, it's by no means my life. Though I've actually recently started to play games on my computer again, which is something I hadn't done much of since I got the Phenom build going. I gotta laugh, I've been the same way about roleplaying gaming for a long time as well. I enjoy RPGing, but I don't talk about it outside of the game, except on the rare occasion it comes up in conversation, and even then it's about 50/50 whether I'll talk about it. Bad part is, I've had friends on the other hand that do nothing but talk about gaming 2/3 of the time; that's what kinda turned me off to being such a dork about it.
Should do 900MHz easily. Even the 2900 does this and it's much hotter. What's the full card name or MFG P/N? I might pick one up; sounds interesting.

Quote:
And no, I haven't tried to OC the Toxic edition yet, but let me tell ya, the Vapor-X cooler is freakin' amazing. The card idles at around 40-45C; at that temp the fan rarely runs, if at all. At full load in Half-Life 2: Episode Two with high settings it stays below 60C, and even then the fan isn't going above 50%, and that's without ATITool; if I use ATITool it runs even cooler. That's at the factory default 800MHz core, so it should easily be able to go much faster, but I'm a noob when it comes to OCing vid cards, so I'm not sure how high I could get. Should hit 825 easy, since that's what the Atomic, which is the same card, was clocked at; the only difference between the two is the name on the card and the package contents. You don't get the $30 etailer thing or the metal briefcase, you get a 6ft HDMI cable instead of a 9ft, and you get Valve's Black Box free (can't remember if the Atomic was Orange Box or Black Box).
KTE
i tried the P0J bios, and it blue screens.
the BSOD Error is :
Stop 0x0000007B (0xFFFFFA60005AF9D0, 0xFFFFFFFFC0000034, 0x0000000000000000, 0x0000000000000000)
now i see it ref. to
BSOD with 0x0000007B in crcdisk.sys
and
After you install a device or update a driver for a device, Windows Vista may not start
i am trying a few things (based on various posts on this stop error) but it seems that the bios changes things enough on the disk hardware that vista freaks out.... more updates later.
UPDATE :
so, assuming it's a driver issue, i'm looking up drivers for the promise RAID.
MSI has driver version :
Promise FastTrak TX4650/2650 : Microsoft Windows miniport driver 1.1.1030.7 -- NOT DIGITALLY SIGNED
PROMISE has driver version :
Promise FastTrak TX4650/2650 : Microsoft Windows miniport driver 1.1.0.4 -- MAY BE SIGNED, CAN'T TELL 100% (device manager changes its mind, depending on when i look at it)
and, then found out Windows Destroy my computer automated service (READ : Windows Update) has/wants to (depends on how you install) :
Windows Promise FastTrak TX4310 (tm) Controller (x64) : DriverVer=02/15/2007, 2.06.1.326 <--which to me doesn't look like it should even work at all..... but it's installed.
I would love to just use the MSI driver, but not signed means no x64 use... unless they let you turn that driver-signing enforcement back off...
That BSoD means a corrupt HDD, an IRQ or I/O port address conflict between the two controllers, or a corrupt/unreadable/unrecognized HDD driver.
Have you first tried booting off the disc and running Checkdisk to verify your HDDs are OK?
Tried checking for conflict in IRQ?
Tried Last Known Good Config?
I don't know what's digitally signed for Vista there, as I didn't install the driver on Vista, sorry. You're going to have to troubleshoot it through.
BTW, in case you guys didn't know, there is a new mATX AMD chipset coming dubbed 790GX and it has a faster IGP than what 780G has. Also the 780G and SB700 each have a 1.5W idle TDP.
And some new CPU's coming out (now-ish) are:
Brisbane:
A64 4050e (G2/45W) 2100MHz
A64 4450e (G2/45W) 2300MHz
A64 4850e (G2/45W) 2500MHz
X2 4600+ EE (G2/45W) 2400MHz
X2 5600+ (G2/76W, maybe a 65W variant too) 2900MHz
X2 5800+ (G2/89W) 3000MHz
Sempron (Brisbane):
2100 (G1/65W) 1800MHz
2100 (G2/65W) 1800MHz
2200 (G2/65W) 2200MHz
it's the same HDD/info that i am running now. if it was corrupt, i wouldn't be typing this.. lol
safe mode = same bsod
last known = same bsod
checkdisk = no issues
I HAVE to have said driver installed in some way, because of the RAID-0 i am running.
sure, i could shutoff the raid and go normal, but I like my raptor RAID-0, if i could just throw everything on the SB600, would be nice and easy methinks.
That doesn't matter; a corrupt HDD doesn't mean you can't use it. It fails slowly. Run the HDD stability test from the MFG.
Driver, HDD or controller is bad.

Quote:
safe mode = same bsod
last known = same bsod
checkdisk = no issues

Have you tried it?

Quote:
I HAVE to have said driver installed in some way, because of the RAID-0 i am running.
sure, i could shutoff the raid and go normal, but I like my raptor RAID-0, if i could just throw everything on the SB600, would be nice and easy methinks.
true, sorry bout my quick comment, ran tests from WD, 100%
i am thinking driver incompats with new bios version or something, going to try to switch drivers.
ohh yea, no driver in this case = no raid-0, vista x32/x64 won't even install, and none of those 3 drivers show as signed in an SP1 integrated disc.
to be honest, i have no clue how raptors perform in non-raid, i've been raid for years. have never looked back till now.
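For a rough sense of what the striped arrays in this thread gain over a single drive, ideal RAID-0 sequential throughput is just the single-drive rate times the number of drives, scaled by controller overhead. A minimal sketch, assuming an illustrative ~110MB/s per VelociRaptor (a guess, not a measured spec):

```python
# Rough RAID-0 sequential-throughput estimate.
# single_mb_s is an illustrative guess, not a drive datasheet value.

def raid0_estimate(single_mb_s, n_drives, efficiency=0.9):
    """Ideal striped throughput scaled by an assumed controller efficiency."""
    return single_mb_s * n_drives * efficiency

# Two ~110MB/s VelociRaptors on a decent controller:
print(round(raid0_estimate(110, 2)))  # -> 198
```

That lands close to the 200MB/s reported for the VelociRaptor pair earlier in the thread; note RAID-0 mainly helps sequential transfers, not access times.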
@KTE yeah, I'm around THG under the same username. There are still some good people on the Forums, if you can ignore the Intel Trolls. Like Technology Coordinator just depends on the day you catch him, usually he's pretty level headed though, there are a few others too, but there are plenty of Intel trolls there. Not to mention the quality of their reviews at THG has been sliding quickly down the crapper, seeing that with their cpu cooler roundup, and almost every recent Phenom review.
I say k10 athlon is just fine, I think a lot of the sites are doin something wrong, because I get numbers better than theirs 90% of the time once I get the hardware figured out. But AMD will outright tell you, the base athlon architecture was never meant for high clock speeds, it was meant for high ipc, which I think is why they're moving up to a longer pipeline with the next arch after deneb.
I honestly think Intel is gonna start having higher TDPs on their processors with Nehalem. AMD's been keeping fairly competitive on power usage with a processor that has an IMC/NB on die, which is actually pretty impressive. I don't think Intel is gonna hit the speeds that people think they will, though; I also don't think Intel is gonna release a retail processor clocked higher than 3.6GHz.
http://www.newegg.com/Product/Produc...82E16814102732
That is the video card, costs a bit more than the regular 3870s, but the package contents value makes up for it. 6' HDMI cable, full version of 3dmark06, PowerDVD7 and PowerDVD Suite, and Valve Black Box, plus the other regular goodies. Hell a 6ft hdmi cable costs about $35 at the Walmart where I work at the very least.
And with the meltdown problems that some are having with the E8400s, I'm thinking OCing any higher than the TDP of the processor allows is gonna start going the way of the dodo soon. People are frying their Penryns because they can't measure the thermals properly, or don't realize that even though the die is smaller, it still generates heat; it generates more heat per square mm, in fact, than a Conroe. And without cooling that can handle quick heat dissipation well... you get the picture.
Now on another thing, I'm curious what's going to set the FX82 apart from the 9850BE. Just wondering if the FXs aren't going to be cherry-picked BEs that had the ability to clock the NB/IMC higher at stock voltages. That's about the only way I think they could get more performance out of them to justify the FX label. And does anyone know if the FX91-92 that are Socket 1207+ are gonna be 4x4 compatible? Would be interesting if they could get the performance up on 'em, to compete with the super-high-end Skulltrail.
And yeah, in 1.1b3 I was running 2.7ghz by 2ghz, 1.262v (1.248v actual with C&Q disabled) core VID, 1.1v NB/IMC VID, was the same for 2.6ghz core, and was doing 2.4ghz NB/IMC at 1.250v VID. Currently running 2.6ghz core, at 1.250VID under bios P0J, 1.240v actual, 1.04v under C&Q mode, voltages are set to auto. So yeah, Part of the Phenom stability equation I'm really starting to believe has to do with bios maturity.
Last edited by Mathos; 03-16-2008 at 03:17 PM.
AMD Phenom X4 9850BE
ZeroTherm Nirvana 120 cpu cooler
MSI K9A2 Platinum Bios P.0J
4GB Mushkin (2x2) DDR2 1066 (PC8500) CL5-5-5-15 2v
Sapphire Toxic edition Radeon HD3870
2 x 320GB Seagate Barracuda 7200.10 in Raid 0
80GB Western Digital Caviar IDE For driver and file backups.
Raidmax RX-700SS 700w psu (possible weak link in OC equation)
Sorry, I can't do anything since I can't test those drivers, OS and setup yet.
Ask MSI for new drivers here, they should reply quick: http://ocss.msi.com.tw/index.php?mod=questions&dop=list
I don't read that forum at all, not since late 2006, and I haven't read INQ/FUD for a while now either. Worthless. And the same goes for many others. I can't be bothered with trash; there's enough in life to handle without more stupid geeks playing God over the net. One forum is enough for me to post in since I don't have the time, with too much to do in life. My profession is in the major sciences, and no scientist I know out of many all over the world has even the amount of time I do to post from work; they have enough to do and learn away from even family life.
True that; the design is only meant for high perf.-per-watt at low clock speeds, sub-3G, in opposition to what Intel intended. But they didn't achieve high perf.-per-watt with K10h, while Intel has more of it. With the reviews, same here. The biggest muckup I've seen so far in reviews was with Phenom; they really needed a lot more time to understand and experiment before giving us reviews of a product they didn't understand. Usually I get more perf. at the exact same settings than all the reviews, except those using x86_64 where I use x86. Then you have SP1, which is even worse if used.

Originally Posted by Mathos
I say k10 athlon is just fine, I think a lot of the sites are doin something wrong, because I get numbers better than theirs 90% of the time once I get the hardware figured out. But AMD will outright tell you, the base athlon architecture was never meant for high clock speeds, it was meant for high ipc, which I think is why they're moving up to a longer pipeline with the next arch after deneb.
I don't read much into people's guesses, but a 2.8G native at 45nm should be at 150W TDP and no less. They have heat problems with native designs, especially if they put a 35-55W IMC with 3 DRAM controllers in there. At 32nm, a 35% reduction is standard, but it won't be achieved since they're moving from MCM to native; with a 136-150W 3.2G 45nm MCM, let's say native+IMC gives 3G at 130W TDP. I doubt you'll see more than a 3.4G native quad release for a while yet without 150W+ TDP, but speed doesn't matter, perf.-per-MHz does. If they can achieve more than 1.1x the Penryn perf. per clock at equal settings, they've improved, and more than 1.3x will be very good (and not just in one specialized, unrealistic bench).

Originally Posted by Mathos
I honestly think Intel is gonna start having higher TDP's on their processors with Nehalem. AMD's been keeping fairly competitive on power usage, with a processor that has an IMC/nb on die, which is actually pretty impressive. I don't think Intel is gonna hit the speeds that people think they will though, I also don't think Intel is gonna release a retail processor clocked at higher than 3.6ghz.
With K10h, I doubt Shanghai is the one we'll want but that to me will be Budapest.
Ah yeah, hard to find here and same price here as a Q6600+5000+ BE+3800 EE OR a 9500+9600 BE.
For me as a non-gamer, there's no point. My 3850 and 2600 give me good enough perf if I want to dabble in a quick game at all-medium settings with 2x or 4x AA. I will pick up another 3870 soon though; I was thinking earlier about how four of them would do, just to test the MSI board and Phenom. Depends if I get another Phenom first. Has anyone tried quad CrossFire on the MSI board, that you know of? How's the performance compared to 2x 3870X2?
Not sure. Just a higher-bin part, it seems, with a higher official MHz. Most people don't oc, so that would appeal to them, as well as to those who only oc 100-300MHz. If it hits 3G with sub-1.4V stable (and I doubt the 9850 will hit 2.85G the way we hit 2.6G stable with the BEs fine), then people will flock to upgrade from previous Phenoms, X2s and general AMD platforms. Here's what's releasing in a week or two (I would say April 8th for stock), including the Phenom 9050, Kuma coming quicker than expected and a 5600+ Black Edition: http://www.digitimes.com/mobos/a20080317PD209.html

Originally Posted by Mathos
Now on another thing, I'm curious whats going to set the FX82 apart from the 9850BE. Just wondering if the FX's aren't going to be cherry picked BE's that had the ability to clock the NB/IMC higher at stock voltages. Thats about the only way I think they could get more performance out of them to justify the FX label.
They're not releasing. 2P systems are postponed till Deneb at the earliest. 1P FX are AM2+. The roadmap is getting pushed back little by little, and it seems the FX might not even launch if they push things back by 3 more months.

Originally Posted by Mathos
And does anyone know if the fx91-92 that are socket 1207+ are gonna be 4x4 compatible?
Similar volts to mine: 2691MHz at 1.225VID/1.248V for perfect stability and 1863 NB at 1.038VID. More than that required a voltage increase. That was two days before I cleared CMOS and it died.

Originally Posted by Mathos
And yeah, in 1.1b3 I was running 2.7ghz by 2ghz, 1.262v (1.248v actual with C&Q disabled) core VID, 1.1v NB/IMC VID, was the same for 2.6ghz core, and was doing 2.4ghz NB/IMC at 1.250v VID. Currently running 2.6ghz core, at 1.250VID under bios P0J, 1.240v actual, 1.04v under C&Q mode, voltages are set to auto. So yeah, part of the Phenom stability equation I'm really starting to believe has to do with bios maturity.
I asked the distributor, and they're saying that this CPU is not known to die or be faulty and that it's extremely rare. They are having a problem believing it's my 2nd one dying in a row and are insisting I check with a new MB. The 9850, they say, is 5th April at the earliest.
My second K9A2 caught fire and took my 3600X2 with it. All other components are happy in my BadAxe2, so I know nothing was at fault but the board. I sent it back for a refund, seriously considering the M3A-MVP Deluxe Wifi. Still need something with 4 PCI-E...
Interesting read, will be hard to stay away from shopping till shanghai releases.
I'm surprised at the triple-core's TDP; 65W is nice. I remember seeing a power consumption comparison in a preview showing only marginal differences between triple and quad cores (idle). Maybe they did not have a triple core and used the BIOS or the Windows boot flag to run a triple-core system.
I think it's 65W ACP
You are right, it's ACP nowadays. Do you think it's 95W TDP vs. 65W ACP?
Here is the review I talked about:
http://www.fudzilla.com/index.php?op...1&limitstart=1
To stay at that level of reliable weblinks.
New 8 socket opteron server from HP
http://www.theinquirer.net/gb/inquir...-amd-quad-core
EDIT: I'm curious how this machine, equipped with eight future 45nm quad-core Opterons, will compare against a quad-socket hexacore Nehalem system.
Last edited by justapost; 03-17-2008 at 10:48 AM.
How did it catch fire? How bad? Where? What was the setup, settings and apps you were running when it did? First person with a K9A2 there; I've had two and run over 200W TDP without a problem on a 330W PSU.
It will still be the highest load you as a customer can get from the hottest chip in the bin running TPC.
Usually, 68W ACP is 79W TDP for an AMD CPU of the same step.
That was the B2 step there vs. the new B3 step.
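Taking the 68W-ACP ≈ 79W-TDP example above as a single data point, a rule-of-thumb converter can be sketched; AMD publishes no fixed conversion between ACP and TDP, so this ratio is inferred purely for illustration:

```python
# Rule-of-thumb ACP -> TDP conversion inferred from the 68W/79W example
# in this thread. AMD defines no fixed ratio; this is illustrative only.
ACP_TO_TDP = 79 / 68  # about 1.16x

def acp_to_tdp(acp_watts):
    """Scale an ACP figure to an estimated same-step TDP."""
    return acp_watts * ACP_TO_TDP

print(round(acp_to_tdp(65)))  # -> 76
```

So on this guess, the "65W ACP" triple-core discussed above would sit somewhere in the mid-70W range for TDP rather than 95W.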
Yep, saw that. HP Proliant DL875 G5, 24 cores. A 2.5G Barcelona would do nicely in that, the 2360 SE, if they ever release them.

Originally Posted by justapost
New 8 socket opteron server from HP
http://www.theinquirer.net/gb/inquir...-amd-quad-core
EDIT: I'm curious how this machine equipped with eight future 45nm quad core opterons will compare against a quad socket hexacore nehalem system.
It crisped the VRM. It puffed some smoke from under the heatsink, but before that, at startup, it was flickering my G15 REALLY badly, and then it smelled horrible. The chip may be salvageable, as I've done that before, but I've been burned twice now by boards that failed within like two days. Mithril has work to be doing, and this isn't helping it get there.
Hmm... never heard of that apart from with faulty boards, especially with the low-power CPU you had running. I've never used extra fan cooling either, just to test the longevity of the VRMs, with no problem. Bad luck, I suppose.
Some Relevant News Around:-
Here's the new Athlon X2 4850e 45W I plan to get for my 780G build. AMD has dropped the "64" moniker from the Athlon name for the new X2s... an initial review shows it overclocked to ~3100MHz stable: http://www.hardspell.com/doc/hard/69064.htm
New AMD FAQ explaining 45W and more: http://www.amd.com/us-en/Processors/...00.html#117799
Speak of the devil:
Q: What desktop processor lines will transition to the new model numbering conventions? Will AMD Athlon™ FX processors be included?
A: AMD plans to apply the new model numbering conventions only to new AMD Athlon™ X2 dual-core, AMD Athlon™ or AMD Sempron™ processors.
AMD plans to apply slightly different model numbering conventions to AMD Phenom™ FX processors that are similar to the same incremental progression as the previous AMD Athlon FX solutions and not to follow the new model number conventions. Consumers can expect to see AMD Phenom FX processor solutions following this trend as product enhancements continue.
Upcoming AMD Phenom FX quad-core processors may have model numbers for processors for AM2+ socket platforms as well as processors for dual-socket 1207+ platforms.
Check this out: IBM recently made photonic inter-core switches for SOI CPUs, already tested within a fully functional CPU. They bring a major reduction in energy consumption and are tiny, at nano scale (a capacity of 2,000 within 1mm², IIRC), while offering 40GB/s of bandwidth per switch between the cores: http://domino.research.ibm.com/comm/...ics.index.html
And carbon nanotubes have been found to be faster interconnects than traditional copper for CPUs: http://www.newelectronics.co.uk/arti...rm-copper.aspx
PC Power Efficiency Testing:-
I found no better resource than my home nation's Department of Energy for information on this: they set up the ENERGY STAR rating and the 80 PLUS SMPS ratings for exactly this reason, and nearly all the major semiconductor and electrical firms work in alliance with them to set these accepted criteria, with Intel and VIA among those providing guidelines. Since I'm about to test system power (got a new Phenom and a GBT Odin), I read up on everything needed and thought I should share these bits from the most official bodies.
Here's some info on how the ENERGY STAR rating is worked out and applied to our PCs, what it looks for (very briefly), and its conditions. It tells us how the major national energy laboratories, electrical firms, government advisories and sub-segments measure and categorize these things, and what they consider accurate methods for energy measurement.
Aim: Their ultimate goal is a standby power of 1W for all electrical devices (President Bush ordered this initiative for all government-sector computing equipment in 2001).
Intel's comment on PC power efficiency metrics => Energy Star Computer Program Discussion Guide: Version 4.0, Tier 2.0 dated November 9, 2007, United States Environmental Protection Agency, United States Department of Energy:
Originally Posted by How we should test PC efficiency
UUT = Unit Under Test
Originally Posted by Applications we should use to test power efficiency
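The ENERGY STAR approach boils down to weighting the power draw of the unit under test (UUT) in each state by how long a typical machine spends there, giving an annual energy figure. A minimal sketch of that style of calculation follows; the duty-cycle weights (55% off, 5% sleep, 40% idle) and the wattages are assumptions for illustration, not the official program figures:

```python
# Illustrative TEC-style (typical energy consumption) calculation in the
# spirit of the ENERGY STAR method discussed above. The duty-cycle weights
# are assumptions for this sketch, not the official values.
HOURS_PER_YEAR = 8760

def annual_kwh(p_off: float, p_sleep: float, p_idle: float,
               weights=(0.55, 0.05, 0.40)) -> float:
    """Weighted annual energy in kWh for a unit under test (UUT)."""
    w_off, w_sleep, w_idle = weights
    avg_watts = p_off * w_off + p_sleep * w_sleep + p_idle * w_idle
    return round(avg_watts * HOURS_PER_YEAR / 1000, 1)

# Hypothetical desktop: 3W off, 4W sleep, 90W idle
print(annual_kwh(3, 4, 90))  # -> 331.6 kWh/year
```

The key point is that idle power dominates the result under these weights, which is why the rating focuses so heavily on idle and standby draw.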
Also, a friend sent me a link today, 10 months too late, confirming exactly what I told you long back, around August '07 - funnily enough by Fuad with Intel engineers. Intel is saying K10h's native design + IMC was the way to go all along, which they could not physically make and get working at 65nm or they would have: http://www.fudzilla.com/index.php?op...=6353&Itemid=1
Originally Posted by System Power Consumption Efficiency Testing Directives
Once again, it was very clear and obvious a long while back, especially if you have trustworthy knowledge from informed contacts. A 2GHz native K10 at 65nm was doubted immensely, as literally impossible, at the Institute of Electrical and Electronics Engineers annual conferences since 2006; that's why there was huge interest and hype around K10h developments among most in this field. It was mission impossible for even the world's best engineers. If you know any manufacturing-level employees at Intel personally, ask them and they'll tell you how difficult it is to manufacture and why. And Nehalem isn't just an upgraded mutation of Core 2 either, AFAIK.
Tried that linpack+specview thing:
linpack+specview 242W DC
prime95+3dmark06 241W DC
linpack 235W DC
prime95 200-210W DC
And finally the winner:
linpack+3dmark06 260W DC
Everything measured with the Odin, on a P5K-E + QX6850, so it's a little OT.
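The measurements above can be tabulated to confirm which load combination stresses the system hardest:

```python
# DC power draws measured above, by workload combination.
readings = {
    "linpack+specview": 242,
    "prime95+3dmark06": 241,
    "linpack": 235,
    "prime95": 210,  # upper end of the 200-210W range reported
    "linpack+3dmark06": 260,
}

# Pick the heaviest combination.
worst = max(readings, key=readings.get)
print(f"{worst}: {readings[worst]}W DC")  # linpack+3dmark06 wins
```

Pairing Linpack (CPU-bound) with 3DMark06 (GPU-heavy) draws more than any CPU-only combination, which matches the intuition that a combined CPU+GPU load is the worst case for total system power.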
Hmmm, 3 new BIOSes have been posted to the MSI FTP site, all within the last couple of days.
A7376AMS.P0G
A7376AMS.141
A7376ACI.103
Has anyone tried any of them yet?
AMD Phenom X4 9850BE
ZeroTherm Nirvana 120 cpu cooler
MSI K9A2 Platinum Bios P.0J
4GB Mushkin (2x2) DDR2 1066 (PC8500) CL5-5-5-15 2v
Sapphire Toxic edition Radeon HD3870
2 x 320GB Seagate Barracuda 7200.10 in Raid 0
80GB Western Digital Caviar IDE For driver and file backups.
Raidmax RX-700SS 700w psu (possible weak link in OC equation)
A7376AMS.P0G
A7376ACI.103
A7376AMS.141
Those are the ones most recently posted on the FTP site; some were put up today, and one on the 16th.
Thanks for the link.
Has anyone tried the 141?
AMD Athlon 64 x2 6000+ AM2 CCB8F 0740 FPMW
MSI K9A2 Platinum v1.2
4 x 1GB Corsair XMS2 (rev 5.1 and rev 5.2)
Zalman CNPS9700LED
PowerColor ATi HD 3870 512MB DDR4 256bit PCIe 2.0
Corsair TX750 Watts (12v @ 60A on a single rail)
80 GB WD SATA I (primary)
250 GB WD SATA II (backup)
AMD7376ACI.103 includes a PCIe frequency setting and TLB disable. It does not include northbridge multiplier settings - that's the only thing missing now. I have a rig on stock coolers running: FSB 230, CPU 2530 (11x), Mem 920 at 4-5-5-15-2T, Vcpu 1.376, Vnb 1.275 (seems to go to 1.25), Vht 1.225 (seems to go to 1.25), Vmem 2.2, 4x 1GB OCZ PC2-6400 FlexXLC CL3, PCIe 115, HT mult 9x = 2070MHz. It just passed 20 hours of Prime95 with affinity set and 900MB per instance. BIOS ACI.103 seems pretty good, just missing the northbridge multiplier...
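The clocks in a setup like that all derive from the reference (FSB) clock multiplied by each domain's multiplier, which is why the northbridge multiplier setting matters. A quick sketch using the figures above (memory is left out because its divider is platform-specific on AM2+):

```python
# Deriving the clock domains above from the reference (FSB) clock.
# Memory is omitted: AM2+ memory dividers are derived differently
# and depend on the platform.
FSB_MHZ = 230

def clock_mhz(multiplier: int) -> int:
    """Clock for a domain at the given multiplier of the reference clock."""
    return FSB_MHZ * multiplier

print(clock_mhz(11))  # CPU: 2530 MHz
print(clock_mhz(9))   # HT link: 2070 MHz
```

This is why raising the FSB alone pushes every domain at once; without a northbridge multiplier control, the NB clock climbs with the FSB and can become the limiting factor.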
Agreed, neither P0G nor 141 boot for me (BSOD still, new install).
Running stock on ACI.103 right now...
According to the TLB-disable program (v1.04), the TLB fix is set to disabled in the BIOS, yet the MSRs are still set - most likely SP1 in Vista, though.
Lost 200 KB/s on WinRAR from 133, WTF? Why does it keep getting slower? LOL
Edit: after a reboot, it's only lost 100 KB/s - went from ~1200 to ~1100.
Just an update, and a happy one... this is AutoTune on a fresh install; working on getting back up and running. And a warning to others as well...
My issue was with the heatpipe - it hit PCI-E0, which made the sinks lift and resulted in crashing. Fixed, as you can see...
official 1.4v is out
click me here
so far so good...