Originally Posted by NickK
Too many sockets?? Compared to what.. Intel?
Everything extra is bad!
Originally Posted by saratoga
No, I'm talking about PCI-X 2.0. It's for servers only; its massive pin count makes it totally unfeasible for desktop use because of cost, but it is quite fast. You'll see PCI-X 266 and 533 out soon, IIRC. Backwards compatibility will ensure that the PCI-X standard is on servers for a long, long time.
Originally Posted by saratoga
Again, for desktops such a setup is stupid. But Socket F is not for desktops, it's for servers that can often have Fibre Channel and multi-terabyte (even petabyte sometimes) RAID 5 hard disk setups; the I/O controllers for these systems are quite complex already... How the hell are you gonna buffer 10 TB of data, buddy?
Originally Posted by saratoga
HT hubs are expensive though, take up lots of mobo real estate, and have to be supported with drivers and such for the end device (usually a Promise or Adaptec RAID controller). If they did decide to put PCIe on die/package for Socket F it'd offer a seamless I/O increase for every CPU added to the system, which is quite nice IMO. Again, this is all if, if, if... I've already stated this is unlikely due to cost issues, but it's hardly a stupid idea.
Quanticles: AMD has already stated they'll have quad core Opterons in 2007.
Order: Sure, for servers there are going to be applications where there is no such thing as enough system bandwidth, and PCIe goes a long way towards addressing current limitations quite nicely IMO. I was referring to the consumer market, though; PCIe really is overkill for that market IMO.
ValkyrieLenneth: Uhh, it's the other way around there, bud. Nothing wrong with integrated from a performance or stability standpoint at all.
Ugly n Grey: Apparently the K9 core is dual-core K8s; guess we'll have to wait 'til K10 for a new arch. from AMD, which from the slides I've seen sounds like it'll be a super K8 with very low voltage requirements, with virtualization tech as well as support for DRM (Trusted Computing).
http://www.dvhardware.net/article5372.html
Lithan: Well duh, if you run stuff past stock you're almost sure to get all sorts of weird issues. If all that happened with AMD chips at stock then sure, you'd have a reason to complain, but AMD does not cater to the OC'ing market and neither does Intel. There ain't no such thing as a consistent overclock from either company and there never has been; it always has been, is, and will be a crap shoot.
Originally Posted by Ubermann
Lol - true. However they have the same problem.
Still using the ADA4400DAA6CD CCBWE 0517MPMW
Quote:
Order: Sure, for servers there are going to be applications where there is no such thing as enough system bandwidth, and PCIe goes a long way towards addressing current limitations quite nicely IMO. I was referring to the consumer market, though; PCIe really is overkill for that market IMO.

Correct. As I stated, the normal consumer has no need for the extra bandwidth, nor should they be convinced that it is something they should pay extra money for. However, with fewer AGP boards out there and more makers turning to the PCI-E architecture, it is only a matter of time before the cost of a PCI-E board and its peripherals is less than the aging AGP.
Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.
Originally Posted by Ugly n Grey
Can't you already get ULV Pentium M's that run off less than 1v?
Quote:
Can't you already get ULV Pentium M's that run off less than 1v?

I'm not sure exactly, but even so, keep in mind that the Pentium M isn't a fully-featured processor and is based on a rather archaic architecture. (That said, I'm running one on my W1n as we speak.)
http://216.239.59.104/search?q=cache...irements&hl=en
looks like you can
For me the most surprising thing is how much extra heat output you get for the faster 533-bus Dothans as opposed to the same-speed but lower-bussed versions, i.e. 760 vs 755 - so a less bandwidth-hungry processor would probably up the heat even more - but Intel have 65nm moving into mass production very soon.
Last edited by onewingedangel; 07-23-2005 at 10:34 AM.
Interesting.
That's an incredibly low speed, though. I think there are ARM chips that push more hertz at a similar voltage. I'm going to see if those Intel numbers have been reproduced by a 3rd party.
Originally Posted by turtle
M2 probably could do it at 940 pins. Realize that going from DDR --> DDR2 doesn't take many extra pins (20 for 2 channels?) AND that the current Socket 939 has very many essentially unused pins that either have GND or core VID on them, which the 940 gets by without. By comparing the 939 pinout to the 940 pinout, you'll see that 939 currently has at least 100 pins to spare.
Originally Posted by Order
http://episteme.arstechnica.com/grou...r/697006514731
Originally Posted by terrace215
Also, Socket S1 for laptops manages dual-channel DDR2 on 638 pins.
see:
http://anandtech.com/cpuchipsets/sho...spx?i=2476&p=3
&
http://www.xbitlabs.com/news/cpu/pri...510043057.html
So there's obviously a lot of room for more features on those extra pins for Socket M2, let alone Socket F.
That's a very cool utility.
I'm going to test it out myself.
Thanks a lot for the info!
Originally Posted by Ubermann
There are quite a few for AMD too, though.
AMD - 754, 939, 940, M2, F
Intel - 423, 478, 479 (mobile only), 775
So unless Intel has more P4 sockets on its way, it looks as if AMD is in the lead. If we count only the current ones, it's even, unless you count 479 as a "real socket".
[DFI nF4 Ultra-D @ 6/15 BIOS] // [Venice 3000+ @ 2.8GHz @ 1.472v] // [2x512MB SP BH-5 @ 2-2-2-8 200 MHz 2.63v... for the moment] // [Sapphire X800XL Ultimate] // [OCZ Powerstream 520W]
Super PI 1M record: 29.969 // Suicide screen record: 3 GHz // LBBLE 0517 DPMW batch ~300 on air // Stable enough?
TB 900 + Venice 2800 folding 24/7, TB 1200 folding whenever it's on, profile
IMO: The more sockets, the better. It means the makers are churning out more advanced wares. (Obviously this could go the other way and leave the consumer asking: "why do I have to buy a whole new MB for an incremental upgrade?!")
Quote:
No, I'm talking about PCI-X 2.0. It's for servers only; its massive pin count makes it totally unfeasible for desktop use because of cost, but it is quite fast. You'll see PCI-X 266 and 533 out soon, IIRC. Backwards compatibility will ensure that the PCI-X standard is on servers for a long, long time.

It's not clear that these standards will be widely used, however. The complexity of parallel PCI makes it expensive to implement, and the performance versus PCI-E isn't there. Furthermore, PCI-E's uptake has been extremely fast, giving it volume.
Though I agree people will be using PCI-X for a long time to come (much like PCI).
Quote:
Again, for desktops such a setup is stupid. But Socket F is not for desktops, it's for servers that can often have Fibre Channel and multi-terabyte (even petabyte sometimes) RAID 5 hard disk setups; the I/O controllers for these systems are quite complex already... How the hell are you gonna buffer 10 TB of data, buddy?

Err, do you not understand what buffering means? It's the process where you store data until you have enough to fill a packet. You don't buffer the entire hard disk, just a few KB until the packet can be sent.
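The buffering idea described above - accumulate a few KB, send when a packet's worth is ready - can be sketched in a few lines. This is a toy illustration only; the 4 KB packet size and the callback interface are my own assumptions, not anything from a real controller:

```python
# Toy sketch of packet buffering: accumulate bytes until a full
# packet's worth is available, then flush it as one unit.
PACKET_SIZE = 4096  # assumed packet payload size in bytes

class PacketBuffer:
    def __init__(self, send):
        self.send = send           # callback that transmits one packet
        self.pending = bytearray()

    def write(self, data: bytes):
        self.pending.extend(data)
        # Flush complete packets; the remainder stays buffered.
        while len(self.pending) >= PACKET_SIZE:
            self.send(bytes(self.pending[:PACKET_SIZE]))
            del self.pending[:PACKET_SIZE]

sent = []
buf = PacketBuffer(sent.append)
buf.write(b"x" * 10000)  # 10 KB in -> two full packets out, 1808 B held
```

The point of the sketch is that only about one packet's worth of data ever sits in the buffer, regardless of how large the disk behind it is.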
Anyway, you're completely missing the point: HT is a low-latency interconnect. PCI-E is not. This is a design choice. Bringing the controller on board doesn't change that. In fact, it makes virtually no difference.
So what's the point? Save 50ns on a 5000us transfer? BFD.
And I disagree about desktops. A GPU MIGHT be able to see the difference, someday. A RAID controller will not. You're already well into the millisecond range when you bring RAID into the picture. At that point it could take 1 ns to transfer or 1000. You'll never know the difference since the disk is so slow and the queues are so deep.
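The 50 ns vs. 5000 us comparison above works out like this (the two figures are the ones quoted in the post, treated as illustrative):

```python
# How much of a ~5 ms (5000 us) disk transfer would a 50 ns
# interconnect saving actually shave off?
disk_latency_ns = 5000 * 1000  # 5000 us expressed in ns
interconnect_saving_ns = 50

fraction_saved = interconnect_saving_ns / disk_latency_ns
print(f"{fraction_saved:.6%}")  # 0.001000% of the total: lost in the noise
```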
Quote:
HT hubs are expensive though, take up lots of mobo real estate, and have to be supported with drivers and such for the end device (usually a Promise or Adaptec RAID controller). If they did decide to put PCIe on die/package for Socket F it'd offer a seamless I/O increase for every CPU added to the system, which is quite nice IMO. Again, this is all if, if, if... I've already stated this is unlikely due to cost issues, but it's hardly a stupid idea.

Since when do people building dual- and quad-socket systems care about 15 dollars for a PCI-E bridge? Conversely, they will care about the increase in CPU pin count, which IS a problem when you're already routing signal and power lines for multiple sockets. Nothing like throwing a few hundred GHz-range signal pins into an already crowded multi-socket board.
Originally Posted by exscape
You are counting future sockets too, 754 is for mobile in the AMD lineup, and AMD has sockets for server users.
But on the other side, how many chipsets have Intel made, making the older ones almost useless?
Incoming new computer after 5 long years
YOU want to FIGHT CANCER OR AIDS join us at WCG and help to have a better FUTURE
Originally Posted by mesyn191
The point is it happens. It's an issue with these chips that other chips don't have. When you always overclock, whether an issue happens at stock or only when overclocked doesn't much matter to you.
Only the stupidest humans believe that the dogma of relative filth is a defense.
Lithan: Sure it does. If it happens at stock then I'm not getting what I paid for; if it happens when I OC, it's cuz I modified the hardware knowing full well that it could cause problems.
Saratoga: Yeah, I know what buffering is and the basic concept behind it, but still, for larger and larger volumes you will generally need a larger and larger buffer. If this wasn't true we'd all be doing fine with 8KB L1 caches and off-die L2s, or even no L2 at all, on our CPUs...
And yes, you can expect single-disk latencies to be horrible, but across a large RAID 5 array it's quite possible for the controller to be the bottleneck (same goes for the buffer too...).
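A quick back-of-the-envelope way to see how a wide array can out-run its controller is to multiply a per-disk streaming rate by the member count. All figures here are illustrative assumptions, not measurements:

```python
# Rough sketch: aggregate sequential read rate of a RAID 5 array
# vs. the peak rate of one 64-bit/133 MHz PCI-X slot.
disks = 16               # assumed array width
mb_per_disk = 80         # assumed sequential MB/s per drive
pci_x_133_mb = 8 * 133   # 64-bit bus * 133 MHz = 1064 MB/s peak

array_mb = disks * mb_per_disk  # reads stripe across all members
print(f"array {array_mb} MB/s vs. slot {pci_x_133_mb} MB/s")
# With enough spindles, the slot (or the controller behind it),
# not the disks, becomes the bottleneck.
```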
Also, the CPU package pin costs were what I stated as being the original problem with putting PCIe on the CPU!! It does cost more than just $15 for the I/O controller BTW (well, for a good one anyway; the Adaptec ones can cost an arm and a leg, and 2 or 3 of those on 1 mobo can drive the price sky high), and that doesn't include all the other issues that I mentioned, like mobo space (very limited on server boards) and TCO issues like having to deal with 3rd-party drivers and such. Generally speaking, the simpler the better.
OOOOooo
*thinks*
FX-whateverisoutthen + dual-channel DDR2 + 7800GTX SLI running on its own PCI-e bus through the processor.
God I will be broke.
AMD X2 3800+
DFi LANPARTY UT NF590 SLI-M2R/G
2 x 1Gb Crucial PC8500 [Anniversary Heatspreaders ]
Custom Watercooling on the way
Thermalright XP-90 right now
27" 1080p HDTV for monitor
Originally Posted by The Inq
Originally Posted by mesyn191
Why? If you will only ever run it in a situation where it has problems, why would a situation that you will never use it in mean anything to you? That's like saying that Intel Pentium Ds are passively cooled chips because if run at absolute zero (a theoretical absolute zero where matter still exists) they can be passively cooled. AMDs don't have a cold bug when stock-cooled, obviously, but the point that was made was that an SLI controller onboard could worsen the cold bug that exists when cooled with phase change. This is true. You asked what bug he spoke of and I answered you, and now you say that bug doesn't matter? Well, of course it doesn't matter to 99.9% of people. But it matters to people here because many of them want to use phase change on these chips. I'm not trying to badmouth the chip. I'm just explaining the issue he was talking about because you seemed not to know about it. You won't have a very strong argument in "xtreme systems" if you say that a CPU works flawlessly for everyone because it works flawlessly under stock cooling and at stock speeds.
Lithan: "You won't have a very strong argument in "xtreme systems" if you say that a CPU works flawlessly for everyone because it works flawlessly under stock cooling and at stock speeds."
I didn't say that, and looking at what I posted I don't see how you could've come to the conclusion that this was what I meant....
So, one more time...
AMD only has to guarantee that their chips work at the advertised speeds they sell them at, which they do, and their chips do run perfectly fine without any memory controller errors at all, AT STOCK!!!!
If you run outside of stock specs AMD offers no guarantee; hence, while sub-zero cooling may matter to you (and me BTW, I have a VapoLS), AMD could give a crap less and I wouldn't expect them to either, since they state quite explicitly what the operating specs are for their chips. So I really can't understand how you can come off blaming AMD for any stability issues with their chips at sub-zero temps or while OC'ing; I sure don't, and you shouldn't either...
Eh, if you want to include server sockets for Intel it'd be:
socket 423, 478, 775, 479, 603, 604, and 700 (Itanium).
Sorry Mesyn, I assumed since you responded to me that you were the person I had posted in response to. My post mentioning these problems was in response to this exchange...
Originally Posted by ValkyrieLenneth
Originally Posted by Order
Since all chips before the onboard mem controller worked fine @ sub-ambient temps (except a VERY rare exception), people consider the "cold bug" just that, a "bug". So I was telling Order what "bugs" Valkyr was referring to. I wasn't just popping in to randomly badmouth the CPUs.
OK, my bad
Gotcha. I didn't mean to sound standoffish.