
Thread: AMD good news if its true

  1. #51
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    Quote Originally Posted by NickK
    If there was one thing I would say to AMD...

    tooo .... maaannnyyy.... soooccckkketttss....

    Joe public is going to get confused with branded cpu products that have S754, S939, S940, Sxxxx sockets - when is an opteron not an opteron? When it's got the wrong socket..
    Too many sockets? Compared to what... Intel?
    Everything extra is bad!

  2. #52
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Quote Originally Posted by saratoga
    The faster version of PCI-X is called PCI-E.
    No, I'm talking about PCI-X 2.0. It's for servers only, as its massive pin count makes it totally unfeasible for desktop use because of cost, but it is quite fast. You'll see PCI-X 266 and 533 out soon, IIRC. Backwards compatibility will ensure that the PCI-X standard is on servers for a long, loooong time.

    Quote Originally Posted by saratoga
    So you save 40 nanoseconds on a transfer that takes 100 microseconds. That'll gain you about 1/10 of a 3dmark. Real big deal there. On board memory controllers work because DRAM latency is so critical that the few tens of nanoseconds you save makes a huge difference. But saving nanoseconds on a protocol thats designed for a few orders of magnitude higher latency doesn't make sense. Not when devices can just buffer around the latency.
    Again, for desktops such a setup is stupid. But Socket F is not for desktops; it's for servers that often have fibre channel and multi-terabyte (sometimes even petabyte) RAID 5 hard disk setups, and the I/O controllers for these systems are quite complex already... How the hell are you gonna buffer 10 TB of data, buddy?

    Quote Originally Posted by saratoga
    You can already do this with a current Opterons though, and how many do you see with multiple HT links for IO? Not many (any?), because its very hard to saturate even one HT link, let alone a few of them.
    HT hubs are expensive though, take up lots of mobo real estate, and have to be supported with drivers and such for the end device (usually a Promise or Adaptec RAID controller). If they did decide to put PCIe on die/package for Socket F, it'd offer a seamless I/O increase for every CPU added to the system, which is quite nice IMO. Again, this is all if, if, if... I've already stated this is unlikely due to cost issues, but it's hardly a stupid idea.

    Quanticles: AMD has already stated they'll have quad core Opterons in 2007.

    Order: Sure, for servers there are going to be applications where there is no such thing as enough system bandwidth, and PCIe goes a long way towards addressing current limitations quite nicely IMO. I was referring to the consumer market space, though; PCIe really is overkill for that market IMO.

    ValkyrieLenneth: Uhh, it's the other way around there, bud. Nothing wrong with integrated from a performance or stability standpoint at all.

    Ugly n Grey: Apparently the K9 core is dual-core K8s; guess we'll have to wait till K10 for a new arch. from AMD, which from the slides I've seen sounds like it'll be a super K8 with very low voltage requirements and virtualization tech, as well as support for DRM (Trusted Computing).

    http://www.dvhardware.net/article5372.html

    Lithan: Well duh, if you run stuff past stock you're almost sure to get all sorts of weird issues. If all that happened with AMD chips at stock then sure, you'd have a reason to complain, but AMD does not cater to the OC'ing market and neither does Intel. There ain't no such thing as a consistent overclock from either company and there never has been; it always has been, is, and will be a crapshoot.

  3. #53
    Xtreme Member
    Join Date
    Jun 2003
    Location
    England, UK
    Posts
    162
    Quote Originally Posted by Ubermann
    Too many sockets? Compared to what... Intel?
    Lol - true. However, they have the same problem.
    Still using the ADA4400DAA6CD CCBWE 0517MPMW

  4. #54
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    Order: Sure, for servers there are going to be applications where there is no such thing as enough system bandwidth, and PCIe goes a long way towards addressing current limitations quite nicely IMO. I was referring to the consumer market space, though; PCIe really is overkill for that market IMO.
    Correct. As I stated, the normal consumer has no need for the extra bandwidth, nor should they be convinced that it's something worth paying extra money for. However, with fewer AGP boards out there and more makers turning to the PCI-E architecture, it is only a matter of time before the cost of a PCI-E board and its peripherals is less than that of the aging AGP.
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.

  5. #55
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    Quote Originally Posted by Ugly n Grey
    AMD has pumped more money into low-voltage computing in the past eight quarters than people are aware of. They have purchased intellectual property and have been working hard to cross-license technology to help them achieve this goal (more than two cores on a chip). I bet they are first to market with voltages in the sub-1V range and four cores with a DDR2 memory interface. It may be a K9 solution; it won't be as long as K10.
    Can't you already get ULV Pentium M's that run off less than 1v?

  6. #56
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    Can't you already get ULV Pentium M's that run off less than 1v?
    I'm not sure exactly, but even so, keep in mind that the Pentium M isn't a fully-featured processor and is based on a rather archaic architecture. (That said, I'm running one on my W1n as we speak.)
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.

  7. #57
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    http://216.239.59.104/search?q=cache...irements&hl=en

    Looks like you can.

    For me the most surprising thing is how much extra heat output you get for the faster 533-bus Dothans as opposed to the same-speed but lower-bussed versions, i.e. 760 vs 755 - so a less bandwidth-hungry processor would probably up the heat even more - but Intel have 65nm moving into mass production very soon.
    Last edited by onewingedangel; 07-23-2005 at 10:34 AM.

  8. #58
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    Interesting.
    That's an incredibly low speed, though. I think there are ARM chipsets that push more hertz at a similar voltage. I'm going to see if those Intel numbers have been reproduced by a 3rd party.
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.

  9. #59
    Banned
    Join Date
    Jul 2004
    Posts
    1,125
    Quote Originally Posted by turtle
    Socket M2 is 940 pins, which is the desktop derivative of the next socket, and yes, all that's basically new about it (from what i've gathered) is DDR2 support through that extra 1 pin. Who knows though, there might be other new stuff.

    Socket F is what the next-gen Opterons are, and they are 1207 pins. Like what was mentioned earlier, that's prolly what'll get the built-in pci-e. I doubt socket M2 could do it if it's only 940 pins.

    On a completely un-related note, but to round out the group, Socket S1 will replace Socket 754 in next-gen laptops. It has 638 pins.

    Well, supposedly anyway.
    M2 probably could do it at 940 pins. Realize that going from DDR --> DDR2 doesn't take many extra pins (20 for 2 channels?) AND that the current Socket 939 has very many essentially unused pins that either have GND or core VID on them, which the 940 gets by without. By comparing the 939 pinout to the 940 pinout, you'll see that 939 currently has at least 100 pins to spare.
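
    Rough back-of-the-envelope math on that, using the figures from the post above (they are thread estimates, not numbers from any pinout doc):
    Code:
    /* Toy pin budget for a DDR2-capable 940-pin Socket M2, using the rough
     * figures quoted above (thread estimates, not datasheet values). */
    #include <stdio.h>

    int main(void) {
        int spare_939_pins  = 100; /* GND/VID pins that 939 carries and 940 does without */
        int extra_940_pin   = 1;   /* 940 vs 939 */
        int ddr2_extra_pins = 20;  /* rough extra pins to go DDR -> DDR2 over two channels */

        printf("Pins left over for new features: ~%d\n",
               spare_939_pins + extra_940_pin - ddr2_extra_pins);
        return 0;
    }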

  10. #60
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    Quote Originally Posted by Order
    I'm going to see if those Intel numbers have been reproduced by a 3rd party.
    http://episteme.arstechnica.com/grou...r/697006514731

  11. #61
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    Quote Originally Posted by terrace215
    M2 probably could do it at 940 pins. Realize that going from DDR --> DDR2 doesn't take many extra pins (20 for 2 channels?) AND that the current Socket 939 has very many essentially unused pins that either have GND or core VID on them, which the 940 gets by without. By comparing the 939 pinout to the 940 pinout, you'll see that 939 currently has at least 100 pins to spare.
    Also, Socket S1 for laptops manages dual-channel DDR2 on 638 pins

    see:

    http://anandtech.com/cpuchipsets/sho...spx?i=2476&p=3
    &
    http://www.xbitlabs.com/news/cpu/pri...510043057.html

    So there's obviously a lot of room for more features on those extra pins for Socket M2, let alone Socket F.

  12. #62
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    That's a very cool utility.
    I'm going to test it out myself.
    Thanks a lot for the info!
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.

  13. #63
    Xtreme Member
    Join Date
    May 2005
    Location
    Sweden
    Posts
    166
    Quote Originally Posted by Ubermann
    Too many sockets? Compared to what... Intel?
    There are quite a few for AMD too, though.

    AMD - 754, 939, 940, M2, F
    Intel - 423, 478, 479 (mobile only), 775

    So unless Intel has more P4 sockets on its way, it looks as if AMD is in the lead. If we count the current ones, it's even unless you count 479 as a "real socket".
    [DFI nF4 Ultra-D @ 6/15 BIOS] // [Venice 3000+ @ 2.8GHz @ 1.472v] // [2x512MB SP BH-5 @ 2-2-2-8 200 MHz 2.63v... for the moment] // [Sapphire X800XL Ultimate] // [OCZ Powerstream 520W]
    Super PI 1M record: 29.969 // Suicide screen record: 3 GHz // LBBLE 0517 DPMW batch ~300 on air // Stable enough?
    TB 900 + Venice 2800 folding 24/7, TB 1200 folding whenever it's on, profile

  14. #64
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    IMO: The more sockets, the better. It means the makers are churning out more advanced wares. (Obviously this could go the other way and leave the consumer asking: "why the do I have to buy a whole new MB for an incremental upgrade?!")
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.

  15. #65
    Xtreme Enthusiast
    Join Date
    Feb 2004
    Location
    Tucson, Az, USA
    Posts
    978
    No, I'm talking about PCI-X 2.0. It's for servers only, as its massive pin count makes it totally unfeasible for desktop use because of cost, but it is quite fast. You'll see PCI-X 266 and 533 out soon, IIRC. Backwards compatibility will ensure that the PCI-X standard is on servers for a long, loooong time.
    It's not clear that these standards will be widely used, however. The complexity of parallel PCI makes it expensive to implement, and the performance versus PCI-E isn't there. Furthermore, PCI-E's uptake has been extremely fast, giving it volume.

    Though I agree people will be using PCI-X for a long time to come (much like PCI).

    Again, for desktops such a setup is stupid. But Socket F is not for desktops; it's for servers that often have fibre channel and multi-terabyte (sometimes even petabyte) RAID 5 hard disk setups, and the I/O controllers for these systems are quite complex already... How the hell are you gonna buffer 10 TB of data, buddy?
    Err, do you not understand what buffering means? It's the process where you store data until you have enough to fill a packet. You don't buffer the entire hard disk, just a few KB until the packet can be sent.
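
    A minimal sketch of the idea, assuming a made-up 4 KB packet size and a placeholder send_packet(); nothing here is a real driver API:
    Code:
    /* Toy write buffering: stage data in a small buffer and flush it one
     * packet at a time.  PACKET_SIZE and send_packet() are illustrative only. */
    #include <stdio.h>
    #include <string.h>

    #define PACKET_SIZE 4096                      /* buffer a few KB, not the whole disk */

    static unsigned char staging[PACKET_SIZE];
    static size_t fill;

    static void send_packet(const unsigned char *data, size_t len) {
        (void)data;
        printf("sending %zu-byte packet\n", len); /* stand-in for the real transfer */
    }

    static void buffered_write(const unsigned char *data, size_t len) {
        while (len > 0) {
            size_t room = PACKET_SIZE - fill;
            size_t take = len < room ? len : room;
            memcpy(staging + fill, data, take);
            fill += take; data += take; len -= take;
            if (fill == PACKET_SIZE) {            /* a full packet is ready: ship it */
                send_packet(staging, fill);
                fill = 0;
            }
        }
    }

    int main(void) {
        unsigned char chunk[1500] = {0};
        for (int i = 0; i < 10; i++)              /* 15 KB of writes -> 3 full packets sent */
            buffered_write(chunk, sizeof chunk);
        return 0;
    }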

    Anyway, you're completely missing the point: HT is a low-latency interconnect. PCI-E is not. This is a design choice. Bringing the controller on board doesn't change that. In fact, it makes virtually no difference.

    So what's the point? Saving 50 ns on a 5000 µs transfer? BFD.

    And I disagree about desktops. A GPU MIGHT be able to see the difference, someday. A RAID controller will not. You're already well into the millisecond range when you bring RAID into the picture. At that point it could take 1 ns to transfer or 1000. You'll never know the difference since the disk is so slow and the queues are so deep.
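
    To put rough numbers on that (the figures are the ones tossed around in this thread plus an assumed seek time, not measurements):
    Code:
    /* How much shaving ~50 ns matters next to a 5000 us transfer and a
     * millisecond-class disk seek.  Figures are rough thread numbers/guesses. */
    #include <stdio.h>

    int main(void) {
        double saved_ns    = 50.0;             /* latency saved by an on-die controller */
        double transfer_ns = 5000.0 * 1000.0;  /* a 5000 us transfer */
        double seek_ns     = 8.0e6;            /* ~8 ms seek, assumed order of magnitude */

        printf("vs. the transfer: %.4f%% saved\n", 100.0 * saved_ns / transfer_ns);
        printf("vs. one disk seek: %.6f%% saved\n", 100.0 * saved_ns / seek_ns);
        return 0;
    }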

    HT hubs are expensive though, take up lots of mobo real estate, and have to be supported with drivers and such for the end device (usually a Promise or Adaptec RAID controller). If they did decide to put PCIe on die/package for Socket F, it'd offer a seamless I/O increase for every CPU added to the system, which is quite nice IMO. Again, this is all if, if, if... I've already stated this is unlikely due to cost issues, but it's hardly a stupid idea.
    Since when do people building dual- and quad-socket systems care about 15 dollars for a PCI-E bridge? Conversely, they will care about the increase in CPU pin count, which IS a problem when you're already routing signal and power lines for multiple sockets. Nothing like throwing a few hundred GHz-range signal pins into an already crowded multi-socket board.

  16. #66
    Xtreme Cruncher
    Join Date
    Mar 2005
    Location
    venezuela caracas
    Posts
    6,460
    Quote Originally Posted by exscape
    There are quite a few for AMD too, though.

    AMD - 754, 939, 940, M2, F
    Intel - 423, 478, 479 (mobile only), 775

    So unless Intel has more P4 sockets on its way, it looks as if AMD is in the lead. If we count the current ones, it's even unless you count 479 as a "real socket".
    You are counting future sockets too, and 754 is for mobile on the AMD side, and AMD has sockets for server users as well.

    But on the other hand, how many chipsets has Intel gone through, making the older ones almost useless?
    Incoming new computer after 5 long years

    YOU want to FIGHT CANCER OR AIDS join us at WCG and help to have a better FUTURE

  17. #67
    Bulletproof
    Join Date
    Jan 2003
    Location
    Shun low, K?
    Posts
    2,553
    Quote Originally Posted by mesyn191
    Lithan: Well duh, if you run stuff past stock you're almost sure to get all sorts of weird issues. If all that happened with AMD chips at stock then sure, you'd have a reason to complain, but AMD does not cater to the OC'ing market and neither does Intel. There ain't no such thing as a consistent overclock from either company and there never has been; it always has been, is, and will be a crapshoot.
    The point is it happens. It's an issue with these chips that other chips don't have. When you always overclock, whether an issue happens at stock or only when overclocked doesn't much matter to you.
    Only the stupidest humans believe that the dogma of relative filth is a defense.

  18. #68
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Lithan: Sure it does. If it happens at stock then I'm not getting what I paid for; if it happens when I OC, it's cuz I modified the hardware knowing full well that it could cause problems.

    Saratoga: Yeah, I know what buffering is and the basic concept behind it, but still, for larger and larger volumes you will generally need a larger and larger buffer. If this wasn't true we'd all be doing fine with 8KB L1 caches and off-die L2s, or even no L2 at all, on our CPUs...

    And yes, you can expect single-disk latencies to be horrible, but across a large RAID5 array it's quite possible for the controller to be the bottleneck (same goes for the buffer too...).
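
    Rough math on when the controller's host link, rather than the disks, becomes the limit; the disk count, per-disk rate, and bus rate below are illustrative guesses, not specs:
    Code:
    /* Aggregate RAID-array streaming rate vs. the controller's host-link peak.
     * All figures are illustrative guesses for a biggish array of that era. */
    #include <stdio.h>

    int main(void) {
        int    disks        = 24;      /* a biggish RAID 5 array */
        double mb_per_disk  = 70.0;    /* sustained MB/s per spindle (guess) */
        double host_link_mb = 1066.0;  /* 64-bit PCI-X @ 133 MHz peak, MB/s */

        double array_mb = disks * mb_per_disk;
        printf("Array can stream ~%.0f MB/s; host link peaks at ~%.0f MB/s\n",
               array_mb, host_link_mb);
        printf("%s\n", array_mb > host_link_mb
               ? "-> the controller link, not the disks, is the bottleneck"
               : "-> the disks are still the bottleneck");
        return 0;
    }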

    Also, the CPU package pin costs were what I stated as being the original problem with putting PCIe on the CPU!! It does cost more than just $15 for the I/O controller, BTW (well, for a good one anyway; the Adaptec ones can cost an arm and a leg, and 2 or 3 of those on 1 mobo can drive the price sky high), and that doesn't include all the other issues that I mentioned like mobo space (very limited on server boards) and TCO issues like having to deal with 3rd-party drivers and such. Generally speaking, the simpler the better.

  19. #69
    Xtreme Member
    Join Date
    Jun 2005
    Location
    A..T..L
    Posts
    415
    OOOOooo

    *thinks*

    FX-whatever-is-out-then + dual-channel DDR2 + 7800GTX SLI running on its own PCI-E bus through the processor.

    God I will be broke.
    AMD X2 3800+
    DFi LANPARTY UT NF590 SLI-M2R/G
    2 x 1Gb Crucial PC8500 [Anniversary Heatspreaders ]
    Custom Watercooling on the way
    Thermalright XP-90 right now
    27" 1080p HDTV for monitor
    Quote Originally Posted by The Inq
    We expect the results to go officially live prior to Barcelona launch in September. µ

  20. #70
    Bulletproof
    Join Date
    Jan 2003
    Location
    Shun low, K?
    Posts
    2,553
    Quote Originally Posted by mesyn191
    Lithan: Sure it does. If it happens at stock then I'm not getting what I paid for; if it happens when I OC, it's cuz I modified the hardware knowing full well that it could cause problems.

    Why? If you will only ever run it in a situation where it has problems, why would a situation that you will never use it in mean anything to you? That's like saying that Intel Pentium Ds are passively cooled chips because if run at absolute zero (a theoretical absolute zero where matter still exists) they can be passively cooled. AMDs don't have a cold bug when stock cooled, obviously, but the point that was made was that an SLI controller onboard could worsen the cold bug that exists when cooled with phase change. This is true. You asked what bug he spoke of and I answered you, and now you say that bug doesn't matter? Well of course it doesn't matter to 99.9% of people. But it matters to people here because many of them want to use phase change on these chips. I'm not trying to badmouth the chip. I'm just explaining the issue he was talking about because you seemed not to know about it. You won't have a very strong argument in "Xtreme Systems" if you say that a CPU works flawlessly for everyone because it works flawlessly under stock cooling and at stock speeds.
    Only the stupidest humans believe that the dogma of relative filth is a defense.

  21. #71
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Lithan: "You won't have a very strong arguement in "xtreme systems" if you say that a cpu works flawlessly for everyone because it works flawlessly under stock cooling and at stock speeds."

    I didn't say that, and looking at what I posted I don't see how you could've come to the conclusion that this was what I meant....

    So, one more time...

    AMD only has to guarantee that their chips work at the advertised speeds they sell them at, which they do, and their chips do run perfectly fine without any memory controller errors at all, AT STOCK!!!!

    If you run outside of stock specs AMD offers no guarantee, hence while sub-zero cooling may matter to you (and me BTW, I have a VapoLS), AMD couldn't care less, and I wouldn't expect them to either since they state quite explicitly what the operating specs are for their chips. So I really can't understand how you can come off blaming AMD for any stability issues with their chips at sub-zero temps or while OC'ing; I sure don't and you shouldn't either...

  22. #72
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    Eh, if you want to include server sockets for Intel it'd be:

    socket 423, 478, 775, 479, 603, 604, and 700 (Itanium).

  23. #73
    Bulletproof
    Join Date
    Jan 2003
    Location
    Shun low, K?
    Posts
    2,553
    Sorry Mesyn, I assumed since you responded to me that you were the person I had posted in response to. My post mentioning these problems was in response to this exchange...

    Quote Originally Posted by ValkyrieLenneth
    The more integrated things are, the more unstable and buggy it is.... I wonder if AMD should fix the mem controller first before they try to add anything else into their CPU

    Quote Originally Posted by Order
    What's wrong with the memory controller other than it being an incredibly unfair advantage?

    Since all chips before the onboard mem controller worked fine @ sub-ambient temps (except a VERY rare exception), people consider the "cold bug" just that, a "bug". So I was telling Order what "bugs" valkyr was referring to. I wasn't just popping in to randomly badmouth the CPUs.
    Only the stupidest humans believe that the dogma of relative filth is a defense.

  24. #74
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    lake forest, CA
    Posts
    787
    OK, my bad

  25. #75
    Xtreme Member
    Join Date
    Nov 2004
    Location
    Fairfield, Connecticut
    Posts
    485
    Gotcha. I didn't mean to sound standoffish.
    Q6600 @ 3.4GHz - 1.4v, 4GB PC26400, Asus P5B-Deluxe WIFI, BFG 8800GTX OC2, (4x)Seagate 15k.5 SAS RAID0 on Adaptec 4805 w/128MB cache, approx 1.3TB SATA300, Gateway 24" LCD.
