
View Full Version : ASRock shows full Z77 lineup and monster X79 Extreme11 board



Micutzu
03-05-2012, 08:24 AM
ASRock came to CeBIT with a bunch of new boards, and besides the full Z77 lineup we also caught on camera a few shots of the new monster X79 Extreme11 - 16 CPU phases, 14 SATA ports thanks to an LSI controller, and two PCI-E 3.0 PLX switch chips with 48 lanes each. The X79 Champion is another fresh board.

Full story: Asrock Z77 and X79 Extreme 11 @ lab501 (http://lab501.ro/stiri/cebit-2012-asrock-z77-si-un-nou-monstru-x79)

http://lab501.ro/wp-content/uploads/2012/03/X79-Extreme-11-1.jpg

http://lab501.ro/wp-content/uploads/2012/03/X79-Extreme-11-2.jpg

http://lab501.ro/wp-content/uploads/2012/03/X79-Extreme11-4.jpg

http://lab501.ro/wp-content/uploads/2012/03/X79-Champion.jpg

Mats
03-05-2012, 08:42 AM
Thanks for posting!

This looks interesting (http://lab501.ro/wp-content/uploads/2012/03/Z77E-ITX.jpg):
http://lab501.ro/wp-content/uploads/2012/03/Z77E-ITX-580x309.jpg (http://lab501.ro/wp-content/uploads/2012/03/Z77E-ITX.jpg)

BigRigDriver
03-05-2012, 08:43 AM
Impressive!

K404
03-05-2012, 09:13 AM
If they perform as well as they look........

:D

ASR are beginning to take this stuff seriously, by the looks of things :)

Callsign_Vega
03-05-2012, 10:47 AM
Nice, this Extreme11 might end up in my quad-Kepler build. Could anyone tell me whether ASRock makes decent quality hardware and BIOSes? I wonder if it would be risky going with something like this versus the Asus R4E.

Also, I wonder whether those PLX chips that make quad-16x PCI-E 3.0 possible add latency, and whether the result would be any faster than the native quad-8x PCI-E 3.0 found on the Asus R4E when using four GPUs.

Kain665
03-05-2012, 10:55 AM
PCI-E switches don't add any real PCI-E lanes. The slots may be 16x electrically, but the PCI-E controller still only really has 40 lanes.

Not to mention that LSI controller probably takes four lanes itself, so you're probably looking at 32 lanes or so available for expansion cards.

Callsign_Vega
03-05-2012, 11:03 AM
PCI-E switches don't add any real PCI-E lanes. The slots may be 16x electrically, but the PCI-E controller still only really has 40 lanes.

Not to mention that LSI controller probably takes four lanes itself, so you're probably looking at 32 lanes or so available for expansion cards.

It allows the four GPUs to talk to each other over a faster connection. Yes, talking to the CPU will still be limited by the standard limit, but that is not where the bottleneck is. That is the whole reason things like the NF200 and PLX PEX 8747 were created.

On another note found this:

http://pr.zwame.pt/2012/03/asrock-presents-brand-new-technology-kits-at-cebit-2012-asrock-presents-brand-new-technology-kits-at-cebit-2012/

The first and only 1155 board that can do Quad-SLI? The ASRock Z77 Extreme9.

It says it's the first that can do 8x/8x/8x/8x on Z77, using an onboard PLX PEX 8747 PCI-E 3.0 bridge chip. This board might even be better than the Extreme11, as the CPU should be faster for gaming versus X79/SB-E.

http://www.overclock.net/image/id/1939336/width/600/height/460 (http://www.overclock.net/image/id/1939336/width/600/height/460)

Cyras
03-05-2012, 11:06 AM
Leaked Specifications

http://saved.im/mtg3mzu1zjex/z77series.jpg

BababooeyHTJ
03-05-2012, 11:10 AM
That Z77 Extreme6 looks like a really nice motherboard. Isn't there a midrange full ATX motherboard like the Extreme4?

xdan
03-05-2012, 11:33 AM
About Extreme11:

PC enthusiasts will also be amazed by the two onboard PLX PEX 8747 bridges. The PLX bridge offers solid PCIe 3.0 lanes for PCIe devices and is optimized to support high performance graphics. The two charming bridges make the X79 Extreme11 the world's 1st motherboard supporting 4-Way SLI and CrossFireX at PCIe Gen3 x16/x16/x16/x16 mode!

Now about ASRock BIOSes: they all have problems running memory at high frequencies with tight latencies...
For example, running 2133 at CL 6/7-9-7... But the high-end models have no problems at all; for example the Fatal1ty Z68 Professional, see here: http://lab501.ro/placi-de-baza/asrock-fatal1ty-z68-professional-gen3-review/5
Otherwise, even on a $99 board - the P67 Pro3 - you can run SB at 5GHz easily and even above.
And there was some success with LN2 on socket 2011 - the Extreme9:
http://hwbot.org/submission/2245122_rsannino_cpu_frequency_core_i7_3960x_5670.61_mhz
Although the guys at hardwarecanucks couldn't get it working, because they had a C0 rev. CPU, not the new C1/C2:
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/51675-asrock-x79-extreme9-motherboard-review-15.html

Also something really nice about X79 Champion:

Luxurious features extend to the onboard voltage measurement points. V-Probe™ is a set of 7 detection points laid out on the Fatal1ty X79 Champion, allowing overclockers to easily and quickly get accurate voltage readings via a multitester.
From Callsign_Vega's link.

Kain665
03-05-2012, 12:06 PM
It allows the four GPUs to talk to each other over a faster connection. Yes, talking to the CPU will still be limited by the standard limit, but that is not where the bottleneck is. That is the whole reason things like the NF200 and PLX PEX 8747 were created.

*snip*



Cards need to talk to the PCI-E controller first.

Callsign_Vega
06-03-2012, 07:36 PM
Thanks for contacting ASRock.
Official launch date of X79 Extreme11 would be late July.
Yes, sure, it will be showcased at COMPUTEX 2012.

Best Regards,
ASRock Inc.

Woohoo! Just in case someone comes out with Eyefinity-6-type 7970s, or I go with four 680 Classifieds, I'll get the Extreme11.

Generic user #2
06-03-2012, 09:56 PM
pretty sure you can't go 4x GTX 680s ;)

Callsign_Vega
06-03-2012, 10:20 PM
pretty sure you can't go 4x GTX 680s ;)

Considering I've already had 4-way 680 SLI since launch, I'd have to disagree. ;)

Callsign_Vega
06-04-2012, 03:37 PM
Production form:

http://cdn.overclock.net/4/47/47aae31c_X79-Extreme11.jpeg

zanzabar
06-04-2012, 05:06 PM
It allows the four GPUs to talk to each other over a faster connection. Yes, talking to the CPU will still be limited by the standard limit, but that is not where the bottleneck is. That is the whole reason things like the NF200 and PLX PEX 8747 were created.


the plx and nforce (other than the 100) do not allow cards to talk to each other through the routing chip. all that they do is allow the controller to assign bandwidth. the only use for the chips was back in the day when chipsets did not support splitting channels to run 2 cards (see the asus blitz), or for NV to change 8x pci-e 2 into 16x pci-e 1.1 with the 100; the 200 fixed the problem NV had with drivers not working properly with cards in 8x mode (now fixed). the cards now only talk to each other over the cross channels (the bridge at the top of the card), and the pci-e is used to stream data and send commands. there is an exception with the lucid controller, since it has lucid multi-gpu and does not get the benefit of bridges, but has the benefit of emulating hardware. it can do things like multiplex data (send the same data to all cards at once at a full 16x) and will let the cards talk to each other through the controller, but that chip is not a pci-e controller; it has features that make it a lot more than a chipset and more like a device that controls the gpus, making them detect as one card.

the only reason any controller is used now is to convert 16x pci-e 3 to 32x pci-e 2, or to stay within NV licensing for quad sli. if you look at any board that has matching versions with and without a controller, 8x/8x/8x/8x vs 16x/16x/16x/16x, the one without a controller has won

Callsign_Vega
06-04-2012, 09:53 PM
the plx and nforce (other than the 100) do not allow cards to talk to each other through the routing chip. *snip*

This is entirely wrong on many levels. I've worked with extreme 4-way GPU setups for many generations from both manufacturers and have talked to AMD technicians. Virtually all of the data involved in the cards swapping frames in AFR goes over the PCI-E bus. The CrossFire and SLI bridges are much lower bandwidth and are used primarily for synchronization data, not bulk bandwidth transfer.

If you've read the white paper on the PLX chip, you'd see that the chip allows exactly that: the GPUs communicating with each other at a higher bandwidth. That's the whole point. Granted, some switching latency will be introduced, but when you are running a lot of GPUs that drop down the slot speeds, it really matters. You think all motherboard manufacturers put PLX chips on their 3-4 way GPU boards for the hell of it? It's because if they didn't, those boards would run like utter crap. Take the Z77 Sniper 3 that Sin reviewed, for example. By your logic, if the board ran at 4x/4x/4x/4x in 4-way SLI it would be faster than using the PLX chip to get 8x/8x/8x/8x. That couldn't be further from the truth. If you read his benchmarks, even with slower PCI-E 2.0 GPUs the highest performance came from using the PLX chip to attain 16x/16x; the native 8x/8x came in lower. And that is with only 2 GPUs. The Extreme11 is built for four GPUs, which really stresses everything.

The Extreme11 will be awesome for those of us running 4-way GPU and extremely bandwidth-intensive high resolution Eyefinity/Surround. But this board is only needed for such a configuration; it would be a waste for anything less.

PatRaceTin
06-07-2012, 07:27 AM
i like the extensive options in the asus rog bios

Callsign_Vega
07-30-2012, 11:27 AM
Extreme 11 released: http://www.newegg.com/Product/Product.aspx?Item=N82E16813157327

RedBull78
07-30-2012, 11:46 AM
Extreme 11 released: http://secure.newegg.com/Shopping/ShoppingCart.aspx?Submit=view

Not there...

http://www.newegg.com/Product/Product.aspx?Item=N82E16813157327

600 dollars? Lmaoooooooooo wow, are they insane or something? Who with half a brain would wanna dish out all that money just for a mobo that looks no different than most other mobos... gold plated?

Callsign_Vega
07-30-2012, 11:52 AM
Um, mobos sell by their feature lists, not their colors. This board has features no other X79 board has.

koc
07-30-2012, 11:52 AM
Yes, for only $599.99 :down:

Nikolasz
07-30-2012, 12:30 PM
I think all boards need to be Deluxe WITH Extreme! Only for 99.99!

Callsign_Vega
07-30-2012, 01:59 PM
lol, just how cheap do you people expect a high-end chipset MB with a $300 RAID controller and two PLX chips on it to be?

screwtech02
07-30-2012, 04:34 PM
For that $$$ it NEEDS INTEL nics too..... Honestly, WTF?? :shakes:

Callsign_Vega
07-30-2012, 05:15 PM
Is Broadcom LAN pretty crappy? Does it have the CPU do most of the work?

lowfat
07-30-2012, 05:18 PM
For that $$$ it NEEDS INTEL nics too..... Honestly, WTF?? :shakes:

One of the first things I noticed too. Boards in this price range should have Intel NICs all around.

Callsign_Vega
07-30-2012, 05:27 PM
You guys think there would be a difference between the NICs in, say, playing an online game in which you're streaming 10 kb/s lol between the NIC and your router? I'd imagine any real difference would be seen streaming mega amounts of data at Gbit speeds over a LAN (something way, way faster than your internet connection).

Callsign_Vega
07-30-2012, 09:17 PM
I found a review of the LSI card that uses the same exact chip as on the Extreme 11:

Ah yes:

http://thessdreview.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/

The M4 actually worked better as a single drive on the LSI versus X79. And then of course 8x M4s do over 4100 MB/sec! :eek:

zanzabar
07-30-2012, 09:26 PM
I found a review of the LSI card that uses the same exact chip as on the Extreme 11:

Ah yes:

http://thessdreview.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/

The M4 actually worked better as a single drive on the LSI versus X79. And then of course 8x M4s do over 4100 MB/sec! :eek:

so they charge you an extra $250-300 to put a $275-300 card on the board. also, why are they saying it is pci-e 3? there are no pci-e 3 socket 2011 cpus, and there may never be any, since the next socket is currently scheduled a couple of months after IB-E is supposed to be out, given its delay.

lutjens
07-30-2012, 10:03 PM
Impressive looking and fully featured boards. I still have this nagging, lingering impression from many years ago of ASRock being a lower quality motherboard manufacturer, but with more examples like this and some good reviews, I might be tempted to try one.

zanzabar
07-30-2012, 10:29 PM
Impressive looking and fully featured boards. I still have this nagging, lingering impression from many years ago of ASRock being a lower quality motherboard manufacturer, but with more examples like this and some good reviews, I might be tempted to try one.

that was from back when asrock was under pegatron/asus, so they were not allowed to compete with asus. now that they are separated they still have some nice boards, but they do not have brand as a feature, so they are cheaper.

Callsign_Vega
07-31-2012, 12:44 AM
so they charge you an extra $250-300 to put a $275-300 card on the board. also, why are they saying it is pci-e 3? there are no pci-e 3 socket 2011 cpus, and there may never be any, since the next socket is currently scheduled a couple of months after IB-E is supposed to be out, given its delay.

Oh, not this silliness again. SB-E has been running fine at PCI-E 3.0 speed since it was released. All of these PCI-E 3.0 RAID controllers running on X79 that completely demolish the PCI-E 2.0 speed caps must be in everyone's imagination! Better explain how the LSI card got speeds that fast with 8x M4s, then. They even talk about this in the review:

"So, as it turns out, we can make the most of 8 Crucial M4 256GB SATA III SSDs. Read and write throughput scales beautifully. 2.15GB/s for sequential writes is exactly 8x the seq. write speed of one M4, while 4.18GB/s is exactly 8x the seq. read speed. Had this been a PCIe 2 device, read scaling would have stopped with five drives or so, being that overhead limits theoretical bandwidth by 20%. The M4s only write half as fast as they read, so they come in under the 3GB/s limit of PCIe 2, but scale well nonetheless."

zanzabar
07-31-2012, 01:05 AM
Oh, not this silliness again. SB-E has been running fine at PCI-E 3.0 speed since it was released. *snip*

this says 2

http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Processor-%2812M-Cache-up-to-3_80-GHz%29
http://ark.intel.com/products/63696/Intel-Core-i7-3960X-Processor-Extreme-Edition-%2815M-Cache-up-to-3_90-GHz%29
http://ark.intel.com/products/63698/Intel-Core-i7-3820-Processor-%2810M-Cache-up-to-3_80-GHz%29

even the intel reps do not say that they are pci-e 3, so i am not sure if it is unofficial or just that pci-e 2 is good enough. i also do not care if the raid card says it is pci-e 3; it is the MB that bugs me, since there are no parts for it and they make it seem like it is all running pci-e 3, but then intel says 2 and my brain is ready to explode.

Callsign_Vega
07-31-2012, 03:38 AM
Those minimum specifications don't mean that they don't support PCI-E 3.0. Look at the date of the certification for SB-E: it was done entirely before PCI-E 3.0 GPUs were released for testing. You cannot certify something without testing it, so even though SB-E was designed from the ground up for PCI-E 3.0, Intel fell back to a 2.0 certification to launch the chip.

http://www.anandtech.com/show/5264/sandy-bridge-e-x79-pcie-30-it-works

http://news.softpedia.com/news/Intel-Sandy-Bridge-E-CPUs-Are-Almost-PCI-Express-3-0-Compatible-228126.shtml

Not to mention Intel advertises its own X79 boards as PCI-E 3.0:

http://www.intel.com/content/www/us/en/motherboards/desktop-motherboards/desktop-board-dx79to.html

Christopher
07-31-2012, 02:03 PM
I can vouch that PCIe 3 works as intended on the 3930K and 3820 -- the nonsense about them not being fully Gen3 compliant is just that. I know what ARK says, but they work beautifully regardless; it was merely the timing of the release of Gen3 products and the SB-E family that led to this certification issue.

It's all good. Incidentally, if you want to find out for real, strap 8 6Gbps SSDs to a 9207 or 9217 and see what happens. If you get no more than 3.3 GB/s, it doesn't work; if you get 4.3 GB/s, it does. The 9207 is good for ~700K IOPS and re-establishes the x8 Gen3 speed limit at 4.3 GB/s.

Kain665
07-31-2012, 03:48 PM
Eh, 4.3GB/s is quite low for 8x 1GB/s lanes.

Christopher
07-31-2012, 04:18 PM
Eh, 4.3GB/s is quite low for 8x 1GB/s lanes.

The 9207 is basically 33% more BW and 33% more IOPS, if you can saturate it. It's unclear whether the 4.2 GB/s limit is from the interface or the SAS2308/drivers/something else. Gen3 encoding is more efficient, but that's a different matter. There is still a good bit of overhead, but until 12Gbps hits, it won't matter much anyway.

NapalmV5
08-05-2012, 03:19 AM
2x molex to power quad SLI? Not again, you've got to be kidding!

here you go ASRock, i fixed it for you

4x 16x PCIe 3, properly powered... an 8-pin PCIe per slot

sin0822
08-05-2012, 11:15 AM
i don't understand why they used an LSI card instead of the C606 chipset like GB did, which adds 8 SAS ports natively. It is a server chipset, so i am guessing in the end X79 + LSI is cheaper than C606.

I say good luck ASRock!!!!! You are going to need a very high-layered PCB for all those traces for the 24-phase VRM, the dual PLX, and all that SAS; however more layers isn't always better, as it will reduce the amount of copper per layer, thus you need a higher copper count. I would guess this board is 8+ layers with 1.5 or 2 oz copper, most likely whichever did the job just right. This layering is most likely why it was delayed and some things changed. It is just way too much crap.

Callsign_Vega, let us know how it runs! I know you have had your heart set on all those PCI-E lanes for your quad GPU setups. NapalmV5, no just no.

sholvaco
08-05-2012, 01:41 PM
^
The SAS ports on the C600 series are 3Gb/s only.

NapalmV5
08-05-2012, 02:15 PM
NapalmV5, no just no.

hey, i'd be happy with just 2x 8-pin, but 2x molex/sata or 1x 6-pin pcie is far from proper pcie powering @ 3x-4x gpus

Greg83
08-05-2012, 02:23 PM
isn't that why high end gpus have two 6-pins and two 8-pins?
cause a board like this is never gonna have GPUs pulling that 100W through the board.

zanzabar
08-05-2012, 03:03 PM
isn't that why high end gpus have two 6-pins and two 8-pins?
cause a board like this is never gonna have GPUs pulling that 100W through the board.

the cards are allowed to pull 75W from the board; that means normally you can pull 150W with 2 cards, but with overclocking that goes up, and with quad that means you are pulling 300W+ from the board.

Greg83
08-05-2012, 03:46 PM
but what card exists that will actually pull the full power from the pci-e slot itself? isn't it 75W per 6-pin, 150W per 8-pin, and 75W from the board? so a card is already at 225W from the 6-pin and 8-pin alone.
i am just going off the power draw itself, with his "high end" gpus only pulling around 200W on average; it would take multi-gpu cards and overclocking to draw more than that.

given that it has dual 8-pins, for any real-world scenario of basic overclocking and multi-gpu this will be plenty of power. really it's 8-way sli/crossfire that would be necessary, or some extreme overclocking, which often requires a hardware mod on the gpu anyway.
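
for reference, the spec numbers being juggled here, as a quick sketch (python) - the per-connector allowances are the standard pcie spec figures, the rest is just multiplication:

# PCIe spec power allowances, in watts.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

def card_max(six=0, eight=0):
    # Max spec draw for one card, slot allowance included.
    return SLOT + six * SIX_PIN + eight * EIGHT_PIN

print(card_max(six=1, eight=1))  # 6+8-pin card: 75+75+150 = 300 W
print(card_max(six=2, eight=2))  # 2x6 + 2x8 pin: 75+150+300 = 525 W

# Worst case pulled through the motherboard itself with quad GPUs,
# all four maxing the 75 W slot allowance:
print(4 * SLOT)  # 300 W through the PCB's slot traces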

zanzabar
08-05-2012, 03:48 PM
most cards use the board power 1st, and/or use the pci-e power for the gpu pwm and will run the ram and other stuff from the board power. if you look back at something like the 9800gtx, it would pull about 100W from the board and only like 50W from the pci-e connector.

Greg83
08-05-2012, 03:50 PM
makes sense, more power regulation going through the board for the more sensitive parts.
hopefully it doesn't ruin WRs. i can just imagine the cost of this board.

sin0822
08-05-2012, 07:09 PM
most cards are specced to pull 75W from the slots i think.

sholvaco, gotcha, i didn't know those weren't 6Gb/s. Still, without the extra 4x between the CPU and the PCH, how is it going to transmit all that data? C606 still has that 4x, right? But it was scrapped from X79? Or is that LSI controller hooked up to the PLX, which is hooked up to the CPU's PCI-E lanes?

sholvaco
08-06-2012, 01:33 AM
most cards are specced to pull 75W from the slots i think.

sholvaco, gotcha, i didn't know those weren't 6Gb/s. Still, without the extra 4x between the CPU and the PCH, how is it going to transmit all that data? C606 still has that 4x, right? But it was scrapped from X79? Or is that LSI controller hooked up to the PLX, which is hooked up to the CPU's PCI-E lanes?

Yeah, 75 W is the max you can pull from a slot according to the PCIe spec.

The LSI is hooked directly to the CPU via an x8 link. The two PLX manage the expansion slots, each via x16 to the CPU.

According to Intel's own diagram (http://www.intel.com/content/www/us/en/chipsets/server-chipsets/server-chipset-c600.html) the top two Patsburg SKUs still support the x4 uplink. X79 was supposed to feature the uplink and 4 SAS 6Gb ports until Intel pulled their support just a couple of months before launch, so I have no doubt the features are still on the package. In fact ECS launched a board (http://www.ecs.com.tw/ECSWebSite/Product/Product_Detail.aspx?DetailID=1309&CategoryID=1&DetailName=Feature&MenuID=151&LanID=0) with the 4 SAS ports and possibly the uplink enabled. I never followed up on how that went, but the whole product looked majorly beta...

sin0822
08-06-2012, 10:08 AM
that would mean you can't do 4-way with 16x per slot then, right? So how much is it per slot? 3 slots at x16 and 1 at x8?

sholvaco
08-06-2012, 10:39 AM
You can run 4 way at "x16" per slot.

Top PLX feeds the top three slots, the split is either x16/x0/x16 or x16/x8/x8.

The bottom PLX takes care of the four slots below at x0/x16/x0/x16, or x8 for all.

Each PLX has its own gen 3 x16 link to the CPU.

zanzabar
08-06-2012, 10:55 AM
You can run 4 way at "x16" per slot.

Top PLX feeds the top three slots, the split is either x16/x0/x16 or x16/x8/x8.

The bottom PLX takes care of the four slots below at x0/x16/x0/x16, or x8 for all.

Each PLX has its own gen 3 x16 link to the CPU.

how does each have its own 3x16x link? there is only 32x from the cpu, then 12x from the chipset (not 100% sure on the chipset), but the chipset lanes cover the onboard devices and one 4x slot on most boards.

edit- sorry missed the gen in 3 x16

sholvaco
08-06-2012, 11:26 AM
how does each have its own 3x16x link? there is only 32x from the cpu, then 12x from the chipset (not 100% sure on the chipset), but the chipset lanes cover the onboard devices and one 4x slot on most boards.

"gen 3" as in PCIe 3.0.

The CPU's integrated PCIe controller features forty PCIe 3.0 lanes. On this board sixteen of those lanes go to the first PLX, sixteen go to the second PLX and the remaining eight lanes go to the LSI.

The chipset features eight PCIe 2.0 lanes and is linked to the CPU via a dedicated link (DMI) whose speed is equivalent to PCIe 2.0 x4.
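
The budget works out exactly; a trivial check in Python, with the split as described above (the DMI figure is the usual ~2 GB/s approximation):

CPU_LANES = 40  # PCIe 3.0 lanes from the SB-E CPU

consumers = {
    "PLX #1 (top slot group)":    16,
    "PLX #2 (bottom slot group)": 16,
    "LSI SAS controller":          8,
}

assert sum(consumers.values()) == CPU_LANES  # all forty lanes spoken for
for device, lanes in consumers.items():
    print(f"{device}: x{lanes} Gen3")

# The PCH's eight PCIe 2.0 lanes are separate, fed over DMI at roughly
# PCIe 2.0 x4 speed: 4 lanes * 0.5 GB/s usable = ~2 GB/s each way.
print(f"DMI uplink: ~{4 * 0.5:.0f} GB/s per direction")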

Callsign_Vega
08-06-2012, 02:55 PM
http://i119.photobucket.com/albums/o139/callsign_vega/BlockDiagram.jpg

It is a 2 oz copper board also; seems like decent build quality so far. Haven't tried to overclock it yet. Been on a mega RAID tangent since I got it LOL.

NapalmV5
08-06-2012, 10:42 PM
^ perfect for 4x pcie2 gpus

so the only mobo that fully links 4x 16x pcie3 to the cpus is the dual s2011 asus, which provides zero additional pcie power to the slots :shakes:

i guess the only mobo left to upgrade to is the srx @ 3x 16x, powered by a 6-pin pcie

will just have to wait on unlocked xeons

zanzabar
08-07-2012, 12:17 AM
unless you get some ES parts you will never get an unlocked SB-E xeon. also, there is no advantage to getting plx or other pci-e controllers over having just 8x on every slot, and the added latency from the controllers can do more harm than good. the only time it has ever benefited was when NV could not work properly at 8x, or when you run a raid card on the same bus, since that data use is not symmetric like with gpus, where each needs identical data all of the time plus a bit of sync data.

NapalmV5
08-07-2012, 01:08 PM
but even the es parts are locked.. no ?

so since even the ib-e xeons may very well be locked.. well i guess that just leaves x79.. maybe an x79a when the 8-core gets released ? 2x 16x + 8x for the raid card.. oh well it's gonna have to do.. and it's gonna be a lot easier to liquid cool

current pcie3 gpus are gonna be fine on this asrock extreme 11, but not the upcoming ones

zanzabar
08-07-2012, 02:09 PM
but even the es parts are locked.. no ?

so since even the ib-e xeons may very well be locked.. well i guess that just leaves x79.. maybe an x79a when the 8-core gets released ? 2x 16x + 8x for the raid card.. oh well it's gonna have to do.. and it's gonna be a lot easier to liquid cool

current pcie3 gpus are gonna be fine on this asrock extreme 11, but not the upcoming ones

there is supposed to be a mythical batch of unlocked ES 2011 xeon parts that were sent out to verify boards before the models were finalized. they may not exist, and if they do i am sure no one will ever see one in the wild.

Callsign_Vega
08-07-2012, 05:24 PM
I disagree. Check sin's Sniper 3 motherboard review: two x16 slots with the PLX on Z77 performed faster in SLI than two native x8 Z77 slots, all configs in PCI-E 3.0. The extra bandwidth is more valuable than the tiny latency added by the PLX.

zanzabar
08-07-2012, 05:36 PM
I disagree. Check sin's Sniper 3 motherboard review: two x16 slots with the PLX on Z77 performed faster in SLI than two native x8 Z77 slots, all configs in PCI-E 3.0. The extra bandwidth is more valuable than the tiny latency added by the PLX.

there is no extra bandwidth, so that makes no sense. the rampage 3 showed a 3% drop with the nf200, and that is so far the only time the same board could be tested with and without the lane controllers. i expect less of a drop from plx or lucid when compared to the nf200, but not a gain. that test http://sinhardware.com/index.php/reviews/motherboards/122-gigabyte/lga1155-z77/g1sniper-3/g1sniper-3-review/127-g1sniper-3-review?showall=&start=10 is using a pci-e 2.0 graphics card, and it is also not comparing the same board with and without. the plx is supposed to be able to convert 16x pci-e to 32x pci-e 2.0 (i do not know if that board does that, but that would give a boost with older gpus), and it does not really show anything since the cpu and single gpu test is so far off.

CrazyNutz
08-07-2012, 07:14 PM
well actually there is GPU to GPU communication via PCIe, so there would be 2x the bandwidth available in this scenario with a bridge that has full lanes to each.

zanzabar
08-07-2012, 07:22 PM
well actually there is GPU to GPU communication via PCIe, so there would be 2x the bandwidth available in this scenario with a bridge that has full lanes to each.

i did not think that plx supported redirecting data without it going to the cpu 1st; it was only lucid, and only in lucid mode, that could actually go gpu to switch to gpu. it also would not matter when you have 2 controllers, as it would have to bounce to the cpu no matter what, like on this board. then, you are never streaming textures other than during loading, so the gpu to gpu bandwidth, even if it could bounce, should be negligible. if it can run pci-e 3 as 2 for legacy devices then i could see it helping, but other than that i do not get it.

sin0822
08-07-2012, 09:25 PM
zanzabar, the PLX8747 takes 16x PCI-E 3.0 in and outputs 32x PCI-E 3.0, not 2.0. Sure, there is only that 16x link from the CPU to the PLX, but that is why there is a delay. You can think of it as filling up a jug of water twice to deliver double the amount; the extra time to fill is the delay, except not quite twice as long. So I am not running the PLX to its full potential with a PCI-E 2.0 GPU; PCI-E 3.0 would show larger gains, I would think. What is odd is that single card performance was worse than with the NF200; there might be some tweak that makes it better at multi and not good at all for single, I am not sure what they did differently. However, the PLX8747 is a brand new IC, not half a decade old like the NF200, and PLX knows how to make bridge ICs. So to me it isn't a surprise if it does well, but last I heard there were major shortages. The whole PCI-E 3.0 spec with PLX can also cause issues with BCLK OCing above ~111MHz BCLK, at least that is what I found.

The peer to peer communication is increased.

If anything, any GPU that uses more bandwidth, such as most if not all PCI-E 3.0 cards compared to mine, would see bigger benefits.

From my testing, and I did a crapload of testing, it did seem that 16x/16x with PLX was better than native 8x/8x. I compared different boards; that is why I used so many, to single out inconsistencies. Even with the NF200 I saw the same thing on Z68, but to a much lesser extent.

The gains are extremely limited with only 2 cards; however, someone like callsignvega will see the benefits, as he is running the most extreme setup I have seen. Your screen resolution also plays into it.


sholvaco - I didn't add correctly. Then yeah, 4-way at 16x is possible because there are 40 PCI-E lanes. I somehow confused myself and thought it only had 36 lanes lol. The X79 platform should have 40 PCI-E 3.0 lanes from the CPU.
I made this to show native X79, the E11, and the UP5, as they are kind of different. The UP5 is basically how X79 should have been if Intel hadn't disabled SAS; that is why there are 40 lanes left over.
[attachment 129178: X79 / Extreme11 / UP5 lane diagrams]

Tell us how you like the E11, Vega. What are you running on the RAID?
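
To put the jug analogy into rough numbers, here is a minimal model in Python; the 150 ns per-hop switch latency is an assumed, illustrative figure, not a measured PLX spec:

PER_LANE = 8 * 128 / 130 / 8   # usable GB/s per Gen3 lane (~0.985)
X8, X16 = 8 * PER_LANE, 16 * PER_LANE

def xfer_ms(mb, gbs, hops=0, hop_us=0.15):
    # Time to move a buffer over a link, plus per-hop switch latency.
    return mb / 1024 / gbs * 1000 + hops * hop_us / 1000

buf = 64  # MB of data the CPU streams to each of two cards

native = xfer_ms(buf, X8)                 # x8/x8: both load in parallel
switched = 2 * xfer_ms(buf, X16, hops=1)  # x16 uplink filled twice, serially

print(f"native x8/x8: {native:.2f} ms")   # ~7.9 ms
print(f"via the PLX:  {switched:.2f} ms") # ~7.9 ms as well
# For plain CPU->GPU streaming the two come out nearly identical; the
# switch only pays off when peer-to-peer or broadcast traffic stays
# below it and never touches the x16 uplink.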

zanzabar
08-07-2012, 09:44 PM
plx has a part that can convert 3 to 2 for legacy (so it gets more usable bandwidth). i get the whole diagram but i do not see how it can help.

Callsign_Vega
08-07-2012, 10:32 PM
zanzabar, the PLX8747 takes 16x PCI-E 3.0 in and outputs 32x PCI-E 3.0, not 2.0. *snip*

The board is working great for me also, although I do have a few complaints. The board is super tight/cluttered with its items; they really should have made this a larger board like the Big Bang XPII. The AE11 has the power and reset buttons and the debug LED directly under GPU #4 - a really silly location. Also, this board has the tightest RAM slot to PCI-E slot #1 spacing I've ever seen. I had to remove the back-plate from 7970 Lightning #1 and cut the edge of my RAM heat-sinks in order for them to fit. That is just ridiculously close spacing.

The AE11 is handling my 3960X very sweetly! Got it up to ~5.1 GHz. It's amazing how much cooler SB-E runs versus IB. My 3960X stays under 65 C in Prime95 with the following settings:
http://i119.photobucket.com/albums/o139/callsign_vega/AE113960XOC.jpg
This is a bit nicer overclock than I could get this chip to do on both the RIVE and the Big Bang XP-II. Maybe I'm an ASRock convert. :D

Sin, I am surprised you were able to get the bus speed up to 110 MHz with a PLX motherboard. With the Sniper 3 and now the Extreme 11, both cap out at around ~3.7 to 4 MHz over stock for me. I've found the PLX to be quite sensitive to overclocking. Adding voltage to VTT doesn't seem to help either.

As for your last questions, my RAID setup on the LSI controller:

http://i119.photobucket.com/albums/o139/callsign_vega/8xLSIJBODWin7ITASSSD.png


plx has a part that can convert 3 to 2 for legacy (so it gets more usable bandwidth). i get the whole diagram but i do not see how it can help.

The dual-PLX AE11 setup allows peer-to-peer communication between cards #1 and #2, and between cards #3 and #4, for CrossFire/SLI without transmitting anything over the dual 16x CPU lanes. Only when cards #3 and #4 have to talk to cards #1 and #2 does the data get sent through the CPU. Of course you still have the regular texture fill and assorted traffic from the CPU to each card.
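
That routing is simple enough to write down as a sketch (Python), with the slot-to-switch pairing taken from the block diagram above:

SWITCH = {1: "PLX1", 2: "PLX1", 3: "PLX2", 4: "PLX2"}

def p2p_path(a, b):
    # Same switch: traffic turns around inside the PLX. Different
    # switches: it has to cross the CPU's root ports.
    if SWITCH[a] == SWITCH[b]:
        return f"GPU{a} <-> {SWITCH[a]} <-> GPU{b} (CPU lanes untouched)"
    return f"GPU{a} <-> {SWITCH[a]} <-> CPU <-> {SWITCH[b]} <-> GPU{b}"

for pair in [(1, 2), (3, 4), (2, 3)]:
    print(p2p_path(*pair))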

sin0822
08-08-2012, 08:09 AM
plx has a part that can convert 3 to 2 for legacy (so it gets more usable bandwidth). i get the whole diagram but i do not see how it can help.

INPUT 16X, OUTPUT 32X, both PCI-E 3.0. Even if it downclocks to PCI-E 2.0 on the bandwidth, that doesn't help me; as you can see, in single card it still hurts.

callsignvega, I meant the quad GPUs: is it better than native X79?

Your RAID setup is sick lol.

You need a BIOS to retrain, or you need to use a PCI-E 2.0 GPU lol.

sholvaco
08-08-2012, 09:33 AM
zanzabar, the PLX8747 takes 16x PCI-E 3.0 in and outputs 32x PCI-E 3.0, not 2.0. *snip*


I'm sorry man, but PCIe switch chips cannot bend the rules of physics; they merely manage the available bandwidth in a way better suited for certain applications, like multi-GPU rendering, where two or more cards are processing the same set of data.

Their main benefit is allowing more connections than the host controller provides, but they also possess a couple of useful features that may benefit multi-GPU rendering and technically provide x16 to multiple devices from a single x16 host link:

- The bandwidth to the host controller is assigned dynamically: the premise here is that your devices are not going to require concurrent access to the interconnect 100% of the time. An obvious example would be alternate frame rendering, where each graphics card only renders every other frame; ideally each only requires half of the scene updates from the CPU, so both could receive data at full speed (alternating).

- Broadcasting: the cards are rendering parts of the same scene, working with identical graphic assets; why should they require individual CPU updates containing identical data? An advanced switch chip can multiply and send data over multiple ports to any applicable device, so instead of cards receiving individual updates over a halved interconnect, both can be updated from a single source at full speed (with a slight latency penalty) - see the toy model below.

While I can't vouch for the real world effectiveness (nor frequency of use) of any of the above, I know at least broadcasting has been demonstrated by Lucid Logix Hydra, resulting in a significant reduction of level load times.
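
A toy timing model of the broadcast case (Python); the switch penalty is an assumed figure, and real traffic is obviously messier than this:

X16 = 16 * (8 * 128 / 130 / 8)  # usable GB/s on a Gen3 x16 uplink

def unicast_ms(mb):
    # Same assets pushed down the shared uplink once per card.
    return 2 * mb / 1024 / X16 * 1000

def broadcast_ms(mb, penalty_ms=0.05):
    # Pushed once; the switch duplicates it to both downstream ports.
    return mb / 1024 / X16 * 1000 + penalty_ms

level = 512  # MB of identical assets for both cards, illustrative
print(f"unicast:   {unicast_ms(level):.1f} ms")   # ~63.5 ms
print(f"broadcast: {broadcast_ms(level):.1f} ms") # ~31.8 ms, halved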


INPUT 16X, OUTPUT 32X, both PCI-E 3.0. Even if it downclocks to PCI-E 2.0 on the bandwidth, that doesn't help me; as you can see, in single card it still hurts.

PLX may output PCIe 3.0 x16 to each device but cannot saturate both at the same time (unless broadcasting or peer-to-peer). If you were truly only using 2.0 cards, you were able to obtain better results than native x8/x8 because Ivy Bridge provided enough bandwidth (but not enough connections) for 32 PCIe 2.0 lanes, while the PLX provided the necessary number of lanes.

zanzabar: Any PCIe 3.0 controller can downgrade to 2.0 (and 1.0) protocols and transfer speeds, since backwards compatibility is a de facto requirement.

sin0822
08-08-2012, 04:47 PM
Are you the same guy who thinks PCI-E quick switches cause delays?

Also, I didn't say they create bandwidth, just described how they would better manage it. They act as routers, and the PLX8747 is much more complex than an ASM1480 or PI3PCIE3 quick switch.

paulbagz
08-08-2012, 06:19 PM
Vega;

Dat RAID.

-PB

sholvaco
08-09-2012, 08:03 AM
Are you the same guy who thinks PCI-E quick switches cause delays?

What does that have to do with anything I said above?

How does a lane switch work? Is it merely a signal router/repeater, or does it operate under PCIe protocols? Every time you introduce additional data processing into a signal path you get a certain amount of delay. They may not be as complex as a fully featured switch chip, but intercepting, processing and routing PCIe packets requires a certain number of cycles. The question is whether this delay is significant enough to impact real world performance.

You've got a Gigabyte hookup and tons of hardware to play with. You could take an X79-UD3 and run some single card tests in x16 slots 2 and 4 (PCIEX8_1, PCIEX8_2), since the latter is switched and the former is not. Would you be so kind as to prove me wrong, pretty please?

sin0822
08-09-2012, 08:12 AM
okay yea i will try it.

M.Beier
08-12-2012, 04:19 PM
however more layers isn't always better, as it will reduce the amount of copper per layer, thus you need a higher copper count. I would guess this board is 8+ layers with 1.5 or 2 oz copper, most likely whichever did the job just right.
lolwut? :)

Are you serious?

Callsign_Vega
08-12-2012, 10:15 PM
http://www.legionhardware.com/articles_pages/asrock_x79_extreme11,1.html

NapalmV5
08-13-2012, 03:42 PM
.....

nice! begs for an 8-core/pcie3 cpu @ 5ghz

oh well, will have to wait on haswell-e

xdan
08-15-2012, 06:26 AM
New board in town; i think it's the best ASRock Z77 board to date:
Z77 OC Formula, BORN TO BE FAST
http://www.asrock.com/mb/photo/Z77%20OC%20Formula(m).jpg
Many special features:

OC Formula Power Kit
- Digi Power
- Dual-Stack MOSFET (DSM)
- Multiple Filter Cap (MFC) (filters different noise with 3 different capacitors: DIP solid cap, POSCAP and MLCC)
- Premium Alloy Choke (reduces core loss by 70% compared to an iron powder choke)
- 12 + 4 Power Phase Design
OC Formula Connector Kit
- Hi-Density Power Connector
- 15μ Gold Fingers (CPU and memory sockets)
OC Formula Cooling Kit
- Twin-Power Cooling (combines active air cooling and water cooling)
- 8 Layer PCB
- 4 x 2oz copper
- GELID GC-Extreme Thermal Compound
Supports NickShih's OC Profile, Formula Drive, FAN-Tastic Tuning, Multi Thermal Sensor
Supports Fine-Tuning V-Controller, Interactive UEFI, Timing Configurator
Supports Rapid OC, PCIe ON/OFF, V-Probe
Added by me - Dual BIOS (a first for ASRock, i think) and an LN2 BIOS mode.
http://www.asrock.com/mb/Intel/Z77%20OC%20Formula/

In stock on Newegg at $239, the same as the Z77 Professional - a very good price, and there is a 10% discount promo now:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157328
This board has already made some 7GHz scores; just check hwbot.

NapalmV5
08-16-2012, 10:19 PM
NapalmV5, no just no.

;)

http://www.xtremesystems.org/forums/showthread.php?282529-Proper-PCIE-Alimentation-Motherboards-FPS-Boosted