Posted mine in the intel section.
Check out the Intel section. Their results are pretty much the same as ours.
Well, I tried to run this proggy with one card and it doesn't go, lol.
I can only get it to work with CF.
Still don't know if my results are OK.
Hi guys
Did some more testing :D
PHENOM II X4 955BE
DFI LP DK 790FXB-M2RSH
POWERCOLOR HD4870X2 @800-1000
HT Link Speed scale NB@2600 CPU@4000 Mem@400&533
http://img33.imageshack.us/img33/214...v01htlinks.jpg
By ageom at 2009-05-31
NB Scale CPU@4000 HT Link@2000 Mem@400&533
http://img33.imageshack.us/img33/830...v01nbscale.jpg
By ageom at 2009-05-31
CPU@4000 All Tests
http://img33.imageshack.us/img33/457...v01cpu4000.jpg
By ageom at 2009-05-31
NB, HT Link and Memory have a great impact on GPU->CPU, not so much on CPU->GPU.
But... to be continued. :rofl:
Well, mine went all the way to 3072 with 1.2v; it's just that when PCIe gets increased past 110 or so it starts to ask for more voltage for some reason. The higher voltage supposedly kills the L3's latency according to Everest, but I don't really pay attention to any latencies other than memory. I also find the same problem with memory speeds when the DDR voltage is set incorrectly; too much is just as bad as too little.
My NB can't go above 2800 on the multi, and for 2800 I need all the voltage I can get. I've got some surprising test results, still working on them; they have to do with the HT Ref. :)
THANK YOU aGeoM!!!!
So I take it I'm doing OK with mine then?
http://img24.imageshack.us/img24/4683/pciev.jpg
No problems with a single card here - I seem to be getting good results each way! PCIe speed = 100 MHz.
Using a 770 chipset also, rather than a 790 :)
BTW: running it from the command prompt makes screenshots easier.
^Is that with a SINGLE 3800-series GPU or a 3870X2? Also, can you test with Cat 9.5?
PS: Thank you. It does run a lot better from the command prompt.
I'll run this when I get home. But for those of you getting low GPU-to-CPU speed, have you tried turning the optimizations in the BIOS on/off to change the way HT sends information? They might do something to bandwidth. These are the options:
VC1 Traffic
Isochronous Flow Control Mode
UnitID Clumping 2/3 & B/C
2x LCLK
HT Link Tristate (disabling it should increase performance)
VC1:
This BIOS feature allows you to manually map a specific traffic class to the second (VC1) virtual channel of the PCI Express graphics port. This is the higher-priority virtual channel, so mapping a specific traffic class to it will increase bandwidth allocation priority for that traffic class. However, this is not a requirement.
Isochronous Flow-Control Mode: This has to do with how information is passed between the CPU, the GPU and the RAM through the northbridge. It has been a part of the BIOS for HT since AGP 8X, but the option to enable or disable it is a fairly recent addition. When this option is enabled, each piece of information is assigned a number in the order it was received, and it is then processed in that order along the route. In other words, no information is lost, but processing in this strict order has drawbacks. If you choose to enable this feature, you will also need to enable UnitID Clumping, and then, under PCI-E Configuration and the NB-SB section of the BIOS, VC1 needs to be enabled as well.
UnitID Clumping: Simply put, it accounts for the fact that not all devices are equally quick at processing information; it allows each device to support a longer waiting line. VC1 addresses a major drawback of Isochronous Flow-Control Mode: the flow-control mode does not allow any information to break line, so everything must wait its turn. Therefore, if one piece of info is intended for the CPU and in front of it is info for the GPU, the GPU info has to be processed before the CPU info; and if there is a backlog of info waiting to be processed on the GPU, the CPU info is held up all that much longer. VC1 comes to the rescue by letting the CPU info break line, bypassing the GPU info jam to join the CPU queue (there's a rough illustration of this below).
2x LCLK: This setting only affects HT 3.0, so Phenoms may benefit from it while with Athlons it simply does not apply. LCLK stands for Latency Clock. The 2x means that instead of one full-bandwidth HT link you are requesting two half-bandwidth HT links. For performance, it is sometimes better to have a two-lane highway, with traffic flowing in both directions at the same time at 50 mph, than a single lane along the same strip of asphalt running at 100 mph but with traffic lights controlling the direction of flow.
HT Link Tristate: Tristating is a power-saving feature, in addition to ASPM. Whatever sections you enable Tristate in, you reduce the energy needed to run that area, but the downside is that you also reduce that area's performance.
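To make the VC1 / flow-control idea above a bit more concrete, here is a rough software analogy in Python. It is only a toy sketch: the packet names, the one-packet-per-tick scheduler and the queue sizes are all made up for illustration, and real HT/PCIe flow control does not work like this internally.

# Toy sketch: one strict in-order channel versus a second, higher-priority channel.
from collections import deque

def drain(queues):
    # Serve one packet per tick, always taking from the first non-empty
    # (highest-priority) queue in the list.
    tick, finished = 0, {}
    while any(queues):
        for q in queues:
            if q:
                tick += 1
                finished[q.popleft()] = tick
                break
    return finished

gpu_burst = ["gpu%d" % i for i in range(8)]

# One in-order channel: the CPU-bound packet waits behind the whole GPU burst.
single = deque(gpu_burst + ["cpu0"])
print("single channel, cpu0 served at tick", drain([single])["cpu0"])   # tick 9

# CPU traffic mapped to a higher-priority channel (the VC1 idea): it bypasses
# the GPU backlog and is served first.
vc1, vc0 = deque(["cpu0"]), deque(gpu_burst)
print("with VC1, cpu0 served at tick", drain([vc1, vc0])["cpu0"])       # tick 1

The point is just the ordering: with one strict in-order queue the latency-sensitive packet inherits the whole backlog in front of it, while a second prioritised channel lets it jump ahead at the cost of slightly delaying the bulk traffic.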
Your CF performance is right on par with everybody else's. It looks like your system is running the program just fine. I'm curious as to why you can't run it when you disable Crossfire in CCC. Maybe one of the other CF users can help you out on that one.
PS: To run this program from the CMD prompt: 1) type CMD into the Run box, 2) navigate to the "PCIe Speed Test v0.1" folder, 3) type "PCIeSpeedTest" and then hit the ENTER key. (NOTE: These instructions assume you know how to use commands like "cd", "cd ..", "dir", etc. PM me again if you have any issues.)
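For example, assuming you unzipped it to a folder on the C: drive (adjust the path to wherever your copy actually lives):

cd "C:\PCIe Speed Test v0.1"
PCIeSpeedTest

The results then stay in the console window, so you can scroll back through them or copy the text out instead of trying to screenshot each run.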
That run was with a single 3870XT; my other card died long ago :( I'll try Cat 9.5 soon :)
Here is my run. Kinda crazy anomaly on the 134217728 line...
http://img197.imageshack.us/img197/7...ndwidth.th.jpg
EDIT: Ran it again, same thing:
http://img197.imageshack.us/img197/2...dwidth2.th.jpg
And a third time, this time I let it run a bit longer:
http://img142.imageshack.us/img142/8...dwidth5.th.jpg
^:shrug:
I'm at a loss for words now. The PCIe bus must just do whatever it feels like when it feels like it.
I vote the motherboard and AIB industries move to the HTX(3) spec.
aGeoM: Thanks for your work on testing, and the graphs... Very Helpful!! :up:
I put together a new system to test over the weekend and I thought I would run this test to see what I came up with. Here are the systems with results:
GBT MA790XT-UDP4 - 720be (unlocked to X4) - DDR3 board - Single 3870XT
http://i30.photobucket.com/albums/c3...T_PCIetest.jpg
Asus M3A79-T Dlx - 940be - DDR2 - Single 4850 (posted earlier)
http://i30.photobucket.com/albums/c3...PCI-eTest1.jpg
Maybe I'm crazy, but it seems like there are either serious differences in the way mobo manufacturers are implementing the PCIe link, or this program still has some quirks...
Yes, the GBT board is running DDR3, but all the clocks/VC and memory timings are substantially better on the Asus setup; something just doesn't seem to add up here... :confused:
EniGmA1987: FWIW, if you notice, I got the same weird results @ 134 MB on the 79-T as you did.
Swap your 3870 and 4850 and see if the numbers change between boards.
And yes, I do see that you have that strange anomaly on the 134 line as well on the M3A79-T. We need to get another person or two in here who have the same board and can verify whether all the M3A79s do the same thing. If so, then what is different on our boards? If not, what options are different, and which BIOS is used between the different results?
Seems like 3870 cards have much better GPU->CPU performance than 4800-series cards. :shrug: I reckon it's a driver issue.
Hi all,
Was wondering if one of you could help me get my 3870X2 to run at PCI-E 2.0; I'm not sure how to do it. My system specs are at the bottom of the page.
Any help would be appreciated.
OK, I retested mine in CMD and got my stats now; here's what I got:
http://www.myalbumbank.com/albums/us...eedTestCmd.GIF
+1
+1, swap the cards. I will run mine again and take a look at the 134 line.
Please post a GPU-Z screenshot so we can check whether you're running at PCIe 2.0 right now. If you aren't, we'll help you navigate your BIOS to where you need to make the change.
Very interesting. Thank you. We've noticed THREE things up to this point:
1) Single 4800-series GPUs don't have full x16 GPU->CPU bandwidth
2) Multi-GPU 4800-series setups do have full x16 GPU->CPU bandwidth
3) 3800-series single and multi-GPU setups have full x16 bandwidth
These hold true across AMD and Intel systems. We should all bring this to the attention of ATI in a support email. If enough of us do it, maybe they'll notice.
I'm also curious to see the results of older drivers. Maybe this performance issue wasn't always there.
Hi Mechromancer,
How do I upload a picture to the site?
Hi Mechromancer,
Here is the image of GPU-Z