http://folding.typepad.com/
Take a look for your own damn selves!!!!! Finally!!!!
I'm too shocked to type intelligent comments right now so here's all I have: F'ing YAY!
Please fold with your Crossfire and 3870X2 setups. I'm very interested to know if they work right out of the box with this new client. Unfortunately there is no fancy GUI with this one, but FahMon is projecting 1330 PPD, which ain't bad for a GPU. That's at 3D speeds, by the way. You will most likely have to force 3D clock speeds with ATITool 0.27 Beta 4. ENJOY!
Good one Mechromancer, I looked just a couple of hours ago and there wasn't anything.
Thanx for the news... now fold on, those of you with ATIs :D
It would be interesting for those considering folding with GPUs to know how many CPU cycles the GPU client ties up.
Actually folding with the SMP client and GPU client on a dual core IS NOT A GOOD IDEA! My PPD on both dropped dramatically. The GPU client needs an entire core to feed it information. You quad core owners shouldn't have any problems though. Luckily for me, the GPU client puts out more PPD than my Opteron 185 with the SMP client :D. I'm now a full time GPU folder.
YAHOOOOOOOO! I've been waiting for this for a while. Time to unleash hell!
Hope the PCIe X1900 256MB pausing issue is fixed.
Here's the GPU2 FAQ: http://folding.stanford.edu/English/FAQ-ATI2
Unfortunately you can only run 1 GPU2 client on the 3870X2 at the moment. Crossfire also has to be DISABLED to run two clients on a multi-GPU system. At the moment, two HD 3870s are looking like an economical option. Until the 3870X2 is fully supported in a future release, having 2+ separate cards is the way to go.
I think I'll overclock my PCI-Express bus to 125 MHz to see if I get more PPD from the extra bandwidth between the CPU and GPU. I already have a factory-overclocked Diamond HD 2900XT @ 800/900. The onboard memory bandwidth should be more than enough, so I'll just bump the core up to 850 or so later today just to see what happens.
Please let us know how your HD 3870's perform as well. Also, faster CPUs should yield more GPU PPD. If you have an E8500 overclocked to hell then PLEASE let us know your PPD!
Now they only need to use the 55% of the market share instead of the miserable 20%.
Quote:
What about multi-gpu support and the -gpu flag?
Multi-gpu is supported through the -gpu flag provided on the command line. Similar to running multiple SysTray CPU clients, copy your \Application Data\Folding@home-gpu to \Application Data\Folding@home-gpu2, and copy the shortcut and edit it to point to that new working directory. If you run the client from the command line, the current directory will be used for data and thus the .dll files must be there as well (see DLL issues below). The display must be active on the board you plan to use, and -gpu 0 will select the first board, -gpu 1 will select the second board, -gpu 2 the third board, and so on. You will need to disable Crossfire for multiple boards to be detected. Currently, only one client is supported on a 3870X2.
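If you'd rather script this than hand-edit shortcuts, here's a minimal sketch of starting one client per board from separate working directories with the -gpu flag. It's Python purely for illustration; the executable name and install path below are assumptions, and only the -gpu numbering and the Crossfire caveat come from the FAQ quoted above.
Code:
import os
import subprocess

# Assumed location/name of the GPU2 beta client -- adjust to your install.
CLIENT_EXE = r"C:\Program Files\Folding@home-gpu2\Folding@home-gpu2.exe"

def launch_gpu_clients(num_gpus):
    """Start one GPU2 client per board, each with its own data directory."""
    appdata = os.environ["APPDATA"]
    procs = []
    for gpu_index in range(num_gpus):
        workdir = os.path.join(appdata, "Folding@home-gpu2-%d" % gpu_index)
        os.makedirs(workdir, exist_ok=True)
        # -gpu 0 selects the first board, -gpu 1 the second, and so on;
        # Crossfire must be disabled for multiple boards to be detected.
        procs.append(subprocess.Popen([CLIENT_EXE, "-gpu", str(gpu_index)],
                                      cwd=workdir))
    return procs

if __name__ == "__main__":
    launch_gpu_clients(2)  # e.g. two HD 3870s with Crossfire off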
I see a quad core running SMP + double GPU2. SMP on idle and gpus on low ^^
Woah! Now that's what I'm talking about. In ~10 days I'll be finished benching my HD3850's and I can then turn them over to science as they ease into retirement. While I own them I'll get them running F@H, it'll be 2 weeks or so till I can do it though.
Apparently the GPU client can work on ANY R6xx GPU. If an HD 2400 series card can do it then a 780G integrated GPU should be able to add a few hundred extra PPD. I wish a mobo manufacturer would come out with a decent 780G motherboard that doesn't blow MOSFETs with 125 W TDP CPUs. If somebody does soon, I'll build a 9850BE+780G+Crossfire 3870 system for maximum folding capability.
BTW, I think the GPU2 Client allows for more PPD than the PS3. :)
Thanks for the heads up!
:woot:
Even though I still have an X1950XT :p:
Lol yeah I downloaded and tried to run it on my (not XS worthy) 1600pro :P
toTOW on the Folding@Home Forums posted some VERY important information. The GPU client is heavily dependent on CPU speed. PERFECT FOR OVERCLOCKERS! Unfortunately for me, my weak Opteron 185 is once again holding me back from making high PPD :( .
toTOW, we really appreciate the info. Here's the link to the original post: http://foldingforum.org/viewtopic.ph...t=2020&start=0
Quote:
To answer many of the potential questions, I will give you the result of benchmarking I did ...
toTOW wrote: Numbers from HD3870.
Q6600 @ 2.4 GHz (266 * 9), GPU @ 780 MHz : 1310 PPD
Q6600 @ 3.4 GHz (485 * 7), GPU @ 780 MHz : 1860 PPD
Q6600 @ 3.4 GHz (485 * 7), GPU @ 850 MHz : 1860 PPD
More numbers with the 2900XT ...
Dual Opteron 2212 @ 2.7 GHz : 1350 PPD (with GPU @ 739 and 847 MHz)
Q6600 @ 3.4 GHz (485 * 7), GPU @ 739 and 847 MHz : 1860 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 739 MHz : 1860 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 847 MHz : 2100 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 860 MHz : 2100 PPD
Yes, most people are going to be CPU limited with the GPU2 client ...
At any rate I'm going to be folding on my X1950XT again probably since I have to sell my phenom. I wonder if this version works any better than the previous one did with these cards? I'm guessing not...
I'm downloading now. I'll be firing this onto the X2900XT. Now I could play games at 900/900. Will be interesting to see what clocks I can sustain this at! I'll be back with results. WooHoo! :woot:
I'll give it a try on my 3850 later tonight. Time to start OCing the card to see when core clocks stop helping with a 4ghz q6600.
First Screenshot in....
I think this is the X2900XT at stock 3D clocks/voltages... or close.
It isn't heating much at all!!!
BTW, that's also an E6850 @ 3.6 GHz.
Latest log file...
[18:24:33] Project: 2799 (Run 1, Clone 65, Gen 0)
[18:24:33]
[18:24:33] Assembly optimizations on if available.
[18:24:33] Entering M.D.
[18:24:47] Working on 582 p2799_N68H_AM03
[18:24:48] Starting GUI Server
[18:26:25] Completed 1%
[18:27:58] Completed 2%
[18:29:27] Completed 3%
[18:30:58] Completed 4%
Project is worth 97 points
[18:26:25] Completed 1%
[18:27:58] Completed 2% 1m33s
[18:29:27] Completed 3% 1m29s
[18:30:58] Completed 4% 1m31s
Avg frame time 1m31s = 91s (x 100 frames = 9100s)
86400 / 9100 = 9.4945 WU/day, and 9.4945 * 97 = 920.97 PPD
:down: I guess?
Edit: need more wu's though this one might stink.
Marvin!!!! The thing that worries me is ... I then set the card at 900/900. Same time needed roughly... 91s? :confused:
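Side note for anyone who wants to pull frame times out of the log instead of eyeballing them: those "[hh:mm:ss] Completed n%" lines are all you need. A minimal sketch, assuming the log has been saved or copied as FAHlog.txt in the current directory (the real path varies, as noted further down the thread):
Code:
import re
from datetime import datetime, timedelta

FRAME_RE = re.compile(r"\[(\d\d:\d\d:\d\d)\] Completed (\d+)%")

def frame_times(log_path="FAHlog.txt"):
    """Return the elapsed seconds between consecutive 'Completed n%' lines."""
    stamps = []
    with open(log_path) as log:
        for line in log:
            match = FRAME_RE.search(line)
            if match:
                stamps.append(datetime.strptime(match.group(1), "%H:%M:%S"))
    deltas = []
    for earlier, later in zip(stamps, stamps[1:]):
        if later < earlier:                 # crude handling of a midnight rollover
            later += timedelta(days=1)
        deltas.append((later - earlier).total_seconds())
    return deltas

if __name__ == "__main__":
    times = frame_times()
    if times:
        print("avg frame time: %.0f s over %d frames" % (sum(times) / len(times), len(times)))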
3850 @ 700/855
Q6600 @ 4ghz
Project 2799
[18:44:08] Starting GUI Server
[18:44:55] Completed 1%
[18:45:41] Completed 2%
[18:46:26] Completed 3%
[18:47:12] Completed 4%
[18:47:57] Completed 5%
[18:48:42] Completed 6%
[18:49:27] Completed 7%
[18:50:13] Completed 8%
[18:50:58] Completed 9%
[18:51:43] Completed 10%
[18:52:29] Completed 11%
[18:53:14] Completed 12%
[18:54:01] Completed 13%
Little over 45 sec per frame. :up:
Remember it just got released. Sometimes they put out really tiny units to begin with to be sure things are going smoothly before they put out the real deal.
Clock the cpu higher, this client needs it even more than the last one did :D
It's also mentioned in the faq :up:
45s * 100 = 4500
86400 / 4500 = 19.2 WU/day, and 19.2 * 97 = 1862 PPD
Still not so good but a lot better :)
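For reference, the PPD arithmetic in the last couple of posts boils down to one formula: seconds in a day divided by (frame time x 100 frames), times the WU credit. A quick helper to sanity-check FahMon's projections (the 97-point credit is simply the example WU from above):
Code:
SECONDS_PER_DAY = 86400

def ppd(frame_seconds, credit, frames_per_wu=100):
    """Projected points per day from an average frame time and a WU's credit."""
    wu_per_day = SECONDS_PER_DAY / (frame_seconds * frames_per_wu)
    return wu_per_day * credit

# The two cases worked out above:
print(round(ppd(91, 97)))   # ~921 PPD for the 91 s/frame run
print(round(ppd(45, 97)))   # ~1862 PPD for the 45 s/frame run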
So.. 2900xt's suck?? A 400 MHz cpu difference can't account for that. The SS from riptide is a bit small, I can't read it clearly, but dual core with no other cpu processes running I assume (I do see vmware's, I guess for dimes?)
Edit: Dragonorta nailed it before me
Ah.... so... I'll postulate for now that this is nearly as much a new CPU client as it is a GFX client?? Cos the GFX is not under any pressure at all!
Also I should point out I'm on Cat 8.1
Win XP 32bit.
Also I noticed... that the client in task manager is NOT taking over 50%, i.e. one core, when nothing else is grabbing it?? When I load up something else, the CPU is NOT 100% used. Something like 2-5% is not being used. This reminds me of the last client when combined with another project.
just adding my data to the thread.
2900pro 802/800 GPU client
Q6600 3420mhz (2 smp clients on affinity changer Vista 64)
good news and bad news...
good news is it only takes 4-5% cpu away from my dual smps.
good news is CC reports 50-51c full load gpu temp folding.
bad news is a measly 343ppd.
bad news is doesn't look like this thing is even pushing the 2900pro, temps of 50-51c sound like idle temps.
anyway, I'm gonna run it for a day and see what it does to the ppd on the X4 labeled pc in fahmon below (the pc that has the 2900pro)... the smp ppd is hurting right now because it's my main pc and I've been using it this afternoon, which eats cpu away from smp. but for 343ppd, it doesn't seem worth it. my 1950gt in my old sempron pc was turning 550ppd on the old gpu client. it may be the whole affinity changer and smp combo, but the low gpu temps really make me think this thing is not using my gpu's potential. it's like it's puttering along on battery power or something. plus if it's taking 300ppd or more away from my dual smps, then it's a no-brainer. I won't be on it much longer.
http://img181.imageshack.us/img181/2660/80684870zf0.jpg
oh, btw, for those looking for the fahlog.txt file to point fahmon to, it's in your profile/app data. that threw me for a minute, since all the other clients put it in the working app directory...
I'm more of a dual smp fan. If I dedicate a core, that will severely handicap smp, plus I'm not sure how affinity changer would deal with that.. plus fahmon is already showing about a 400ppd smp drop.. if I dedicate a core that will rise to about a 1000ppd smp drop.. not worth it to me.
the same deal with my old 1950gt and the old gpu client, the ppd losses and gains cancel each other out.
I'll let someone with a slower dual core test it out as a dedicated gpu folder, my Q6600 is too efficient as a dual smp machine. my x2 4600 would be a better candidate, but it's not stealing my 2900pro from my main pc. it's on linux using onboard video as a dedicated smp and turns about 1100ppd at 2.6ghz. it's in my fahmon ss above.
but for a test and to get some accurate data, I'll stop my smp's and just let the gpu fold on the 2900pro at 827/874 and post some benches for thread data.
okey doke. killed the smp's and just ran the gpu for a 2900pro bench.
Q6600 3420mhz / 2900pro 827/874 catalyst 8.3
1250ppd and uses 100% of one core (25% cpu on a quad). temps are still low for the gpu, so I guess it really doesn't push it too much.
http://img186.imageshack.us/img186/2908/11131386lc2.jpg
They got this out incredibly fast. Next big thing is to utilize both gpus on 3870x2. So, it's like crossfire computing or making a tool that has control over the PLX chip to disable cf (to run 2 clients)?
...hmm, waiting for those "all in 1" times. Actually it feels kind of bad, because this is far from an energy-efficient way to make things better. I'm talking about crazy cpu usage for kind of nothing. Well... I guess this has been said before (haven't read these folding forums): a greater cause justifies my electricity bill ;) ...though my wife may disagree...
I have a 2900XT with 1GB GDDR4 clocked @ 800/1000
and an AMD 64 @ 3500 MHz.
[23:26:10] Completed 60%
[23:26:59] Completed 61%
[23:27:48] Completed 62%
[23:28:37] Completed 63%
[23:29:27] Completed 64%
I get around 49-50 seconds per 1%
Dunno how good that is, just for information.
Gius^^^^^^^^^ What Cat version are you all using???
fixed cat 8.3
Cat 8.3 is a must, I heard. I am using it.
Yep. Thought that was the issue. I was on Cat 8.1 LOL
I just read this on fah forum... for those interested in running smp with gpu2..
Quote:
Here's the best config I found for GPU2 + SMP on a quad core machine :
- I run one SMP client, and I answer "idle" when it asks me if I want idle or low priority (advanced options).
- I run one GPU client, and I select "Slightly higher" for Core priority parameter.
With this configuration, Fahcore_11 uses one entire core, and the four Fahcore_a1 threads use the three remaining cores.
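If you'd rather enforce that quoted setup on already-running cores than re-answer the installer prompts, here's a minimal sketch using Python with the psutil package (an assumption; any process tool does the same job). The core names come from the quote above; mapping "slightly higher" to Windows' below-normal priority class is my reading of the client option, not something the quote spells out.
Code:
import psutil

# Priority mapping mirroring the quoted advice: SMP cores at idle,
# the GPU2 core slightly higher. Windows-only psutil constants.
PRIORITIES = {
    "FahCore_a1.exe": psutil.IDLE_PRIORITY_CLASS,
    "FahCore_11.exe": psutil.BELOW_NORMAL_PRIORITY_CLASS,
}

def apply_priorities():
    """Apply the idle / slightly-higher priorities to any running fahcores."""
    for proc in psutil.process_iter(["name"]):
        priority = PRIORITIES.get(proc.info["name"])
        if priority is not None:
            try:
                proc.nice(priority)
            except psutil.AccessDenied:
                pass  # may need an elevated prompt for another user's process

if __name__ == "__main__":
    apply_priorities()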
I'd try 2x SMP on idle and 1x GPU2 on slightly higher. Or even 2x SMP plus 2x GPU2 if you've really clocked it well or it's a 100% cruncher.
Q6600 3420mhz / 2900pro
TOTAL PPD COMPARISON for folding with just 2 smp's or 2 smps+gpu2.
notes*
-running 2 smps with gpu2 on a higher priority core. (The higher priority core selection only assigns 9-17% cpu, instead of a true dedicated core which is 25% on a quad, so I see slightly less gpu ppd than when I run the gpu alone in the post above for 1250ppd. The gpu is turning 1000-1200ppd on the slightly higher priority setting, depending on where the cpu utilization fluctuates on the core assigned to gpu2 - it seems to jump around between 9-17%.) Conclusion: the ~10% loss of a cpu core really only accounts for less than a 100ppd loss on the gpu2 ppd. Another note: my 2900pro OCs didn't make a difference in frame times; stock clocks produced the same frame times as OC clocks on the gpu.
but for a quick summary of the below data:
2 smps with affinity changer by themselves = 4253ppd
2 smps (AC) and GPU2 set with a higher priority core = 4343ppd
in conclusion for me, it doesn't make any noticeable difference in maximizing ppd by running all 3 clients. they tend to cancel each other out concerning total machine ppd production. anyway, that's what I thought would happen, just needed to see it in action and make sure I wasn't dreaming...
*also note this is just a window of one test; I'm sure it goes up and down as wu's always do. in fact as I was typing this one of my smp clients took a big frametime hit and went down to 738ppd, probably something to do with AC; looks like one smp started hogging cpu time, and one was getting neglected.. it'll pick back up once AC catches on to it and moves priority to the other client.
Anyway, here's the DATA I snapped from the test:
2 SMP's (AC) without GPU:
Project : 2653
Core : SMP Gromacs
Frames : 100
Credit : 1760
-- X4-A-Vista-Q6600-3.4 ghz --
Min. Time / Frame : 11mn 41s - 2169.24 ppd
Avg. Time / Frame : 11mn 50s - 2141.75 ppd
Cur. Time / Frame : 12mn 03s - 2103.24 ppd
R3F. Time / Frame : 11mn 56s - 2123.80 ppd
Eff. Time / Frame : 12mn 10s - 2083.07 ppd
-- X4-B-Vista-Q6600-3.4ghz --
Min. Time / Frame : 11mn 52s - 2135.73 ppd
Avg. Time / Frame : 12mn 00s - 2112.00 ppd
Cur. Time / Frame : 12mn 31s - 2024.82 ppd
R3F. Time / Frame : 12mn 23s - 2046.62 ppd
Eff. Time / Frame : 13mn 00s - 1949.54 ppd
2 smp's with GPU2 using higher priority on 1 core:
Project : 2799
Core : Unknown
Frames : 100
Credit : 97
-- X4-GPU 2900pro --
Min. Time / Frame : 1mn 06s - 1269.82 ppd
Avg. Time / Frame : 1mn 13s - 1148.05 ppd
Cur. Time / Frame : 1mn 19s - 1060.86 ppd
R3F. Time / Frame : 1mn 18s - 1074.46 ppd
Eff. Time / Frame : 17mn 23s - 80.35 ppd
Project : 2653
Core : SMP Gromacs
Frames : 100
Credit : 1760
-- X4-A-Vista-Q6600-3.4 ghz --
Min. Time / Frame : 20mn 10s - 1256.73 ppd
Avg. Time / Frame : 20mn 10s - 1256.73 ppd
Cur. Time / Frame : 20mn 10s - 1256.73 ppd
R3F. Time / Frame : 20mn 10s - 1256.73 ppd
Eff. Time / Frame : 12mn 27s - 2035.66 ppd
-- X4-B-Vista-Q6600-3.4ghz --
Min. Time / Frame : 13mn 01s - 1947.04 ppd
Avg. Time / Frame : 13mn 04s - 1939.59 ppd
Cur. Time / Frame : 13mn 07s - 1932.20 ppd
R3F. Time / Frame : 13mn 07s - 1932.20 ppd
Eff. Time / Frame : 12mn 54s - 1964.65 ppd
Eh, don't get too hung up on the PPD from this very early look at the new GPU2 beta client. And as you've seen, these are very small test WUs. Larger WUs will perform better, IMO. Also watch for performance tweaks as new beta revs of the client are released. IMO, the PPD will go up. :)
Well as it stands this is virtually a new CPU client. I'm getting about 60sec dead per % at 3.6 E6850. Changing the Card clocks does nothing from 743-900
that makes sense, 7im... the 97 point unknown wu's definitely act differently from the 330 point wu's that the 1950's fold...
the 97 point beta's surprised me when load temps didn't look like the card was loading up, and oc clocks didn't affect frame times from stock clocks.
I'm sure the real production wu's will be different.. or at least hope they will once this client gets transferred to the main client dl page..
I agree Mike. I reckon the best is yet to come. I hope to see my card warp with pressure.
What I am wondering (I haven't read the official forums, so maybe I'm so far off track I can't be saved anymore): seeing the low temps and all, it seems like the cards aren't really working hard. I don't have an R6xx gpu, but do they still have 2D clocks? Could it be they're not engaging those? Because I can't believe load temps in the 50s if it's really flexing its muscles :S
I read 3d is forced... kinda like sse is on smp...
but, yeah, I agree the 97 point wu's are not pushing the cards at all. even OCing doesn't improve frame times. so it's almost like these wu's are beta or dummy wu's; not really processing anything, just going through the motions and counting steps. otherwise OCs would improve frame times and the temps would be climbing and hitting load temps.
Maybe it's time for me to make a pro-motivation post and encourage you all to keep it going with this very first open beta test phase, and for god's sake report problems at the official forums (oh well, that could stretch to any client). It's very important I think, and thanx 7im for keeping an eye on us ;)
So I do not have anything else to do since I don't own any ATI card :shrug:
Fold on.
Just an observation outta curiosity.
the current ATI cards run at much much slower MHz when in 2D mode.
meaning, if the 3D switch isn't flipped, you can set as many 3D MHz as you like, but you're probably still running in 2D mode.
which for ATI is usually around 300 MHz.
you might try forcing 3D mode to work, by monitoring your MHz with, say, Rivatuner.
see if that explains why some people are getting such low points.
of course, this is just a theory, since I don't actually know how this is working.
When I OC in either ATI OverDrive or in Rivatuner, Rivatuner reads my clocks as the ones I set the OC as when F@H is running.
Are you guys using Cat 8.3 or 8.3 hotfix?
Hi guys,
Here's an early run on my sig rig. Cat 8.3 (No hotfix) Vista X64 SP1. I've HR-03's on the 2900's and they rarely go over 50c when gaming so 44,45c seems right. Q6600 @ 40% load. Azureus likes to swallow 1.5GB of RAM on me so the RAM usage isn't caused by the client.
Right. Lurker's clocks are 3 GHz. I reckon if you got them up, that time would change between steps.
PS: What part of Ireland are you from?
I wish I could up the clocks, anything over 333 FSB causes it to become unstable. Definitely not stable enough for folding. Curse this 975x chipset :p: I'm in Dublin BTW.
I've just noticed that it saves the logfile to C:\Users\<user>\Appdata\Roaming\Folding@home-gpu. - That's FAHmon back up and running for me. Now to disable crossfire and run 2 gpu clients.
Edit: 2 clients up and running, it's only running 2d clocks apparently. No increase in 3d clock usage according to Riva.
@Lurker: have you tried some 1.4 volts for the G0 and running the mems 1:1? ~380 fsb depends of course on your CPU cooling.
Yeah, I've tried just about every bios/setting/voltage combo possible. This board doesn't like quads running over 333fsb. I can bench at around 360 x 9 1:1 but as I said anything above 333 is unstable, especially for folding :( . Core2's can reach 400-420fsb on the mobo. Though I think I'd be better off upgrading my mobo to an X48 :D rather than slapping an E8500 in my current board.
I think an x38 with your current DDR2 will be sufficient :p
I get more PPD when I set the FahCore_11.exe affinity to a SINGLE CORE. Unfortunately you have to redo that at the beginning of every work unit (an affinity changer program for that would be handy). Try it out and see what type of PPD increase you get.
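On the "an affinity changer program for that would be handy" point: here's a minimal sketch of what such a watcher could look like, again assuming Python with psutil. The FahCore_11.exe name comes from the posts above; the poll interval and which core to pin to are illustrative guesses.
Code:
import time
import psutil

TARGET = "FahCore_11.exe"   # GPU2 worker process
CORE = [0]                  # pin to the first logical core; pick any single core

def pin_gpu_core(poll_seconds=30):
    """Re-apply single-core affinity whenever a fresh FahCore_11 shows up."""
    while True:
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == TARGET:
                try:
                    if proc.cpu_affinity() != CORE:
                        proc.cpu_affinity(CORE)  # new WU = new process, re-pin it
                except psutil.NoSuchProcess:
                    pass  # process exited between listing and pinning
        time.sleep(poll_seconds)

if __name__ == "__main__":
    pin_gpu_core()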
Asus Rampage X48 is ready to roll with DDR2 :D :D :D
Looks like I'm staring down the barrel of a CPU bottleneck. The default 2D clock on my cards is 507 MHz. I overclocked the 2D clocks with AMD GPU tool to 750 MHz and it didn't make a bit of difference to PPD.
Also can't use ATITool on Vista x64, it's seriously fooked. The gpu client gets an EUE if I force 3D clocks in ATITool with Vista - it's crashed my system a couple of times too.
This Beta certainly looks promising though... seen this on the folding forums
http://foldingforum.org/viewtopic.php?f=10&t=2020
toTOW wrote:
Number from HD3870.
Q6600 @ 2.4 GHz (266 * 9), GPU @ 780 MHz : 1310 PPD
Q6600 @ 3.4 GHz (485 * 7), GPU @ 780 MHz : 1860 PPD
Q6600 @ 3.4 GHz (485 * 7), GPU @ 850 MHz : 1860 PPD
More numbers with the 2900XT ...
Dual Opteron 2212 @ 2.7 GHz : 1350 PPD (with GPU @ 739 and 847 MHz)
Q6600 @ 3.4 GHz (485 * 7), GPU @ 739 and 847 MHz : 1860 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 739 MHz : 1860 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 847 MHz : 2100 PPD
Q6600 @ 3.8 GHz (475 * 8), GPU @ 860 MHz : 2100 PPD
Using cat 8.3s on XP SP2 with an E6850 @4ghz I get roughly 2090ppd from my 2900XT set to 820/945 in ati tool
I was right lol :D
I'll try and put something together.
Had a more optimistic message here but :mad: :mad:
The code looked simple but I'm getting permission errors here when trying it on my smp client, so that will happen with fahcore_11 too, I guess.
that's exactly what I thought was happening.
since these aren't 3D applications, they run at the same speed as the desktop.
so they aren't getting their full potential.
things like GPU-Z aren't good for verifying core speeds.
neither is Catalyst Control Center, or ATITool.
you need something that does that in real time, which is why I suggest Rivatuner.
there are various methods for overclocking ATI hardware.
but few I can recall well for the 2900XT.
I've been using Rivatuner to OC my 3850 and it keeps 2d and 3d clocks the same for me.
I just flashed higher 2d (slightly higher then default high 3d) clocks into my 3870x2. Of course now my pc consumes more power in desktop use even without gpu client.
Hi folks,
Been running 18 hours stable with 2 gpu clients and a console client. It's putting out 1935 PPD, which isn't too bad I think for Vista 64.
I left the gpu client running for a few hours with 2d clocks @ 700Mhz but it doesn't make a blind bit of difference to PPD.
Hopefully someone out there is smart enough to get 3d clocks running in Vista 64 and the PPD should hopefully improve a bit. :)
I crunch WCG on my Q6600s, but both my rigs have Radeon 3xxx series cards (two 3870s and one 3850). How will running this affect my WCG points, as it is so CPU dependent?
So do you have some clock speeds and frametimes?
Did it improve frame times?
Read the posts above, flash 2d clocks the same as 3d clocks or use RT to set them the same, and please post your frametimes before/after and maybe temps before/after?
The GPU client needs a dedicated cpu core, so WCG and the FAH GPU client might work, but both projects will take a big hit as opposed to running them separately.
I get 2095.20 PPD for the 2799 wu with the specs below. Also, according to rivatuner, my card is correctly using 3d clocks while crunching.
Also, people may want to know that using my HD3850 at 864/999 with the cpu at 3.0 GHz I got ~1611 PPD. With the cpu at 4.05 GHz I got the PPD above.
This is all using a single gpu2 client.
Well, you could either set BOINC to use only 3 cores, or have it try to share that 4th one and see what happens. As BOINC just runs a separate work unit on each core it might not be so bad; at worst you'd probably only get 3/4 of your normal WCG output, but at the same time you'd be getting F@H points. I'd give it a try :up:
Yeah, I guess I was only thinking in terms of one video card per machine, brain fart :yawn:
It's pretty close, 5/8 and 3/4 or 6/8, but I thought I should clarify. Running BOINC on 5 cores should leave the 3 spare ones dedicated to gpu folding, so it could be worth it if he's willing to let WCG slip a bit (his call lol).
Still pretty strange; PS3 and gpu folding seem to be quite on par at the moment, and it's really only the smp client which gets the highest ppd. Was hoping with the new approach the gpu client would settle somewhere in between the PS3 and smp with points.
http://folding.typepad.com/news/2008...e-gpu2-co.html
Quote:
"There have been some misunderstandings on how the GPU2 core works. In particular, for small proteins like villin on GPU's with large number of stream processors (SP's) like the 3850 or 3870, the protein is too small to use a larger number of SP's unless the CPU is very fast. Some people have guessed that there is some internal SP limit. This is incorrect; the problem is that small proteins can't be parallelized amongst a large number of SP's.
We are working to release larger proteins (about 2x the number of atoms) as they are more interesting scientifically and use the GPU's (even the high end ones) much closer to 100%. The exciting part for us is that the larger proteins run at almost the same speed as the smaller ones on GPU's (whereas on CPU's, they're 4x slower); this is where the GPU2 code should shine. In parallel, Mike Houston at AMD is working to optimize CAL such that it has lower CPU overhead.
For now, we're pushing out villin WU's as a test (good to know that the code is working well), but we expect the larger WU's to be going out soon (say a week or two, pending internal testing)."
End quote
I get the same PPD at 2D clocks as well and it saves A TON of power on this 2900XT of mine. Basically on my slow dual core (Opty 185), I have improved my PPD by 300 points while decreasing my power usage by 20watts compared to the SMP client. The GPU client is economical in my particular case.
Unfortunately for any 65 or 45nm Intel quad core user, it is not economical. A quad core puts out more PPD and uses less power than a GPU. I put a post up on the F@H forums asking for an adjustment of the GPU work unit points. 97 points per WU isn't worth it for most users, so they go back to using the SMP client; however, the GPU WUs are very important for the F@H project. Because of this, I think we should get more points per GPU work unit. I'd like to see 150 to 200 points per GPU WU.
Didn't you read the post above yours? I think your wish will come true (but still crossing my fingers when I say that.. I should say: I hope the awarded points will change with the newer proteins as well, even without the mentioned speed increase these bigger proteins will have due to increased parallelization).
:yepp:
AFAIK you need to dedicate at least 30% CPU to the GPU client in 3D mode or it may error and/or produce low results; it'll use around 25-28% of 4 cores with a stock 2.4 GHz Q6600 or 2.5 GHz 9850BE [tested]. My PPD for 13 hours with only a 2600XT is 986 ATM. GPU fan is 45-7% load at 64C and low CPU/RAM clocks. 96-99% GPU stream utilization though.
http://img236.imageshack.us/img236/7587/testhk3.th.png
Better hardware and clocks should get higher numbers. I'm still testing the low end yet. After 24 hours I'll turn CPU/MEM clocks up to the stable settings of >3700/550 and see what it gives, then run the SMP client in combo.
Why use confusing terms like 30% of the cpu in 3d mode or 25-28% of a quad?
First of all, it cannot use more than 25% since it's a single thread which occupies one core, so all your numbers are off anyway.
Way to post something which is entirely not useful..
If you have both a q6600 and the 9850be why didn't you post comparison shots between them? Much more interesting :yepp:
:shrug:
Task Manager CPU usage is not confusing, they are a common reference with home systems.
You're wrong. If I set it to one core PPD falls to 972.
Quote:
First of all, it cannot use more than 25% since it's a single thread which occupies one core, so all your numbers are off anyway.
EDIT: 1-core affinity set, max CPU usage seen as 25% consistent, PPD has been consistent 986 before now but within 5mins fell to 974
http://img225.imageshack.us/img225/203/test1zf8.th.png
This is exactly what you're doing here. Maybe you didn't understand the purpose of the post, in which case it would be better for you to ask before going off on a tangent. It was to show what a 2600XT can do with it. A few pages back someone suggested only 29xx or above cards are supported, which was also not true.
Quote:
Way to post something which is entirely not useful..
If that's what you wanted then you can easily ask for it in a decent manner. But that's useless in a thread for a GPU client since I implied the PPD and the core usage are the same for both the Q6600 and 9850BE at stock. :shakes:
Quote:
If you have both a q6600 and the 9850be why didn't you post comparison shots between them? Much more interesting :yepp:
As for the "not loading GPU" possibilities, that is also not true. The AC watt system, the DC watt tool, as well as the P-Tuner DC measurements all show the same power draw from the system as I get if I run ATITool at full GPU load, and the CPU core usage is exactly the same as what I get if I run the ATITool GPU test.
Where am I wrong then? Please, I just said it can't be more than 25%; I never referenced ppd besides asking you to post a comparison of the Phenom and the C2Q. You still missed the entire point, trying to show single screenshots claiming something. It's a single thread, period. I know you're better than this; you know a single thread can only use one core, so what are you trying to do?
Please, you're the one withholding valid information (like screenshots of direct Phenom and C2Q comparisons) and instead spreading things like over-25% cpu utilization, which is just impossible.
And 'the meaning of the post' got pretty lost in wrongly presented utilization numbers. About the 2600XT's folding capabilities: the current WUs are of small proteins which aren't parallelized over all available SPs, something the coming WUs will be. At that point the 2600 will fall drastically behind the cards with more streaming processors.
Instead of implying, show screenshots. Pretty please, with sugar on top?
Don't think this was aimed at me so I won't respond.
Edit:
Got a pm asking me why I seem to be picking a fight. In fact I'm not; I just don't like posts like yours where you're so unclear and leave so much open for interpretation. It's not really useful information then, but that doesn't give me the right to 'demand' you post other things, so I apologize for my attitude, and I edited this post accordingly. I appreciate you tinkering with your hardware and putting some effort into the folding project; I'd just like you to post your findings in a clearer manner.
KTE, your X2600 behaves a little differently than the others. My X2900XT, I know, only loads up slightly with these WUs. However, as the Alien from the Netherlands said, once the bigger WUs come out we'll see high end cards getting rather warm.
that's why I've held off on testing this client after my 3 runs posted on earlier in this thread.. the behavior of the 97point beta wu's probably don't reflect an accurate data set of what is to come when real production wu's get here.. these beta units are nice to play around with, but I wouldn't put too much emphasis on getting nitty gritty with them..
7im from fah posted a couple pages back on this..
74% load @ 3 GHz and 95% @ 4.05 GHz. I have no other crunching programs running, so setting affinity shouldn't make a difference. These PPDs are purely through a single gpu2 client.
I don't usually fold and I only used this to see how my HD3850 would perform. Because my HD3850 is overclocked and voltmodded and on the brink of instability, I don't think putting it under 24/7 load would be very good for it, even though temps don't go above 55C.
Ghost! How do you tell what load is on the GFX?
just installed my hd3870, first numbers are here, running gpu2 client only
[01:22:47] Completed 1%
[01:24:07] Completed 2%
[01:25:26] Completed 3%
[01:26:46] Completed 4%
[01:28:03] Completed 5%
[01:29:21] Completed 6%
[01:30:39] Completed 7%
[01:31:57] Completed 8%
[01:33:14] Completed 9%
[01:34:32] Completed 10%
[01:35:51] Completed 11%
ppd is 1074
this is the dual opteron in my sig, only it's back to all stock. i let the vapochill go more than a year ago, and my dual stager i sold today. i just had my appendix pulled and need to pay for the operation; couldn't use the 2 stager anyway, cold bug. i'll play with the clocks what little i can, see what performance we can get out of it. anyone know how much ram speed affects this client?
Been running a few /gpu/ clients...
This one shows the most ppd:)