They cripple the DP compute performance. The SP performance should be nice, comparable to ATI cards. I presume these WUs are DP heavy though, so ATI will indeed be the way to go.
Type: Posts; User: joshd; Keyword(s):
From what I've seen, Win8 benches faster than Win7 in some benchmarks; definitely worth giving BOINC a go with it in case it's faster?
Are you installing from the package manager?
!!
Anyway, we will never be able to "control" the Higgs field.
The Higgs has nothing to do with understanding gravity; it has to do with understanding mass. No, it won't lead to breakthroughs in levitation.
I guess we'll find out in about 8.5 hours...
It doesn't, using DMA simply allows you to maximise your use of the existing PCIe bandwidth. How much bandwidth do you see actually being used in profiling?
OpenCL or CUDA?
OpenCL allows pinned memory allocation, and then transfers happen via DMA. Allocate your buffers with the CL_MEM_ALLOC_HOST_PTR flag. I would presume CUDA provides similar...
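A minimal sketch of what the post describes, assuming a context and command queue already exist; the function name `upload_pinned` and the error-handling omissions are mine, not from the post:

```c
/* Sketch: staging a host-to-device transfer through pinned memory in OpenCL.
 * Error checking is omitted for brevity; a real build links against -lOpenCL. */
#include <CL/cl.h>
#include <string.h>

void upload_pinned(cl_context ctx, cl_command_queue q,
                   cl_mem dev_buf, const void *src, size_t n)
{
    cl_int err;
    /* CL_MEM_ALLOC_HOST_PTR asks the runtime for pinned (page-locked) host
     * memory, so the copy to the device can go over PCIe via DMA. */
    cl_mem pinned = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                                   n, NULL, &err);
    /* Map the pinned buffer to get a host pointer we can write into. */
    void *staging = clEnqueueMapBuffer(q, pinned, CL_TRUE, CL_MAP_WRITE,
                                       0, n, 0, NULL, NULL, &err);
    memcpy(staging, src, n);
    clEnqueueUnmapMemObject(q, pinned, staging, 0, NULL, NULL);
    /* Device-side copy from the pinned staging buffer to the real buffer. */
    clEnqueueCopyBuffer(q, pinned, dev_buf, 0, 0, n, 0, NULL, NULL);
    clFinish(q);
    clReleaseMemObject(pinned);
}
```

On the CUDA side the analogous facility is `cudaHostAlloc` (page-locked host memory), after which `cudaMemcpy`/`cudaMemcpyAsync` from that pointer can use DMA.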
But you can make the manager connect to a remote client and get WUs from there right? Or would it simply be monitoring the WUs running on the remote machine?
If you have your BOINC manager connect to the BOINC server on another machine instead of the default localhost, does it still appear on the stats as a distinct host?
Obligatory full load task manager screenshot, so we can all go "Phwoar, look at that!"? ;)
Nope... if you have a static or "fairly static" IP, and use the public swarms that the site monitors, it will get you bang on. It doesn't claim to do anything that it doesn't actually do.
I dunno...
Surely IPC is not a particularly meaningful metric to take so seriously? Sure the graphs in the article suggest that FX55 has the higher IPC, but you should really be comparing the flagship chips at...
The prototype chips were 32 cores, 1.2GHz, 4 threads per core. They are pretty weak cores, without even out-of-order execution support. I'd imagine they'll be more suited to the sort of code running on GPUs already.
...
Well, I've never seen one big enough to stand on before, that's pretty cool.
He's not the only physicist before Einstein to suggest E=kmc^2 for some k. I think someone proposed that E=(3/8)mc^2 as well.
Einstein derived that E=mc^2 with k=1 specifically, as part of his...
So who do you attribute the first derivation of E=mc^2 to?
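For reference, the k = 1 result the posts above are discussing falls out of the special-relativistic energy-momentum relation (a standard textbook statement, not something from the thread):

```latex
E^2 = (pc)^2 + (mc^2)^2
\qquad\Longrightarrow\qquad
E = mc^2 \quad \text{for a body at rest } (p = 0).
```

Earlier proposals such as E = (3/8)mc^2 arose from pre-relativistic electromagnetic-mass arguments, which is why they produced a different constant k.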
Might be interesting:
http://indico.cern.ch/conferenceDisplay.py?confId=155620
Oh I'm sure... I just thought people might find it interesting.
http://cdsweb.cern.ch/record/489839/files/0103051.pdf
Possibly, if the cards are stock and the CPU is idle? Maybe just run two of them.
I'm going for a pair of 5870s on a Corsair 650.
Well, I picked up a pair of 5870s, at current rates they should pay for themselves in 3 weeks. :D
So the blades mount a shared home dir or whatever over NFS, I guess? Can't you give them a BOINC build each, so they can run completely independently of each other?
I think if you install BOINC on a Kerrighed system it will run a task on each available core. It'd be an SSI system, so it would show up as "one machine".
http://www.kerrighed.org/wiki/index.php/Main_Page
Isn't Rocks for SSI clusters? The head node should "see" the other CPUs as native CPUs AFAIK. Presumably you shouldn't be running a separate instance on each blade; the head node should migrate...