I used to own a Cuda, great American muscle car!
call it GPi!
Has there been any honest, educated speculation as to the true power of CUDA vs today's dual and quad core CPUs?
I read the quick article about Photoshop running with the GPU plugin, and although nothing technical was given, the one-line statement they gave makes it seem like a GPU is light years beyond today's CPUs.
If that's the case, then why hasn't AMD or Intel made a "CgPU"? I mean, if AMD/ATI want to win the war, make the damn next thing an ATI CPU.
If the GPU is so powerful... then wtf are they waiting for?
"These are the rules. Everybody fights, nobody quits. If you don't do your job I'll kill you myself.
Welcome to the Roughnecks"
"Anytime you think I'm being too rough, anytime you think I'm being too tough, anytime you miss-your-mommy, QUIT!
You sign your 1248, you get your gear, and you take a stroll down washout lane. Do you get me?"
Heat Ebay Feedback
And what about SLI-Pi? Or X-Pi...
Given the piss-poor multi-core support Pi currently has, if GPUs are massively multi-core, do you really expect it to work all that well? And if it does, then fix the damn PC version to be truly multi-core capable, not just 80% one core and 20% the other.
"These are the rules. Everybody fights, nobody quits. If you don't do your job I'll kill you myself.
Welcome to the Roughnecks"
"Anytime you think I'm being too rough, anytime you think I'm being too tough, anytime you miss-your-mommy, QUIT!
You sign your 1248, you get your gear, and you take a stroll down washout lane. Do you get me?"
Heat Ebay Feedback
Ever seen Folding@Home on ATI GPUs? There ya go. They have released toolkits to do so. The thing with GPUs is that they are very powerful, very parallel "CPUs". They are only very good at doing a few specific tasks, most of the time with images and image-related things. The F@H GPU client can't crunch all the same work units the CPU version gets, because of limitations in what the GPU can do.
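To picture the kind of work those very parallel "CPUs" are good at, here is a minimal CUDA sketch (a hypothetical brightness-scaling kernel, not Folding@Home's actual code): every thread handles one pixel independently, which is exactly the image-style workload described above.
[CODE]
// Minimal sketch of the kind of work GPUs are good at: the same simple
// operation applied independently to every element (here, scaling pixel
// brightness). Hypothetical example only, not Folding@Home's real code.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_pixels(float* pixels, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per pixel
    if (i < n)
        pixels[i] *= factor;                        // no dependence on any other pixel
}

int main()
{
    const int n = 1 << 20;                          // ~1M pixels
    float* d_pixels = nullptr;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover every pixel.
    scale_pixels<<<(n + 255) / 256, 256>>>(d_pixels, n, 1.5f);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    printf("done\n");
    return 0;
}
[/CODE]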
I can see Intel shaking in their boots.
lol, what exactly is the point?
Pi became a common bench because it turned out to indicate gaming and memory performance with only a quick benchmark run. I'm curious about this, but just running Pi on a GPU might be pretty pointless and have no relation to real-world performance. It might scale when real apps don't scale anymore, or stop scaling when real apps still scale, or scale differently, etc.
But a new bench is always exciting, and I'm looking forward to more CUDA stuff!
I'm actually surprised there is barely any app that uses a GPU to crunch data even though ATI and NVIDIA have both supported this for years... I know it's hard to code, but still, the performance you can tap into with such an app is quite big, so it should be worth the effort?
I'm still waiting for a QMC BOINC CUDA client, please somebody make that happen!
First F@H, and now this. NVIDIA's marketing team works very, very well indeed.
Let's just hope they'll deliver.
-tam2-
Before everyone thinks GPGPU is the future, I would like to point some things out:
1) GPUs are good at calculating parallel stuff.
2) GPUs truly, truly suck when it comes to branching, because when you branch you stall every other shader (see the sketch after this post)...
3) CUDA is slower than when you implement a similar thing in DirectX; you can program GPUs with DirectX as well instead of using CUDA.
And ATI has something like CUDA, called CTM (Close To Metal); too bad you have to program in assembler with CTM, while in CUDA you can program in C & C++.
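A hedged, kernel-level sketch of point 2, using two hypothetical kernels: when threads inside one warp take different branches, the hardware runs both paths back to back with part of the warp masked off, while the branch-free version keeps every thread busy.
[CODE]
// Sketch of warp divergence: when threads in the same warp take different
// branches, the hardware executes both paths one after the other, masking
// threads off, so the divergent code effectively serializes.
#include <cuda_runtime.h>

__global__ void divergent(float* out, const float* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Odd and even threads in the same 32-thread warp take different paths,
    // so the warp runs BOTH the if and the else body (each half idle while
    // the other half executes).
    if (i % 2 == 0)
        out[i] = in[i] * in[i];
    else
        out[i] = in[i] + 1.0f;
}

// A divergence-free version: every thread in a warp follows the same path.
__global__ void uniform(float* out, const float* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];
}

int main()
{
    const int n = 1 << 20;
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    divergent<<<(n + 255) / 256, 256>>>(d_out, d_in, n);  // both paths per warp
    uniform<<<(n + 255) / 256, 256>>>(d_out, d_in, n);    // single path per warp
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
[/CODE]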
[QUOTE=GoThr3k;3015331]And ATI has something like CUDA, called CTM (Close To Metal); too bad you have to program in assembler with CTM, while in CUDA you can program in C & C++.[/QUOTE]
So ATI F@H is programmed in Assembly? Anyways, that keeps the nubs out of programming!
I have seen a few posts on it (the folding client), but to me that doesn't prove a lot; in one specific area, yes.
And yes, I understand the GPU being image/graphics limited; my comment was one more of hope than misunderstanding or disbelief.
BUT taking that same knowledge and mentality and making a CPU can't be so life-altering that Intel/AMD can't do it.
I fully understand and appreciate the technology and how different chips are designed to do different things, but we do not live, and never have, on a flat plane... the world is round... believing that we can't do something will force us to stay in the dark ages of technology.
And yes, I have read a few tidbits here and there about tapping into the GPU for extra computing power.
Let's just hope they can expand on that in our lifetime, in such a way that everyday computing graduates to the next level.
"These are the rules. Everybody fights, nobody quits. If you don't do your job I'll kill you myself.
Welcome to the Roughnecks"
"Anytime you think I'm being too rough, anytime you think I'm being too tough, anytime you miss-your-mommy, QUIT!
You sign your 1248, you get your gear, and you take a stroll down washout lane. Do you get me?"
Heat Ebay Feedback
Interesting! Hope this will work well and show something interesting.
There is more and more software using GPGPU to help the CPU under heavy load, like the Adobe Premiere Pro plugin for HD video, Photoshop CS4 (S Next), Folding@home, etc...
Maybe the next version of SiSoft Sandra will have a measurement for such "GPGPU" calculation?
[QUOTE]And ATI has something like CUDA, called CTM (Close To Metal); too bad you have to program in assembler with CTM, while in CUDA you can program in C & C++.[/QUOTE]
CTM allows both low- and high-level programming, as opposed to high-level only for CUDA.
One very important thing to note:
You will have to have (at least) two NVIDIA GPUs in your system to run SuperPI or any other CUDA application on a GPU for more than 5 seconds.
Primary display adapter in Windows cannot work under full load for more than 5 seconds without the task being aborted by the operating system because the OS assumes that the driver has got stuck.
Such a limitation does not apply to the secondary adapter. Therefore, you will be able to use only secondary adapter for running any CUDA applications which take more than 5 seconds to complete their workload.
That means you will have to invest in a dual PCI-Express x16 mainboard and another NVIDIA card, even if it is only a 9600GT (or cheaper) for the primary display adapter.
And while you are already investing, why not go SLI? That way NVIDIA sells two cards and a mainboard chipset. Nice way to boost sales.
Some of you already have SLI, and some may want to get it because of this "exciting" announcement so a word of warning to you:
1. You won't be able to run CUDA applications with SLI enabled EVER. Each CUDA application must manage multiple GPUs on its own.
2. Multi-GPU CUDA applications require that each GPU thread be associated with a distinct CPU thread. It means that for maximum performance on a Quad GPU setup you would need Quad-Core CPU as well.
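A minimal sketch of the one-CPU-thread-per-GPU pattern point 2 describes (the kernel, buffer sizes, and thread layout here are placeholders; newer CUDA releases relax the one-thread-per-device rule, but the sketch follows the model as stated):
[CODE]
// One host thread per GPU: each CPU thread binds itself to a different
// device with cudaSetDevice and then drives that device on its own.
// The kernel and data sizes are placeholders for illustration only.
#include <cuda_runtime.h>
#include <thread>
#include <vector>
#include <cstdio>

__global__ void busy_kernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 2.0f + 1.0f;
}

void worker(int device)
{
    cudaSetDevice(device);                      // bind this CPU thread to one GPU
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    busy_kernel<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    printf("device %d finished\n", device);
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    std::vector<std::thread> threads;
    for (int dev = 0; dev < count; ++dev)
        threads.emplace_back(worker, dev);      // one CPU thread per GPU
    for (auto& t : threads)
        t.join();
    return 0;
}
[/CODE]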
Ewwwww....
That's not good...
Pi doesn't run at full core load though, so maybe we'll be OK (at least it doesn't on the CPU).
Still an interesting twist.
That would be a neat tool, since you touched on it: a GPU-based task manager... something to show GPU load, processes and memory usage.
Someone get on that, lol...
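A rough sketch in that direction, limited to what the stock CUDA runtime exposes (per-device name and free/total memory via cudaMemGetInfo; no per-process breakdown or load figures):
[CODE]
// Tiny "GPU memory monitor" sketch: list each CUDA device with its name
// and free/total memory. This only covers memory, not load or processes.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);

        printf("GPU %d (%s): %zu MB free of %zu MB\n",
               dev, prop.name,
               free_bytes / (1024 * 1024), total_bytes / (1024 * 1024));
    }
    return 0;
}
[/CODE]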
"These are the rules. Everybody fights, nobody quits. If you don't do your job I'll kill you myself.
Welcome to the Roughnecks"
"Anytime you think I'm being too rough, anytime you think I'm being too tough, anytime you miss-your-mommy, QUIT!
You sign your 1248, you get your gear, and you take a stroll down washout lane. Do you get me?"
Heat Ebay Feedback
SuperPi on CPU: battle of the seconds.
SuperPi on GPU: battle of the milliseconds.
So who'll break 9ms?
Is calculating pi a process that's easily multi-threadable?
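For some algorithms, yes: every term of a series like the Leibniz series for pi is independent, so the terms split across threads trivially. The sketch below is a hypothetical CUDA illustration of that idea, not how SuperPi itself works (SuperPi is a single-threaded arbitrary-precision program), and its result is only a crude low-precision approximation.
[CODE]
// Toy parallel pi: each GPU thread sums its own slice of the Leibniz series
// (pi/4 = 1 - 1/3 + 1/5 - ...) and the host adds the partial sums.
// Not SuperPi's algorithm; only shows that the terms are independent.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void leibniz_partial(double* partial, long long terms_per_thread)
{
    long long tid   = blockIdx.x * blockDim.x + threadIdx.x;
    long long start = tid * terms_per_thread;
    double sum = 0.0;
    for (long long k = start; k < start + terms_per_thread; ++k) {
        double term = 1.0 / (2.0 * k + 1.0);
        sum += (k % 2 == 0) ? term : -term;
    }
    partial[tid] = sum;                 // each thread's independent slice
}

int main()
{
    const int blocks = 256, threads = 256;
    const int n_threads = blocks * threads;
    const long long terms_per_thread = 100000;

    double* d_partial = nullptr;
    cudaMalloc(&d_partial, n_threads * sizeof(double));
    leibniz_partial<<<blocks, threads>>>(d_partial, terms_per_thread);

    double* h_partial = new double[n_threads];
    cudaMemcpy(h_partial, d_partial, n_threads * sizeof(double),
               cudaMemcpyDeviceToHost);

    double pi = 0.0;
    for (int i = 0; i < n_threads; ++i)
        pi += h_partial[i];
    pi *= 4.0;

    printf("pi approx = %.10f\n", pi);
    cudaFree(d_partial);
    delete[] h_partial;
    return 0;
}
[/CODE]
The hard part of a real SuperPi-style run is the arbitrary-precision arithmetic, which does not split across threads as neatly as this toy series.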
That thing about needing a second GPU kinda killed my interest at the last second. Hope they can make a workaround and we see good software.
Kinda strange you need a second GPU; some of my comrades using CUDA don't need that.
Can't see why you'd need a second GPU - just don't stress it 100%, it looks like.
If you need a second GPU to run any GPGPU application, I wonder how the ATI F@H client does it, or how folding on NVIDIA will work.
Can't you just get a cheapo PCI card for your primary display?