Even in the best-case scenario for Nvidia, the 5870X2 will be faster than a GTX 380. But faster isn't always better. I'd much rather get a single-GPU 380 than a 5870X2.
I think GT300 will be around $500 and 5870x2 around $600
Okay, Intel is working on Larrabee which, as I understand it, will be more of a GPGPU than just a GPU. Now why would a leading CPU maker get into the GPGPU market if they already have a working 6-core, 12-thread processor that will be available at retail in a couple of months? Please keep in mind, I'm only asking to learn more.
Quote:
Bro, you do understand that PhysX can and does run on the CPU! It doesn't require an Nvidia GPU... or at least it didn't, until Nvidia bought Ageia, decided to use idle GPU resources to speed up PhysX, and then marketed THAT ability as revolutionary.
Not in a game environment, but it looks pretty impressive given that the GPU is processing both the graphics and the physics.
Quote:
So, how often do you have idle GPU time when playing a modern game? Show me a PhysX simulation that has 3,500 objects swirling around a tornado on a single Nvidia card...
It won't happen... not even on two Nvidia GPUs, not even on THREE Nvidia GPUs. Go ahead, find a PhysX demo with anywhere near that many real physical objects being moved around, especially ones that move as fluidly as those in the Velocity engine demo.
http://www.youtube.com/watch?v=r17UOMZJbGs
http://www.youtube.com/watch?v=RuZQp...eature=related
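To put the scale of that argument in concrete terms, here's a rough CUDA sketch (entirely made up by me, not PhysX and not the Velocity engine) of what pushing a few thousand point objects around a vertical axis each frame could look like on a GPU. One thread per object, no collisions and no damping, so a real rigid-body simulation is a lot more work than this:

```cuda
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

// Crude "tornado": pull every object toward a vertical axis through 'center'
// and push it sideways so it orbits. One thread handles one object.
__global__ void swirl(Particle* p, int n, float3 center, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float dx = center.x - p[i].pos.x;
    float dz = center.z - p[i].pos.z;

    p[i].vel.x += (dx * 0.5f - dz * 2.0f) * dt;   // inward pull + tangential swirl
    p[i].vel.z += (dz * 0.5f + dx * 2.0f) * dt;
    p[i].vel.y += 1.0f * dt;                      // a little lift

    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Host side: 3500 objects is one tiny launch, far below what the card can chew on.
void step(Particle* d_particles, int n, float dt)
{
    swirl<<<(n + 255) / 256, 256>>>(d_particles, n, make_float3(0.f, 0.f, 0.f), dt);
}
```

The expensive part of a demo like the ones in those videos isn't moving the objects, it's the collision handling between them, which is where the real per-object cost lives.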
Speaking of CPU physics: what does the second CPU test in Vantage actually measure? If the CPU is so much better than the GPU, why does that CPU score jump considerably when you enable the PhysX driver with an Nvidia card installed? Yes, a Gulftown does pretty well with its 6 cores and 12 threads, but why does it need such a high clock speed to do what a GTX 295 can do at a 650 MHz core clock?
If those really are the prices I think I'll be waiting until the next gen before I upgrade.
Especially since DX11 is only now being adopted.
ATI themselves have said the X2 = $499, so even if GT300 is faster it'll be beaten on price/perf. In rough terms:
5870X2 > GTX 380
5850X2 ~ GTX 380
5870 ~ GTX 360
5890 ~ no competition, and I suspect it'll slot in at the 5870's current price point and literally be the best price/perf card, so I'm holding out for it this time :)
$500 for the HD5870X2 and $450 for the HD5850X2? So when they release the HD5890 they'll drop the HD5870 to $350 and price the HD5890 at $400?
Wow, if 5870x2 is $500 then GT300 will have to be priced in the $400-500 range, and I don't think that'll be easy for Nvidia.
I'm still expecting two 5870s to outperform the X2, so ATI doesn't need as strong a lead with the 5870X2 as they would if Nvidia could manage a dual-GPU card out of the gate (which all indications suggest is beyond unlikely, if not impossible, and will remain so for the foreseeable future, e.g. until a die shrink). I personally expect the high-end card to launch at $499 USD MSRP. If it can trail the 5870X2 in most games that have decent multi-GPU scaling, then it will get my attention, since it is sure to face-roll it in games without good profiles / scaling engines.
Because GPUs, by their design, are VERY FAST at dealing with simple operations, and they are massively parallel. If the code can easily be run in parallel then it will be very fast, but that code is usually very simple. One GPU "shader core" is simpler than the FPU in a CPU core.
Problems arise when the GPU has to start dealing with memory mapping, stack management, hardware interrupts, compatibility with a 16- to 64-bit ISA, all the legacy stuff, huge caches, and very agile branch predictors and prefetchers. Add all that to a GPU and voila, you've got a CPU.
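To make that concrete, here's a minimal CUDA sketch (my own toy example, nothing from any real codebase) of the kind of simple, branch-free, embarrassingly parallel operation a GPU is built for, next to the equivalent scalar CPU loop:

```cuda
#include <cuda_runtime.h>

// GPU version: each of thousands of threads does one multiply-add, no real branching.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// CPU version: one core walks the array element by element (ignoring SIMD/threads).
void saxpy_cpu(int n, float a, const float* x, float* y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Typical launch: saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

Each individual "shader core" is doing something an FPU finds trivial; the win comes purely from how many of them run at once.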
...why aren't we using GPUs for general-purpose central processing if they're so awesome? :p:
And since people like to compare GPU flops and CPU flops... someone write a Queens benchmark for the GPU and then we'll talk. It's a very branchy benchmark, and GPUs basically don't like that kind of code.
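For anyone who hasn't seen one, a Queens benchmark is basically this (a quick CPU-side sketch of my own, not any particular benchmark's source): recursion plus data-dependent branching at every step. Mapped naively onto a GPU, neighbouring threads would diverge at almost every one of those branches, which is exactly what SIMT hardware hates:

```cuda
#include <cstdio>

// Can a queen go in column c of row 'row', given the columns chosen so far?
static bool safe(const int* col, int row, int c)
{
    for (int r = 0; r < row; ++r)
        if (col[r] == c || row - r == c - col[r] || row - r == col[r] - c)
            return false;               // which way this goes depends entirely on the data
    return true;
}

// Count all placements by recursive backtracking.
static long solve(int* col, int row, int n)
{
    if (row == n) return 1;
    long count = 0;
    for (int c = 0; c < n; ++c)
        if (safe(col, row, c)) {        // irregular branching, unpredictable recursion depth
            col[row] = c;
            count += solve(col, row + 1, n);
        }
    return count;
}

int main()
{
    int col[12];
    std::printf("12-queens solutions: %ld\n", solve(col, 0, 12));   // 14200
    return 0;
}
```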
Because at certain tasks - i.e. certain mathematical operations - a GPU will blow a CPU into the weeds. With a CPU, those operations are expressed in software and built out of the CPU's generic instructions to achieve the desired effect.
On a GPU, those operations are in the silicon on the chip. In very basic terms, you simply give it the data you want to process and set it off.
It's the same reason why 3D rendering on a CPU sucks - it's very complex and needs to pull together LOADS of instructions to make it happen. On a GPU, the Direct3D and OpenGL operations are expressed in silicon.
So, there is still a market for CPUs. Most of a game's logic will run on a CPU, as will the OS and applications such as MS Word. GPUs cannot do things like that.
As much as they are being put to more general-purpose tasks, it is only because the 3D mathematics they do happens to suit other workloads that we are seeing this hoo-hah over GPGPU.
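As a rough illustration of "the 3D mathematics they do" (my own sketch, assuming a plain row-major 4x4 matrix in device memory): the bread-and-butter GPU job is applying the same small matrix to enormous batches of independent vectors. Rename "vertex" to "sample", "cell" or "particle" and you have most of what GPGPU gets used for:

```cuda
#include <cuda_runtime.h>

// Transform n vertices by one 4x4 matrix (row-major). Every thread does the
// same dozen multiply-adds on its own element - no branching, no dependencies.
__global__ void transform(const float* m, const float4* in, float4* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 v = in[i];
    out[i] = make_float4(
        m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w,
        m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w,
        m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w,
        m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w);
}
```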
It would be interesting to see when (or if) ATI releases the 2 GB version of the HD5870, as IMHO that is the only reason people who aren't convinced the GT300 is coming soon are not purchasing (or preordering) HD5870s. I for one see 1 GB as pointless now; I am currently on a 1 GB GTX 280 and already hit VRAM-limited situations. This GT300 with 2+ GB of GDDR5 sounds like a breath of fresh air. I just hope it lives up to expectations; if not, the new Radeon cards do sound fantastic.
There is no denying, though, that on paper, and from what we have heard about the computing aspects, the GT300 is sounding like another "Voodoo2 or GeForce 8800 GTX".
John
Article on the handicapped versions of Fermi: they will have decreased DP performance and GDDR3.
http://www.xbitlabs.com/news/video/d...ics_Cards.html
Any ideas when they (the crippled Fermi cards) will be out? Because without them Nvidia can't compete with ATI in the mid-performance range while staying profitable, and I can't imagine a GTX 260 or 275 will do much good against the new Junipers without a serious price cut.
How are high margins bad? They will obviously have a crippled version that competes with the 5870, competitively priced of course. If you want the fastest graphics on the planet, you have to realise there are diminishing returns on how much money you spend relative to speed.
Sounds interesting, I really think Nvidia is on to something here.
http://www.fudzilla.com/content/view/15832/34/
With all these features we're seeing with CUDA, I think they really need to start making IGPs that do everything. It would be a great market (use my cheap, otherwise worthless, weak northbridge GPU to run physics, virus scans, video decoding/encoding, video playback, Folding, and other random things that CPUs don't do so well).
Since we're seeing CPU-limited scenarios on the 5870, who thinks the GT300 may be seriously CPU-bottlenecked? It doesn't matter how fast the GPU is if the CPU can't keep up; this could kill any performance gains from the architecture. It's almost to the point that CPUs need a dedicated co-processor.
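A quick back-of-the-envelope on what that bottleneck looks like (the numbers below are made up purely to show the shape of the problem): whichever of the CPU or GPU takes longer per frame sets the framerate, so past a certain point a faster GPU buys you nothing:

```cuda
#include <cstdio>
#include <algorithm>

int main()
{
    const float cpu_ms = 12.0f;                      // hypothetical CPU cost per frame
    const float gpu_ms[3] = { 20.0f, 10.0f, 5.0f };  // slower GPU, faster GPU, faster still

    for (int i = 0; i < 3; ++i) {
        float frame = std::max(cpu_ms, gpu_ms[i]);   // the slower side sets the pace
        std::printf("GPU %2.0f ms -> %.0f fps\n", gpu_ms[i], 1000.0f / frame);
    }
    return 0;
}
```

The 10 ms and 5 ms GPUs both end up at ~83 fps here because the 12 ms of CPU work is the ceiling.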