hey.. I miss the days when you had to copy Glide.DLL over in games, not to mention the different versions of Glide. And each game more or less had its own GFX drivers :p:
No matter how you see DirectX, it's also been the saviour.
I am interested in exchanging ideas on this topic, so, if you have something constructive to say, help yourself.
8800GT, X1900GT before that. Mainly MMOs and a few RTS/simulation games like Spore. When I was a little younger, also FPS games. But in short, I never had an issue. Nor did any of my friends.
This is of course with no SLI/CF in mind.
I would consider my friends and me a pretty good testing ground for DirectX/OpenGL, since we download a lot of games and try them now and then. (Yes, it's legal here.)
I just simply don't see the issues with the "average Joe" that doesn't use the latest beta drivers and such.
Well, I think you've got a point there. However, I don't see how introducing a totally new architecture into this market is going to solve this. In fact, maybe, it will make it worse. Introducing a new way of programming for the GPU will only add to the diversity and incompatibility that is already there, not make it disappear. Especially with the current state of things between the dominant computer companies right now. If a kind of honest dialogue would happen between Intel, Nvidia and AMD, even VIA, and most importantly Microsoft when Larrabee enters the market, some sort of cooperation towards a standard could be possible. But with the animosity and wars that are out there, I fear that Larrabee will only make the gaps even wider...
Well, and again, this is my personal opinion, I am not saying that Intel is committed to this, but I am myself convinced that the Larrabee-like cores are like the 387, and in the long term, I'll do all I can to get this into the CPU ...
If you remember well, the 387 was a big debate: did I want to use the i80387, or the Cyrix version ... the AMD version?
http://www.chipdb.org/cat-387-112.htm
http://www.chipdb.org/cat-387-306.htm
http://www.chipdb.org/cat-287-945.htm (only 287) ...
The solution later on was the 486DX (http://www.chipdb.org/cat-486-317.htm )
Again, this is me, not intel ...
it does make a lot of sense to stop the craziness of many, many chips, with incompatibility. Time will tell; I'll put a lot of my time into convincing the world :yepp:
I am starting here... :clap:
of course, I can't speak about our internal discussions, so don't ask me, but this is where I want my PC to go. x86 is the only way to keep legacy.
what makes you think that end users want to go to legacy?
You guys are already late to the party... after CS4 it is reasonable to expect that more and more apps will try to find a way to exploit GPUs via one or another "non-x86" approach...
You will need to provide lots of ISV support and money (or both) to bend things the "legacy" way... I don't say you are not capable of doing it, but you've proved once that an EPIC-sized human and financial force can produce an EPIC fail of something that's completely opposite to market trends... and right now it doesn't seem that the market trend in this age of acceleration is towards legacy!
just my 2 "personal" cents ;)
Remember that you are talking to somebody who works for a company that has the power of God. For all intents and purposes, in the tech world, they ARE God. They are perfectly capable of doing anything they choose. It's not a matter of IF... it's only a matter of *when*, and *how much*.
please ... we are humans, and we try to follow the x86 legacy.
I wish people would not look at us as a BOX with people inside; we are a group of people, like Nvidia or any other company.
I try to keep an Olympic spirit in the competition, and I expect the same from the other guys. At the end, the best solution will win, and I think compatibility with the legacy will be the best way for the consumers.
CS4 is a nice step to use blitters; it has nothing to do with CUDA. They are starting to use the feature we were using on the Amiga 15 years ago ... ou la la, what a revolution ;-) ... kidding!
Seriously, ATi had, a few years back, some Avivo applications. Find them and try them on your new 4870X2 ... they do not work, but you can still boot DOS 3.21 on your Phenom or Nehalem ...
This is the main point I am trying to make: legacy is required to live long in this industry.
This is not going to get solved in a few minutes, nor in a few years, but it is where we have to go. Otherwise, one day, you'll end up with an Nvidia game shop, an ATi one, an Intel one, a Sony one, an Apple one ... Welcome to the world of consumers held hostage by their special flavor of the PC ...
We need competition, but we need it with compatibility and legacy.
Let's try to keep the big money talk and business side aside. I want to speak about the right way to make PCs in the future, to free consumers from driver worries and all of those kinds of stuff.
:shrug: no?
By the way, at least here in Latvia, Intel X58 Smackover boards are already available for sale, in stock.
When the Asus boards come out, it will be time. :yepp:
Fire 'em up DrWho? we want the nda to be lifted on the nda lift date! :D
remember, we are removing the FSB and moving to QPI. QPI is much more complex than HyperTransport; it took a long time to design because it can manage memory coherency, and this is very complex.
So, how long is a corporate secret, but we spent quite some time on validation.
On top of this, Nehalem has a brand new PCU (Power Control Unit), power gating and Turbo, and a much more efficient Hyper-Threading.
Validating 731 million transistors, plus the transistors of TYLERSBURG, is a lot to look through ... more transistors than there is hay in a haystack ...
So, you think that looking for a needle in a haystack is difficult? We have something 1000x more complex here!
So, it takes time. A lot of my co-workers are very dedicated to this, and that does explain a part of the price of the CPU :yepp:
I admire the people doing the QA, so much to check!
For the 13th month, I don't know :-P
Well, if you can arrange a test sample, then sure, I will be glad to test it out, but if I have to spend my own money on a product I will be using every day, then it's hard to say; nothing can beat ASUS and Biostar.
Intel X38/X48 boards had HUGE problems: 50% of them died from saving BIOS settings after the first boot. When I worked at a retail shop some time ago, we stopped selling them. No PS/2 is a PITA too; I simply couldn't do anything without a recognized USB mouse/keyboard. Overclocking headroom on an overclocking board was almost nonexistent.
If we mention legacy for PC, would it be fair to draw parallels to the console world?
No matter how open the conversations become between companies, they will all have their own path of getting to an efficient answer. IMHO there should be some kind of legacy bond, but not too strong else everyone ends up making essentially the same thing, and to some extent, that would limit innovation. I like seeing the different ways Intel, AMD, nV and ATI work :0
Francois... may I ask: is there a "reason" for the QPI ceilings to be where they are? Do you expect them to improve with silicon revisions and as Intel gains experience with the architecture? A 177-220 max limits the "point" of an upgrade unless we all buy the QX.
Thanks!
Kenny
Also- is the NDA/embargo lift date really a secret? I know it and I didn't have to sign any NDA to find out..
As far as I know, I have been told that the QPI wall with the i920/i940 on the Smackover board was around 215MHz. Also, the board/CPU failed to POST over 1.8Vdimm. But we all know the BIOS is bugged and the board is an early revision, so we have a long way to go.
:up: Good post
I agree. When I was in my film class, I would have killed for one of these processors, especially since there was a limited number of computers in the class, limited time on them, and limited time until your film/project was due, and I hated sitting there waiting for stuff to get done. If the school had a few i7s, we would have been some happy students.
To the legacy discussion (if I got everything correct, because my English is too rusty):
Legacy seems to be the best way. Today we have CPUs capable of 0.4 teraflops? And right next to it sits a processor which does 1 teraflop like a charm, idling, eating power for nothing. I'd like to see a system which is fully aware of the capabilities of each part it consists of, feeding each component the tasks that run best on it.
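The idea above can be sketched in a few lines: a hypothetical capability-aware dispatcher (the device table and throughput numbers below are made up purely for illustration, not real benchmarks) that routes each kind of task to the component that runs it best:

```python
# Hypothetical relative-throughput table: how well each component
# handles each kind of work (numbers are invented for illustration).
devices = {
    "cpu": {"serial": 10.0, "parallel": 1.0},
    "gpu": {"serial": 0.5,  "parallel": 25.0},
}

def dispatch(task_kind):
    """Pick the device with the highest throughput for this kind of task."""
    return max(devices, key=lambda d: devices[d][task_kind])

print(dispatch("serial"))    # booting, email, legacy codecs
print(dispatch("parallel"))  # rasterization, transcoding
```

A real system would of course need to measure capabilities and migrate work dynamically; the sketch only shows the routing decision itself.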
To Nehalem in general:
Hoping to see that Nehalem is not slower in less-threaded games than the current Wolfdales; if it is, shame on you, Intel. :lol: And I'd like to see lower idle power consumption than an equivalent Penryn quad.
Intel's current Penryn-based quad cores are capable of around 0.1 teraflops, not the 0.4 you mentioned.
About power consumption, my guess would be that it will be lower on Nehalem, because Nehalem has been greatly improved in this respect and this is also what the PCU (Power Control Unit) is all about.
FWIW, early tests show Nehalem @ 2.93 to consume 7 watts less than the Q9770 when idling, which might make them equal "clock for clock". (Isn't the Q9770 platform extremely power hungry because of the high FSB?) It's probably difficult to judge from this little information, but the power-saving features should be improved.
http://diy.pconline.com.cn/cpu/revie...438115_10.html
Well, it was just a figure, not very intelligently guessed, I think.
7W? Not that much...
Just for your understanding: my PC is idling around 80% of the day: internet, TV, music, and so on. The other 20% I play some games or need raw raytracing power. When I upgrade, it should not raise my power bill ;)
How about waiting for the NDA lift, with all the "nice" reviews, instead of all the speculation and Chinese-site crap, and then deciding? ;)
Core i7 965 >> $1999 - 2724 ~~~ USD$1380 - US$1880
Core i7 940 >> $962 - 1225 ~~~ USD$663 - US$845
Core i7 920 >> $538 - 691 ~~~ USD$371 - US$478
seeing as there's no competitive rival anywhere to be seen...
why not charge 'em up the a$$$$$$$$
take ATI and the 4870 as an example
top-of-the-line GPU, new architecture, shrunk die, GDDR5
but because there was a competitive rival with a comparable product
they released it for $300 AUD, a very :welcome: price
but if there was no nvidia, the 4870 would have launched at twice the price :yepp:
that's capitalism for you (and that's monopolism for you too)
DISCLAIMER: all of the above is IMHO
That's right, 'cause Nvidia clearly dominated with its products over ATI for quite some time there, until the 48**'s gave 'em a good kick to the nuts. :)
Just trying to say that if AMD was nipping at the heels of Intel, Nehalems would be MUCH cheaper for sure.
I want to crunch faster. I also want a smoother experience in AutoCAD. I want my ray-tracing to be faster and better optimized for more cores. And later, I want to run games better than currently possible. I want the bottlenecks removed whenever possible, because 1-2% today could be 50+% in a year or two years or more.
Yep.
I have heard for the last 15 years that we don't need more processing power ... Some people lost 50 Deutsche Marks betting with me that nobody would ever need more than a Pentium III 500MHz ... what a monster!
Well, I was told as well that 90nm was the maximum possible ... I heard that the earth is flat too ... and the earth is the center of the universe ... the sun turns around the earth ... no?
I don't know if it is man-made or not, but we know for sure that CO2 reflects rays back onto the earth, so even if we are responsible for 0.0001%, it is cool to try to avoid it. As a mathematician, I see a close match between the CO2 level and temperature increase curves. It is not my specialty, but the numbers are matching ... It is out of context here anyway ...
Solar cycles can't be responsible ... a recent study in Europe proved that the temperature has been out of cycle since 1929...
The only disturbing science fact I was told about is the magnetic field polarity change; this could have an impact. Again, I know nothing about this, but now a lot of smart, famous scientists agree ... I recommend that you get connected to them and try to exchange information with them; maybe you've got the truth and you can convince them ... I trust the BBC weather sim, and it shows warming up ...
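For what it's worth, "the curves match" can be made precise with a correlation coefficient. A minimal sketch of that check — note that the CO2 and temperature series below are synthetic numbers invented purely to illustrate the computation, not real measurements:

```python
import statistics

# Synthetic, illustrative data only (ppm and degC).
co2  = [315, 320, 331, 345, 360, 375, 390]
temp = [13.9, 14.0, 14.1, 14.3, 14.5, 14.6, 14.8]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson(co2, temp)
print(f"r = {r:.3f}")  # values near 1.0 mean the curves track each other
```

Correlation alone does not establish causation, of course; it only quantifies how closely the two curves move together.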
I agree it's out of context here, but I am at least entitled to one quick reply -
A report titled "The First Global Revolution" (1991), published by the Council of the Club of Rome. Are the scientists the new priests now? Quote:
"In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine, and the like would fit the bill. All these dangers are caused by human intervention ... The real enemy, then, is humanity itself."
I thought it's only natural for the earth to experience long term temperature changes up and down then up and back down
the earth has undergone many ice ages so it must have experienced global warming in between those ice ages
or it would be one big ice age
Using the South Pole ice cap, we know 100% for sure that the earth never had so much CO2, ever! Since the earth is an adiabatic system, with energy getting in and out, it is a fine balance ... even a volcanic explosion the size of Yellowstone did not generate so much CO2. Something is wrong, and having the smart guys looking into it is important.
You can't ignore the scientific facts because a few scientists generated a stupid paper somewhere in a corner room ... or maybe were just not quoted properly ... Facts are facts: ice is melting all over the earth. If men are responsible for it, it is good to try to solve it; if men are not responsible, we've got to understand what's going on. One thing is sure: you can't wait until you're sure you'll hit the wall to start braking, otherwise you are dead!
So, I don't know if we are responsible, and I don't want to hit the wall, so my commute distance is 3 miles. :up:
Global warming is man-affected no matter how you put it. It's completely out of its natural cycle. And that's the reason that next year ships will start to sail over the North Pole from Europe to China/Korea/Japan and cut about 25 days of sailing off.
Ice cores drilled from hundreds of thousands of years back can tell you that. Plus, it's not really complicated for most people to see it's a runaway greenhouse situation.
Or maybe you wish us to end up like Venus?
First, the worst predictions had the North Pole being ice-free by 2070. Then 2050. Now we might have it in 2010.
This is the North Pole, September 2008:
http://i2-images.tv2.dk/s/74/1046337...0efe7e2c8.jpeg
Compared to earlier:
http://upload.wikimedia.org/wikipedi...ic_Sea_Ice.jpg
It's a bit like playing Russian roulette to do nothing.
more people
more cores
more farts more co2
more deforestation
more power usage
more burning
more.
oh, but less ice.
:rolleyes:
OK, there have been global fluctuations in average temperatures forever. HOWEVER, over the last century the average temperature has increased by 0.5C. This is an unprecedentedly rapid change and is cause for concern.
Global carbon trading schemes are the way to force business to reduce their carbon dioxide footprint. The funny thing is that I'm writing an essay on it right now.
plus less ice means even hotter due to less reflected light...but i guess that is obvious to everyone...
'theyre' too busy trading 'their' way into a recession, but yes, that is the intent of trying to encourage clean and green, I suppose. Quote:
Global carbon trading schemes is the way to force business to reduce their carbon dioxide footprint. The funny thing is that I'm writing an essay on it right now.
the bean counters wont have anything to do unless a number is attached to 'it'
BTW, there's a rumour that the 2P server is late because of validation, so it's going to trickle into the market in Q1. Someone posted it on RWT, but the source is probably theinq or another FUD machine; it could be true regardless. I think this would make sense, because there's no launch date for Nehalem EP yet. If true, then that's pretty bad for Intel, and AMD is quite lucky.
Screw ice; there were periods in earth's history when the poles were completely ice-free. In fact, we are still living near the edge of an ending ice age.
Sure, the human race accelerated the melting of the ice caps, but suggesting we will turn earth into a Venus is pure BS. :ROTF:
Venus has what, 96% CO2 in the atmosphere? Even if you burn every organic material, including humans and plants, plus vaporizing the oceans, you can never reach this level of concentration in earth's atmosphere. ;)
Even if they were to bring the CO2 levels down, they won't bring the methane levels down. Hell, my farts account for more melting than anything else! :D
Global warming. :ROTF: It's almost as silly as that Gore character. If you believe what a politician tells you, they have you hook, line and sinker. The scientists have said differently. It is nothing more than a normal temperature swing.
You cannot go screwing with Mother Nature. Let her do her job, and we need to keep our noses out of it. That's what science says.
I highly doubt the scientists at NASA and other centers are "deluding themselves". I also highly doubt the Columbia supercomputer is "deluding itself".
Regardless of Nobel Prizes (which have become meaningless now), they cannot argue with proven scientific fact.
OH god, can we stop with global warming and get back to i7?
Like for example, what d'you think of that "confirmed" release date Francois? =D
Yep, I can confirm, the release date is the day of the release of Core i7 :rofl: :rofl: :rofl:..
I forgot ... Core i7 will be able to run DOS 3.2, even DR-DOS ... lol, Legacy!
Try to run Moto Racer on your graphics card ... I just tried ... it does not work! No legacy there!
If you go there http://www.old-pc-games.com/ , you can pick one of the games and try! As soon as DirectX 3, 5 or 6 is involved, you are screwed!
What I call a royal mess!
Talking about legacy: sure, you would like legacy when the other players must pay for the use of intellectual property rights.
Why don't we seek legacy in free open code? Of course, there are no greenies in there, right?
I would love some legacy, some free open-source legacy, not the monopoly wannabe that Intel is trying to become.
There are more companies industrially producing x86 than there are GPU vendors ... what are you talking about? :shocked:
I am not interested in talking about licenses. I don't know anything about it, and I don't want to know either ...
What I know is that the world of 3D games is full of whoop-ass cans. Try to install any game that you bought 5 years ago ... it does not work. In my case, there are some games I can't stop playing: Mario 64 (N64), and I like Duke Nukem 3D too. I have to keep an old GPU with an old Windows to run them; this is just so frustrating! I like playing with all of those vintage games.
Just imagine if tomorrow we told you that you can't run Windows XP on the next CPU ... wait, wait ... they already did it for DX10!
This is just a royal mess.
[edit]
I forgot: creating legacy is more difficult than it seems. You have to plan for the future and design it to scale into the future. Every 2 or 3 years, ATi and NV had to change the way they design, and I understand why they had to do so; in the meantime, they did not make the effort to plan the next steps. They went for immediate competition, without thinking further. If you look at the Moto 68xxx world against the x86, the scalability of the x86 architecture got them in the long run.
PCI, PCIe, AGP and USB specs after a while became the main connecting points; all of those are open specs ... What I am brainstorming about here is legacy. Legacy always won, because it generates free consumers, no whoop-ass can at the corner of the store.
[/edit]
:yepp:
Last remark: the x86 providers never agreed on price ... some other guys did ;-) and agreed to settle about it ... (Very personal comment!!!)
The end of the NDA has been moved up 2 weeks (2nd of November instead of 17th of November); has the release date too?
This is a different time, when the gains to be had specifically in heavy workload arenas with GPGPU and Larrabee could very well make the CPU's moderate micro-arch improvements irrelevant. They're also up against limits of physics (thermal wall), as well as thresholds of perception in some cases like every day usage. I believe the average PC doesn't have enough workload to justify a new CPU like this (in light of other technologies which can massively speed up parallel applications).
Why spend top dollar on a new processor to process video at 10-40% improved speeds when you can already do it at 400%-1,000%?
Granted, if they can stick more cores on there, and software developers can figure out how to thread 8 and 16 ways, you'll get comparable speed-ups. But such is the challenge before them.
Wait, wait ... there is a lot of marketing around GPGPU right now, but your GPGPU will never boot your OS, will never send email, or do spell checking.
Your GPGPU can't encode DivX 3.0 (or 99% of the video codecs), nor MP3 (or 99% of the audio formats) or many other legacy formats, and thinking that all of those tasks will be done on the GPGPU is thinking that the world will be recompiled: this is naive.
You may get some marketing hype going, but the majority of the software will stay on the processor. Why? LEGACY!
People have told me many times that the Pentium III was more than what they need ... I have heard this for 15 years.
Another question for you, DrWho??
Socket 1366 comes in November, we all know; in the 2nd half of '09, socket 1156 will arrive to replace 1366 in the M1 and P1 segments, if I remember that roadmap correctly.
My concern is: what happens to 1366 after 1156 is launched? Only XE? Do we have to buy Xeon DP for our Bloomfield rigs?
Tell us please, if you know and are allowed to. ;)
Thanks
See, your perspective is so legacy-centric that you've missed my point. My point is: who *cares* what boots your PC? Sure, it makes the CPU an essential component, but not a special/interesting component that can sell for large margins. There is a difference between "essential" and "special". For example, every system must have RAM slots, but who gives a crap where the motherboard makers buy them from. Catch my drift? Essential components are not necessarily *important* components, in terms of what is doing the heavy lifting and saving you a lot of time.
If you're not doing the heavy lifting on a system, you become a commodity. Just because you end up in every system doesn't imply that you'll be able to sell your technology for large margins.. look at onboard sound and networking chips.
What about GPGPU is stopping someone from making an MP3 or divx encoder? Do you mean "it's impossible", or "it's not out there right now"? If it's the latter, your point is fairly weak.
There's no doubt the majority of software will stay on the processor. That's the hole in your argument though.. the majority of software doesn't need 8 cores/threads either. And who cares if the majority stays on the processor, if the *interesting* work is done elsewhere?
I don't know if you remember, but with the Pentium II, if you wanted to play DVDs, you needed a special MPEG2 card ... many people were saying that you only needed a low-end Pentium II and this card, and you had a good PC.
Now, DVD playback takes 5% of a Core 2 Quad ... The generic processor always catches up with the custom pipeline; it is just a matter of time, and you keep the legacy :)
See my point now?
As for the subject of encoding and such, it's not impossible to make MP3 or DivX encoders for each GPGPU API available out there. It just has to be done for each and every variety of API. That API also has to be rewritten and expanded to cover each generation of hardware that falls under its scope. Right now, just about every time the API gets updated to add new features, on some level programs coded against older versions of the API have to be rewritten to work with the new API, as the update broke some functionality in order to make room for a new way.
Everything is incompatible without someone going in and making that compatibility. CUDA and other APIs do grant huge speedups to certain types of workloads, but until there's some standard legacy support, updates and new hardware end up breaking functionality for something. It's still progress, but it requires a lot more work to shoehorn programs onto ever-changing hardware that, at the base level, looks incredibly dissimilar to its previous incarnations.
I think what some people miss in this is that a computer is a total of the systems involved.
Yes, today all the excitement is on GPU processing but that is just a part of the whole process.
I look at it all and see that one step progresses then another catches up or even surpasses and then the others follow.
You have to look at it from that perspective to understand the cause and effect that drives the development of these systems.
I see your point, but that was long before CPUs had hit the thermal wall, when each subsequent CPU generation really was night and day from its predecessor. I feel like once you hit the thermal wall, you must rely on micro-arch improvements, and even integrating the IMC gets you a one-time speed-up. Where do you go from there? It seems like all you have is more cores to throw at the problem, leaving developers to fend for themselves WRT utilizing them via threading and parallelized workloads.
I don't doubt they WILL get faster; I'm just thinking maybe only trivially faster for current-day workloads, and not faster by as much as the P2->P3->P4->Core 2 transitions were.
If you can't do encoding/decoding/transcoding nearly as fast as another technology *today*, then what relevant workload are you left with to show your new processor's performance benefits in the average system? Opening browsers in 2ms instead of 10ms? My point is you end up with gains where the % gain is technically huge, but where the absolute gains are below a perceptible threshold in applications where people don't really care/notice.
Updates to APIs don't break functionality... that's the whole point of an API: to rely on a function being there and completing the task you expect it to. What you're saying is like me saying that new CPU generations break legacy programs. It doesn't happen. Think of an instruction set as a hardware version of an API.
On a GPU, inter-generationally, you end up doing the architecture-specific part of the API in a manner that is not exposed to the API user, so they don't worry about the architecture differences.
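The "instruction set as a hardware API" idea can be sketched in a few lines: a hypothetical dispatch table (all names below are invented for illustration) where the baseline path is never removed and newer hardware only *adds* optional fast paths, so old callers keep working across generations:

```python
# Hypothetical ISA-style contract: each generation may add fast paths,
# but the baseline entry point is guaranteed to stay forever.
def sum_baseline(values):
    """Always available, on every 'generation' of the hardware."""
    return sum(values)

def sum_vectorized(values):
    """Stand-in for a newer, optional fast path (e.g. SIMD)."""
    return sum(values)  # same result, hypothetically faster

# Feature flags a program would query at run time, like CPUID bits.
features = {"vectorized": True}

def sum_dispatch(values):
    """Old callers never break: if the fast path is absent, fall back."""
    if features.get("vectorized"):
        return sum_vectorized(values)
    return sum_baseline(values)

print(sum_dispatch([1, 2, 3, 4]))
```

The contract is that both paths return identical results; only the speed differs, which is why a binary written against the baseline keeps running on every later generation.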
I agree that's what average joe does. So in terms of absolutes, sure, maybe I exaggerated. But between my 2.33 Core 2 Duo and Core 2 Quad at 3.0GHz that I use, I don't see much/any gain in terms of sites like this. And keep in mind this is supposed to be the difference between $80 and $550... this thing that isn't perceptible to many users out there.
By the way, now I can get a Core 2 E8400 at 3.0GHz for $150. That kind of product used to cost in the $300-$450 range. Think about what that means.
Take 3DMark up to 05 and try to run it on a 4.0GHz Core 2 Quad with fast memory (OC too); you are in for a surprise with a G45... if you choose the software rendering driver :)
Rasterization is very memory limited, so when you can, try it on Core i7 again ...
But the reality is the current CPUs do show increases over previous generations and also run cooler.
I see that myself with the Harpertowns vs the previous Clovertowns.
Clovers (on good air) max in the 3150 range, while the Harpers max close to the 4000 range and, with identical cooling, run 15C less.
Then they also produce close to 40% more work in a given timeframe.
Cooler, and more work done in the same time.
That is the advantage of the newer CPUs; then add in the lower current draw.
My Clovers at 100% load at 3150 draw 420W; the Harpers at 3758 draw 320W at 100% load.
There are your "absolute gains" in real numbers..
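Those figures can be turned into a rough perf-per-watt number. A quick back-of-the-envelope sketch, using clock speed at full load as a crude proxy for work done (the real gain is larger if, as stated above, the newer chips also do more work per clock):

```python
# Figures quoted in the post above: MHz at 100% load, watts drawn.
clover = {"mhz": 3150, "watts": 420}   # Clovertown
harper = {"mhz": 3758, "watts": 320}   # Harpertown

def mhz_per_watt(chip):
    """Crude efficiency metric: clock speed per watt of draw."""
    return chip["mhz"] / chip["watts"]

gain = mhz_per_watt(harper) / mhz_per_watt(clover)
print(f"Harpertown delivers roughly {gain:.2f}x the MHz per watt")
```

So even by this conservative proxy, the newer generation does over half again as much work per watt.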
Well, one year ago there weren't cheap 45nm quads around... :p:
Why do you insist that the Ci7 has no use? The people who are buying it know what they're buying; everyone else buys a cheap dual-core anyway.
Heck, on my main forum the regulars hardly recommend buying quads, 'cause most guys don't need them. Only the guys who want to have fun and strive for benching have quad-cores. :yepp:
You're missing the point entirely:
1) more of *what* work? Speed it up twice as fast if you want. What do you expect to do with it?
2) Sure, you can go all enthusiast on the problem, but this isn't what 99% of the market looks at.
Don't get me wrong. I'm not saying there's no use, or dictating that no one should buy them... but I like to feel like I've paid an extra amount for some substantial, reliable, across-the-board gain, and in this case, I don't know that I would see that. I have no doubts I'd see *a* gain, I'm just a bit skeptical right now in terms of the usefulness here. We'll see.
My point is that for the work I do the newer processors are a vast improvement.
My work is in pure computational power, numbers crunching in Aids and cancer research and that is ALL my machines do with the exception of my "daily driver" which is a Q6600 P35 based machine.
Let's look at this in perspective:
In 2005 just 3 years ago I built a "state of the art" machine.
A dual xeon 3600/2mb/800 Irwindale based machine.
Best DP Intel board and cpu's that money could buy back then.
My current Harpertown machine does SIX TIMES the work that the Irwindale machine did, while drawing the same power and actually running cooler.
That to me is progress.
While the GeForce 8 series (http://en.wikipedia.org/wiki/GeForce_8_Series) used up to 185 watts at peak, the new GTX 280 uses 178 watts at peak (http://en.wikipedia.org/wiki/GeForce_200_Series ) ... They are trying to find more ways to increase the power, but it does crack the die if you go higher! :nono:
It is, reread, the other machines ONLY crunch, this Quad I do use for crunching but I do post here from it and get my email also..:ROTF:
Sorry, you're misreading me.
I'm talking CPU only, not GPU.
I have no expertise or knowledge in the GPU area as what I do doesn't require high end graphics.
I actually run an old 1997 4mb STB/Virge PCI vid card in one of the Clovertowns.
The newest vid card I own is a 2005 Leadtek 7800GTX card.
Power consumption goes up, then with a newer generation TDP goes down, but you can be sure at some point a new generation will consume more than the older generation.
Look at the power consumption of a GF5 compared to a GF8, or the power consumption between a Pentium 3 and a C2Q.
I love how far this went off of topic.
does anyone know the Actual pricing of these new chips and how fast they are?
So it basically comes down to this: ?
Sockets > Chipsets > CPU > Memory
S1160 > P55 > Nehalem > dualchannel DDR3
S1366 > X58 > Nehalem > triplechannel DDR3
So pardon me if this is common knowledge, but what specific tasks have been reserved for the NB at this point? PCI interface, I/O? I mean, insofar as I know there's no real reason for expecting a different chipset to OC any better or worse than another? I mean, what basis do we have for any ensuing rampant speculation on the performance of P55?
These CPUs are still under NDA, and so is the NDA lift date. Anything else you have heard from other sites is meaningless babble. No date has been confirmed yet. When it is, I can assure you that a lot more will be revealed. I've seen topic after topic on these, and we basically don't know very much at all.
Most of the stuff that has been posted has proven to be completely false. Most of it has come from places like Fud and the Inq. It should never have been posted here. All it has done is create confusion.
Yes, the CPUs are under NDA, as is the NDA lift date, but I don't think the general info on the CPUs is all FUD.
If you sift through it all you come up with these assumptions:
1) Pretty close to the same as a top Yorkie in single-threaded games
2) Massive increase in multithreaded apps
3) Massive increase in bandwidth with the tri-channel memory
That's all based on what I've read and not what I've seen, and to make that clear, this old fool forgot to get a SATA DVDRW to load the OS with, so it's not based on anything I've seen with my own eyes yet. :rofl:
All my systems except this Nehalem are server boards and they still carry IDE controllers onboard. The Nehalem board doesn't..
Yeah, I said most. That doesn't include the ones that have limited info with certain screens of tests. DrWho? has also helped with a lot of the FUD that's been posted.
That stuff about the memory really had me worried and it was a non-issue. I mean, they put the fear of God in people and made it sound like something had gone terribly wrong with the boards or the memory subsystem. It wasn't anything but a standard for the new CPU.
I'm really on the warpath right now over those two sites. It has nothing to do with the members here. Heck, people here are trying to straighten stuff like that out.
I questioned the title right at the beginning. It said confirmed, and I looked and looked and couldn't find where it was confirmed. That's adding to some of this confusion. People like me really need this CPU, but if I can't get it by year's end I absolutely must start building a rig. That's why I'm really staying on top of this info, because I do prefer going with an i7 and need the date, but I might have to end up building a rig with a 775 Q9550 or something.
These false alarms are killing me slowly. heh ;)
X58 has PCIe, QPI, and DMI onboard. I believe it also does the clock splitting for the CPU. It requires a SB paired with it (ICH10) to do all the I/O.
P55 is pretty much an ICH10 with an extra lane from the CPU to carry display data from the Havendale/Auburndale CPUs with onboard GPU, and may also do the clock splitting. It really has nothing on it that functions like a NB other than the possible clock splitting.
I understand how you feel.
Look at what you plan to use this for and then weigh out all the options.
For gaming I'd do an E8500 or E8600; for general use a Q9550 makes for a hell of a system; if bandwidth is an issue then look at the i7. But it's not all that simple, and only you can decide based on what you plan to use the machine for.
Good luck!:up:
I can really use this power. I mean *really* use it. All my machines fold 24/7, and that's automatic. The only time they are paused is when I really need resources, but with something like this I'd only need to pause two of the clients, as two cores would be more than enough.
I run some massively powerful 3D simulation programs that run flow analysis on rocket motors. I also have some CADD and simulation programs that predict flight paths and take an enormous amount of power that I can't run on my own machine yet. That's why I really need this machine this winter to get started with some real work. I am semi-retired, but I wanted to get some work done on one of my own projects that's been held up because I just don't have the machine to run that program. I have gone to a friend's house to run it a few times, but am not gonna bother him anymore with that. I'm mainly in it for the hobby now.
I also want to do some flight simming with FSX and there isn't enough power for that program. I don't even think the i7 will allow one to max that thing out yet. Maybe not even two i7s.
The rest are "wants" not needs, but I do want to do some benching again. It's been a long time. I am getting into watercooling and really want the power on top of the needs I had above.
I can roll back to a Q9550 if I have to, but would rather not. I know when it comes out I will kick myself, because once it's bought and built, there won't be another for at least 3 years if not more.
I need the Xtreme, and want the Xtreme, based on what we do know so far. I know it's only gonna get better when we see what these things can do full-on Xtreme. :)
See DrWho?, that's the whole problem... general purpose processors (GPPs) are just too slow to adapt. Who cares about DVD playback *now*? It's 2008, let's do some hi-def Blu-ray decoding!! Can your GPP handle that without a GFX card to help? Must we wait for a new series of instructions to get added to an already huge instruction set?
The solution is simple... tell your superiors to start releasing some chips with some efficient reconfigurable logic!! 4 cores + a good chunk of reconfigurable logic that can be dynamically configured on the fly using software for *ANY* task. How long will this take? I'm waiting :D Let us make our own instructions :yepp: