https://i.imgur.com/vGLsfNy.jpg
competition FTW!!!! :eek: :woot:
The memory-controller configuration raises questions, but I'll withhold judgement until we have proper benchmarks and reviews.
This core thing is officially out of control.
Wow. AMD managed to surprise me twice in two years: first that they're making a cheap, fast 8-core, and now this. These parts are obviously going to have some drawbacks because half of the dies won't be directly connected to memory, but still...
And this heavy metal thing.
Right on dudes, right on ...
I agree to a point. From a raw compute perspective this is true, but I think that scheduling and being able to split the micro-ops over several cores is really where growth can happen. Just think of how few applications support more than 4 cores right now. If the CPU scheduler itself could coordinate the cache and parallelize task scheduling on its own, the speedup we'd see across the board would be easily polynomial.
Hitting a dead-end.
The number of applications that truly benefit from 20+ cores can basically... erm... be counted on ten fingers...
For video editing, for example, a highly clocked 6c12t is still king.
We want something like a 16c32t 4.8GHz daily driver (if not 5-5.2GHz with at least 6 cores active to speed up the post-processing, while the 16c @ 4.8GHz takes on the encoding part of the process).
In theory yes, but in real life developing applications that are that parallel is mostly not possible.
Parallel development is a real pain, especially when you have to share memory between threads. Finding situations that will really benefit from parallelism is also very hard. The overhead of creating and destroying threads is expensive, and you can easily end up with your application running slower in parallel than on a single thread.
And my final thought is that more cores are good for servers, but for desktops and workstations there are very few examples where you can benefit from a 32-core, 64-thread CPU.
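To make that overhead point concrete, here's a toy Python sketch (my own example, nothing from a real project): spawning a fresh thread for each trivial task loses badly to just doing the work on one thread, and even a reused pool only helps so much when the tasks are tiny.

Code:
import time
import threading
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    return x * x  # trivially cheap work

N = 10_000

# Serial baseline: just do the work.
start = time.perf_counter()
for i in range(N):
    tiny_task(i)
serial = time.perf_counter() - start

# One thread per task: pays thread creation/teardown cost N times.
start = time.perf_counter()
for i in range(N):
    t = threading.Thread(target=tiny_task, args=(i,))
    t.start()
    t.join()
per_task = time.perf_counter() - start

# A reused pool amortizes startup, but dispatch still dwarfs this much work.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(tiny_task, range(N)))
pooled = time.perf_counter() - start

print(f"serial {serial:.3f}s | thread-per-task {per_task:.3f}s | pool {pooled:.3f}s")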
For sure, no one should buy this for a gaming machine. My point was simply that in general, scheduling is really where large scale improvements could be had. If the CPU's internal micro-op scheduler could parallelize the work over multiple cores that share low level cache, I bet the improvements would be at minimum polynomial. It would be in effect turning standard CPUs into quantum machines.
In the meantime we'll just have to keep relying on the software people. It actually surprises me how few programmers I know really understand multithreading. It's much simpler than people realize if you have a good kernel for task management, but a complete pain to translate old code into a real-time approach.
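For what I mean by a kernel for task management, something like this bare-bones Python sketch (my own toy version, assuming nothing fancier than a shared queue) is the whole core of it: a queue of callables and a fixed set of workers.

Code:
import threading
import queue

# A minimal task "kernel": one shared queue plus a fixed set of workers.
tasks = queue.Queue()

def worker():
    while True:
        job = tasks.get()
        if job is None:      # sentinel: shut this worker down
            tasks.task_done()
            break
        job()                # run the task
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i in range(8):           # any callable is a task
    tasks.put(lambda i=i: print(f"task {i} done"))

tasks.join()                 # block until every submitted task has finished
for _ in workers:
    tasks.put(None)          # one sentinel per worker
for w in workers:
    w.join()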
I see people always saying the same thing... no applications, or just a few... while we in the chess world have waited for years to get more cores!
When the first dual cores came out we already had chess engines that could handle 2048 cores... and if I remember right that was in 2005, with the Athlon 64 X2 3800+.
So that's more than 10 years ago... and I'm still looking for more cores ;)
Check this list: http://www.xtremesystems.org/forums/...test-AMD-Intel
The chess world is moving fast... now there's finally an engine that uses your GPU's CUDA cores, using a NN that started from zero chess knowledge! Very curious when they will pass the strength of engines on CPU!
That isn't the panacea that you might think it is. Instructions are often sequence dependent. The ones that aren't already do what you're talking about. The combination of superscalar design and out of order execution describe what you want, but there is no need for nor would there be benefit from shifting those instructions to other cores to do it.
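A toy illustration of that sequence dependence (my own example, not anyone's real workload): a running value where each step needs the previous result can't be split across cores, while element-wise work can, and the independent kind is exactly what superscalar/out-of-order hardware already overlaps on its own.

Code:
values = list(range(1_000_000))

# Sequence-dependent: every iteration needs the previous result, so
# there is nothing to hand off to another core (or execution port).
total = 0.0
for v in values:
    total = total * 0.5 + v

# Independent: each element can be computed without waiting on any
# other, which hardware (and threads) can already exploit.
doubled = [v * 2.0 for v in values]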
I don't think I agree with this sentiment. In general everyday computing, we long ago reached a point where nearly any processor completes the most common and frequent tasks effectively instantly. The things that are really heavy and fall outside the scope of standard tasks fortunately tend to be very parallel: compression, compilation, encoding, encryption, and others.
Games have historically been one of the best examples of a heavy task where that isn't the case, but that isn't because it's not possible. It's a lack of effort by most developers. There are examples of games built on extensive task queue architectures which thread out quite well.
Most lengthy tasks are that way not because the algorithms themselves are expensive (by modern standards--such was not the case in the 80s and 90s) but because the algorithms are applied to a very large data set. Most of the time, it's pretty trivial to break the data set into chunks and work on as many of them in parallel as you have execution resources for.
Source: I am a programmer. I've written a lot of parallel code over decades.
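Here's a minimal sketch of that chunking pattern in Python (my own toy example; the sqrt work is just a stand-in for whatever the real per-element algorithm is), using a process pool so the chunks actually land on separate cores:

Code:
import math
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for the real per-element work.
    return sum(math.sqrt(x) for x in chunk)

def parallel_total(data, workers=8):
    # Break the big data set into one chunk per worker and fan them out.
    size = math.ceil(len(data) / workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":   # guard required for process pools on spawn platforms
    print(parallel_total(range(2_000_000)))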
Agreed: code prototyped for a test ends up in production most of the time.
Why bother speeding it up or coding it in a better way if it already works?
Software engineers are part of the problem in computing: the more resources they have, the lazier they get. Just look at Android...
Hardware has improved a lot since the 90s. Software didn't follow fast enough.
In the not-so-distant past I had some very large models to simulate. The simulation time was around a week, worst case two weeks, on a huge server with tons of cores and memory. If I made a minor change, I had to redo the whole system simulation. The simulation was multiprocess, one simulation per core, and with a lot of tests and runs to do it was almost multi-core in a way.
I was lucky that I know Matlab very well. I simplified the model a bit and moved the computation onto 3D matrices. It was the same compute, but the 3D matrices let me apply the same operation to all the data in a parallel way, and I multi-threaded the computation too, because that's easy in Matlab, and fun. I got simulation results within 1% of the full model with parasitics, good enough for my needs. The run time dropped to around 120 seconds on an old i5, and the memory required went from 12-20 GB to 100-500 MB. A huge improvement.
It took me a week to write and improve the program. After that I could do hundreds of full simulations in a day.
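The same trick in NumPy terms, since not everyone has Matlab (my own toy sketch; the update rule is made up, just to show the shape of the change): replace the element-by-element loops with one operation over the whole 3D matrix.

Code:
import numpy as np

# A 3D stack of model states: 20 parameter sets x 200 time steps x 50 nodes.
data = np.random.rand(20, 200, 50)

# Loop version: the update applied element by element.
def step_loop(a):
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            for k in range(a.shape[2]):
                out[i, j, k] = a[i, j, k] * 0.98 + 0.01
    return out

# Vectorized version: the same update on the whole matrix at once,
# dispatched to optimized (internally parallel) kernels.
def step_vectorized(a):
    return a * 0.98 + 0.01

assert np.allclose(step_loop(data), step_vectorized(data))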
Today, believe it or not, we still haven't found innovative multi-threaded design patterns for most scenarios. So it's not laziness, and it's not that developers don't care; what we know is limiting us.
That's because chess is a very good example/game for multi-threading.
By nature, the computer and a normal person (a very good chess player, that is) think many moves ahead, even working through different lines and outcomes at the same time.
A high-speed, many-core system will evaluate a thousand lines in the background while you think about your current move.
A multiplayer FPS, on the other hand, can't do that, because of the near-infinite variables it has (from movements, to weapon changes, to a third party entering a fight for which it had calculated the most probable outcomes, etc.).
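That root-level independence is easy to show in a toy Python sketch (my own; the evaluation function is a made-up stand-in for a real engine's search): each candidate move can be scored on its own core with zero coordination between them.

Code:
from concurrent.futures import ProcessPoolExecutor

def evaluate_move(move):
    # Made-up stand-in for a real engine's search of one candidate move.
    score = sum(ord(c) for c in move)
    for i in range(200_000):            # fake "thinking" work
        score = (score * 31 + i) % 1_000_003
    return move, score

CANDIDATE_MOVES = ["e4", "d4", "c4", "Nf3", "g3", "b3", "f4", "Nc3"]

if __name__ == "__main__":
    # Root-level parallelism: every candidate move scored independently.
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(evaluate_move, CANDIDATE_MOVES))
    print("best move:", max(scored, key=lambda ms: ms[1])[0])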
You are totally right. I am a computer engineer with a software background, a partner in a company that develops mission-critical software solutions for NATO, and the head of R&D, so I don't worry about costs: I won't reuse any of my older libraries if they weren't built to current architectural requirements. I've been working on parallelism since 2007, when Microsoft announced its AMP library. First of all, multi-threaded applications and parallelism are not the same thing: parallelism is a subset of multi-threading, but in practice they are quite different. A 32-core, 64-thread CPU means parallelism, not just multi-threading, and that is what we have today. We are waiting for brilliant people to show us new ways to use parallelism, but I believe we have to start with hardware improvements.
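The distinction in a nutshell, as a toy Python sketch (my own example): threads buy you concurrency, i.e. overlapping waits, while true parallelism means the same computation genuinely running on many cores at once, which is what a 32c/64t part is for.

Code:
import threading
import time
from multiprocessing import Pool

def wait_on_io():
    time.sleep(0.5)          # stand-in for a network or disk call

def crunch(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Multi-threading for concurrency: eight waits overlap (~0.5s, not ~4s).
    threads = [threading.Thread(target=wait_on_io) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallelism: the same CPU-bound computation spread across real cores.
    with Pool() as pool:
        results = pool.map(crunch, [2_000_000] * 8)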
I disagree. I moved from hardware to an embedded role myself, and I think there's a lot more opportunity than people realize. Sure, parallelizing over 32 cores is complicated, but there's little preventing a lot of embedded control or vision systems from being extended over 4 or 8 cores, aside from the cost of redoing the architectures.
Each Chrome tab can now get its own core.
And 2GB of memory.
Cores are great for cell phones, since you can pair an energy-efficient basic core with a powerful one, and if you have an OS that does multithreading, like Android, you can put one active app on each core for better battery life. The same is true for battery life in newer laptops with real 8th-gen Intel parts.
I don't get 16 or 32 cores for a desktop, maybe for a special-use workstation, but if turbo and scheduling work right, more cores are better most of the time. I really like six cores so I can run a Plex/file server on my gaming desktop.
I want a 32 core box, because nerd reasons.