Quote:
Originally Posted by GspotProductions
(Post 21523717)
that's a graphics card not a processor :2 cents:
Not exactly the same but pretty close for the applications they are talking about (video encoding, scientific computing). It's really the power efficiency that is the breakthrough.
The use cases they listed are parallel algorithms, and GPUs are already optimized for those, so effectively it's the same difference: video encode/decode, computer vision, machine learning, scientific computing. From the little information that's there, these sound more like generic cores, but I'm not sure what use there is for that many cores in non-parallel applications.
Unless each core has its own cache / memory, I can't see it behaving much differently. What's the point of having 1000 cores if they all have to share the same memory bus? You can execute 100 instructions on a single core in the time it takes to make a single memory fetch, and a cache line on an i7 is still only 64 bytes. So to fully take advantage of this chip you would need an application that performs on the order of 10,000 CPU instructions per 64 bytes fetched. Even the most intensive computation algorithms don't need that much processing, so the cores are going to be sitting idle waiting on memory 99.99% of the time.
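A rough way to see this is to compare how fast 1000 cores can chew through instructions with how fast one shared bus can feed them. The numbers below (clock rate, bus bandwidth) are just illustrative guesses, not specs for this chip:

```python
# Back-of-envelope arithmetic-intensity check (assumed numbers, not chip specs):
# how many instructions per byte each core must execute so that 1000 cores
# sharing one memory bus don't stall waiting for data.

cores = 1000                  # assumed core count
instr_per_sec_per_core = 1e9  # assumed: ~1 GHz, 1 instruction per cycle per core
bus_bytes_per_sec = 10e9      # assumed shared memory bandwidth: 10 GB/s
cache_line = 64               # bytes per fetch (typical x86 cache line)

total_instr_per_sec = cores * instr_per_sec_per_core
# Required arithmetic intensity so compute demand matches what the bus can supply
instr_per_byte = total_instr_per_sec / bus_bytes_per_sec
instr_per_line = instr_per_byte * cache_line

print(f"needed: {instr_per_byte:.0f} instructions/byte "
      f"({instr_per_line:,.0f} per 64-byte cache line)")
# With these guessed numbers: 100 instructions/byte, i.e. ~6,400 per cache line,
# the same order of magnitude as the 10,000 figure above.
```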
Graphics cards are designed for parallel algorithms: every core gets its own piece of data, but all the cores have to execute the same instruction at the same time (SIMD). This chip sounds like it doesn't have that limitation, and that's the key distinction. Graphics cards are not ideal for applications that involve branching, like database indexes (B-trees), but they're perfect for media processing or scientific computing where every core runs the same instruction on different data.
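To make the branching point concrete, here's a toy comparison (hypothetical data and functions, with a plain binary search tree standing in for a B-tree index): an elementwise operation that maps naturally onto lockstep SIMD cores, versus a tree search whose path depends on the data, so neighbouring lanes would want to run different instructions:

```python
# Toy illustration (hypothetical data): SIMD-friendly vs branch-heavy work.

def scale_pixels(pixels, gain):
    # Every element gets the exact same instruction stream -- ideal for
    # lockstep GPU-style execution.
    return [min(255, int(p * gain)) for p in pixels]

def tree_search(node, key):
    # The path through the tree depends on the key, so two searches running
    # side by side quickly diverge onto different instructions -- a lockstep
    # SIMD group would serialize, independent cores would not.
    while node is not None:
        if key == node["key"]:
            return node["value"]
        node = node["left"] if key < node["key"] else node["right"]
    return None

# Example usage with made-up data
print(scale_pixels([10, 200, 130], 1.5))
tree = {"key": 5, "value": "a",
        "left":  {"key": 2, "value": "b", "left": None, "right": None},
        "right": {"key": 9, "value": "c", "left": None, "right": None}}
print(tree_search(tree, 9))
```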
I'm trying to think of non-parallel (non-SIMD) algorithms that could take advantage of this. Other than databases, I think any kind of graph or dataflow processing (Apache Spark, Beam, Apex, Flink, Google's Pregel architecture) could really benefit from this chip.
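As a rough sketch of why that model fits lots of independent cores: in a Pregel-style computation each vertex runs its own little program per superstep, branching however it likes, and only exchanges messages between supersteps. This is a minimal single-machine toy, not any framework's actual API, and each vertex iteration is the kind of independent work you could park on its own core:

```python
# Minimal Pregel-style sketch (hypothetical, not a real framework API):
# every vertex repeatedly runs its own compute step -- independent, branchy
# work that suits many general-purpose cores rather than SIMD lanes.

graph = {  # assumed toy graph: vertex -> neighbours
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"],
}
value = {v: v for v in graph}   # propagate the max label (connected-components style)
active = set(graph)             # superstep 0: every vertex is active

while active:
    # Each active vertex broadcasts its current value to its neighbours.
    inbox = {v: [] for v in graph}
    for v in active:
        for n in graph[v]:
            inbox[n].append(value[v])
    # Each vertex's compute step is independent -> one vertex per core.
    active = set()
    for v in graph:
        if inbox[v] and max(inbox[v]) > value[v]:
            value[v] = max(inbox[v])
            active.add(v)

print(value)  # every vertex converges to the max label in its component
```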
It would be ideal for any kind of Complex Event Processing (CEP) system: real-time analytics over massive streaming datasets like a stock market feed, floods of sensor data, or real-time website analytics. I can definitely see the defense department using it for threat analysis over massive incoming streams of camera footage, phone conversations, and other sensor data.
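As a toy example of the kind of per-key CEP work that parallelizes across independent cores (made-up feed, symbols, and thresholds, not any real CEP engine's API):

```python
# Toy CEP-style sketch (hypothetical feed and thresholds): flag a price moving
# more than 5% within a 10-tick sliding window, independently per symbol --
# each symbol's stream could live on its own core.

from collections import defaultdict, deque

WINDOW = 10        # assumed window size in ticks
THRESHOLD = 0.05   # assumed alert threshold: 5% move within the window

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_tick(symbol, price):
    """Process one event; return an alert string or None."""
    w = windows[symbol]
    w.append(price)
    if len(w) == WINDOW and (max(w) - min(w)) / min(w) > THRESHOLD:
        return f"ALERT {symbol}: {min(w)} -> {max(w)} within {WINDOW} ticks"
    return None

# Example usage with a fake feed
feed = [("ACME", 100 + i * 0.7) for i in range(12)] + [("XYZ", 50.0)] * 12
for sym, px in feed:
    alert = on_tick(sym, px)
    if alert:
        print(alert)
```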
It seems like that processor is much more power efficient, so that's really the huge win. It makes more things economically viable.