The CPU has been the heart of the modern computer for decades. As chip design costs and speeds have increased, the tech industry has typically defaulted to dumping more resources into flexible, integrated general purpose processors rather than spending millions on specialized chips for specific tasks. But, citing a paper from Thompson and Spanuth, The Next Platform believes that the era of general purpose computing may be coming to an end.
The meteoric rise of graphics processors is perhaps one of the earliest and most visible examples of the trend. At first, these "semi-specialized" processors only took over very specific graphics workloads from the CPU. Eventually, specialized logic was added to handle video decoding and encoding, general purpose compute, and, more recently, machine learning workloads. But processors specifically tailored for machine learning are already starting to overtake GPUs, while demands from the IoT market are also making custom tailored, efficient processor designs more economically viable. Thanks to cageymaru for the tip.
Thompson and Spanuth offer a mathematical model for determining the cost/benefit of specialization, taking into account the fixed cost of developing custom chips, the chip volume, the speedup delivered by the custom implementation, and the rate of processor improvement. Since the latter is tied to Moore's Law, its slowing pace means that it's getting easier to rationalize specialized chips, even if the expected speedups are relatively modest. "Thus, for many (but not all) applications it will now be economically viable to get specialized processors - at least in terms of hardware," claim the authors. "Another way of seeing this is to consider that during the 2000-2004 period, an application with a market size of ~83,000 processors would have required that specialization provide a 100x speed-up to be worthwhile. In 2008-2013 such a processor would only need a 2x speedup."
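To see why a slower rate of processor improvement favors specialization, consider a toy calculation (not the authors' actual model): a fixed specialized design with a given speedup stays ahead of general-purpose chips only until their compounding improvement catches up. The function below and the improvement rates in it (48% per year for the Moore's Law era, 8% per year for today) are illustrative assumptions, not figures from the paper.

```python
import math

def advantage_window_years(speedup, annual_improvement):
    """Years until general-purpose performance, compounding at
    `annual_improvement` per year, catches a fixed specialized
    design that delivers `speedup` today."""
    return math.log(speedup) / math.log(1.0 + annual_improvement)

# Illustrative rates (assumptions): fast Moore's-Law-era gains vs. today
fast_era = advantage_window_years(2.0, 0.48)  # ~1.8 years of advantage
slow_era = advantage_window_years(2.0, 0.08)  # ~9.0 years of advantage
```

Under these assumed rates, the same 2x speedup buys roughly five times as many years of advantage today as it did when general-purpose chips were improving rapidly, which is why even modest speedups can now justify the fixed cost of a custom chip.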