The need for speed is back. An explosion in the amount of data that needs to be collected and processed is driving a new wave of change in hardware, software and overall system design.
After years of emphasizing power reduction, performance has re-emerged as a top concern in a variety of applications such as smarter cars, wearable devices and cloud data centers. But how to get there has changed significantly. In the past, increased density was the preferred method of delivering power and performance improvements. In effect, the cheapest solution was to throw more transistors and memory bits at a problem. That is no longer the case.
Even 10nm and 7nm may be a stretch for many chipmakers. Beyond that, the semiconductor road map is hazy, partly due to physics and partly because of skepticism that enough companies will be able to afford to develop the next nodes. As a result, chipmakers are examining new hardware and software architectures, machine learning, and better data throughput both inside and outside of devices. And they are doing this on a market-by-market basis, because with limited power budgets, one size no longer fits all applications.
“It’s all about how you optimize bandwidth and latency,” said Mark Papermaster, chief technology officer at AMD. “How big is your pipe feeding those engines? How fast can you move data in and out of those engines? And you have to design a balanced machine. That extends outside the chip, as well, to how you connect to the rest of the world. It’s the same thing on memory and I/O. You have to have enough pipes, or bandwidth, to optimize your latency to ensure that you don’t create bottlenecks.”
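The "balanced machine" idea in the quote above can be illustrated with a toy roofline-model calculation: attainable performance is capped either by the compute engines or by the memory "pipes" feeding them. This sketch is not from the article; the function and all the numbers are hypothetical, chosen only to show where a bottleneck appears.

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, arithmetic_intensity):
    """Classic roofline model: performance is limited by whichever
    is smaller, the compute ceiling or bandwidth times intensity.

    arithmetic_intensity: FLOPs performed per byte moved (FLOP/byte).
    """
    return min(peak_gflops, mem_bw_gbs * arithmetic_intensity)

# Hypothetical accelerator: 1000 GFLOP/s peak compute, 100 GB/s memory bandwidth.
peak, bw = 1000.0, 100.0

# A streaming kernel doing 0.5 FLOP per byte is bandwidth-bound:
low_ai = attainable_gflops(peak, bw, 0.5)    # capped at 50 GFLOP/s by memory

# A dense-math kernel at 20 FLOP/byte hits the compute ceiling instead:
high_ai = attainable_gflops(peak, bw, 20.0)  # capped at 1000 GFLOP/s by compute

print(low_ai, high_ai)
```

In a "balanced" design, in Papermaster's sense, the memory and I/O pipes are sized so that typical workloads sit near the ridge point rather than starving the compute engines.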
Those same principles apply, whether it is a blazing-fast computer or a wearable device. But this also creates a quandary for semiconductor companies: if they develop custom solutions for specific markets, it becomes more difficult to define performance, and possibly more difficult to prove its value.
Article from Ed Sperling, Semiconductor Engineering