Processing in memory (PIM) is one approach to overcoming the von Neumann bottleneck, the limit on throughput imposed by the latency inherent in the standard computer architecture. In the standard model, known as the von Neumann architecture, programs and data are held in memory; the processor and memory are separate, and data moves between the two. In that configuration, latency is unavoidable. Furthermore, although processor speeds have increased significantly in recent years, memory improvements have mostly come in density – the ability to store more data in less space – rather than in transfer rates. As a result, the processor spends an increasing share of its time waiting for data to be fetched from memory: no matter how fast it can compute, its effective throughput is capped by the transfer rate at the bottleneck.
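The cap described above can be sketched with a simple roofline-style calculation. The peak-compute and bandwidth figures below are illustrative assumptions, not measurements of any real processor.

```python
# Roofline-style sketch of the von Neumann bottleneck.
# Both hardware figures are illustrative assumptions.
PEAK_COMPUTE_GFLOPS = 500.0   # hypothetical peak processor throughput
MEMORY_BANDWIDTH_GBS = 25.0   # hypothetical memory transfer rate

def attainable_gflops(flops_per_byte: float) -> float:
    """Attainable throughput is capped by whichever is lower:
    raw compute speed, or how fast memory can feed the processor."""
    return min(PEAK_COMPUTE_GFLOPS, MEMORY_BANDWIDTH_GBS * flops_per_byte)

# A streaming workload (e.g. summing a large array) performs roughly
# 0.125 floating-point operations per byte moved: it is memory-bound,
# so it runs far below the processor's peak.
print(attainable_gflops(0.125))   # 3.125 -- limited by the bottleneck
print(attainable_gflops(100.0))   # 500.0 -- compute-bound
```

The point of the sketch is that for data-intensive workloads, the `min()` is almost always decided by the memory term, which is exactly the imbalance PIM targets.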
In PIM chip fabrication, CMOS logic devices and memory cells are tightly coupled: processor logic is connected directly to the memory stack. Because data no longer has to travel between separate chips, this integration increases effective processing speed and memory transfer rate while reducing latency and power consumption.
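To make the bandwidth benefit concrete, here is a back-of-envelope comparison of the time to stream one pass over a dataset. The dataset size, the bus bandwidth, and the assumed 10x internal bandwidth for PIM are all hypothetical numbers chosen for illustration.

```python
# Illustrative comparison: time to scan a 1 GB dataset when data must
# cross a memory bus vs. when logic sits beside the memory stack (PIM).
# All bandwidth figures are assumptions, not vendor specifications.
DATA_BYTES = 1_000_000_000

def scan_time_seconds(bandwidth_gb_s: float) -> float:
    """Transfer-bound time to stream the whole dataset once."""
    return DATA_BYTES / (bandwidth_gb_s * 1e9)

bus_time = scan_time_seconds(25.0)    # conventional: data crosses the bus
pim_time = scan_time_seconds(250.0)   # PIM: assumed 10x internal bandwidth
print(bus_time, pim_time)             # 0.04 s vs. 0.004 s
```

Under these assumptions the scan is purely transfer-bound, so a 10x gain in usable bandwidth translates directly into a 10x reduction in runtime; lower latency and power draw would be additional savings on top of that.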
According to Anthony Deighton, senior vice president for marketing at QlikTech, 64-bit processors have significantly increased the amount of data that can be held in memory; that capacity, combined with falling memory prices, has spurred the use of in-memory technology in enterprise applications. As memory prices have dropped, processing in memory has become practical for a growing range of workloads. Current applications of PIM technologies include computer graphics, in-memory databases and real-time analytics. In the not-too-distant future, the PIM architecture could extend to personal computers and other computing devices.