Traditional chips, like the one you're probably using to read this article, work with binary data and typically separate the processor from the main memory pool. But in an effort to curb increasingly slow gains in processor performance, researchers at Princeton University have come out with a chip that combines analog and in-memory computing.

The chip, which they claim works with "standard programming languages", has about 590kb of on-die SRAM serving as the main memory pool. Instead of using transistors to do the math, it uses tiny capacitors, which can store a variable amount of charge to represent a wide range of values. The prototype chip is manufactured on a 65nm process, is relatively small, and, unsurprisingly, specifically targets AI workloads. The researchers say this approach saves power and offers better performance than traditional methods, and that similar AI accelerators could be used in phones or medical devices.

Capacitors can also be made very precisely on a chip, much more so than transistors. The new design pairs capacitors with conventional cells of static random-access memory (SRAM). The combination of capacitors and SRAM performs computations on the data in the analog (not digital) domain, yet in ways that are reliable and amenable to programmability features. The memory circuits can now perform calculations as directed by the chip's central processing unit.

"In-memory computing has been showing a lot of promise in recent years, in really addressing the energy and speed of computing systems," said Naveen Verma, the Princeton professor who led the research. "But the big question has been whether that promise would scale and be usable by system designers towards all of the AI applications we really care about. That makes programmability necessary."
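To make the capacitor idea concrete, here is a minimal Python sketch of charge-domain multiply-accumulate, the kind of analog operation described above. The function name, unit capacitance, and reference voltage are illustrative assumptions, not details of the Princeton design.

```python
def capacitor_mac(inputs, weights, c_unit=1e-15, v_ref=1.0):
    """Model an analog multiply-accumulate (MAC) with capacitors.

    Each capacitor deposits charge Q = C * V only when both the input
    and the weight are 1; summing the charges on a shared line yields
    the dot product of the two bit vectors. c_unit and v_ref are
    hypothetical device parameters.
    """
    total_charge = sum(c_unit * v_ref * x * w for x, w in zip(inputs, weights))
    # A real chip would digitize the line voltage with an ADC; here we
    # simply divide out the unit charge to recover the digital result.
    return round(total_charge / (c_unit * v_ref))

print(capacitor_mac([1, 0, 1, 1], [1, 1, 0, 1]))  # dot product -> 2
```

Because the summation happens where the data is stored, no operands shuttle between memory and a separate arithmetic unit, which is the source of the power and speed advantage the researchers describe.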