Don't have enough AI in your IoT devices yet? Eta Compute has you covered. At Arm TechCon, the California-based startup demonstrated low-power processors that can train their own neural networks, an ability the company calls "one of the holy grails of machine learning." Typically, neural networks for low-power devices are trained on powerful external hardware, such as high-performance GPUs in data centers. At best, the smart device runs the inference algorithm itself, and many services do even that in a data center. But Eta Compute has managed to squeeze a chip that can train neural networks into a 50-to-500-microwatt power envelope. Like BrainChip, the device uses a spiking neural network architecture. Production is expected to begin in early 2019.

The TENSAI chip consists primarily of an Arm Cortex-M3 core and an NXP CoolFlux digital signal processor core. The DSP has two arrays dedicated to deep learning's main computation: multiply and accumulate. Both cores are implemented using what's called asynchronous, subthreshold technology. It allows them to operate at supply voltages as low as 0.2 volts (compared with the roughly 0.9 V most chips use), and with a clock period that can be scaled up or down to suit the computational load.

The feat required a good deal of analog design work, even for the digital parts, says Eta Compute cofounder Paul Washkewicz. "We spent a lot of engineering time on analog; it's a source of advantage."
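The multiply-accumulate (MAC) operation that those DSP arrays accelerate is just a running sum of products, the inner loop of every neural-network layer. A minimal Python sketch (the function name is illustrative, not part of any Eta Compute API):

```python
def mac_dot(weights, activations):
    """Multiply-accumulate: compute sum(w * a) over paired inputs.
    Dedicated DSP MAC arrays do this one step per cycle in hardware;
    this plain-Python loop shows the same arithmetic."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one multiply-accumulate step per weight
    return acc

# A layer with weights [1, 2, 3] applied to activations [4, 5, 6]:
result = mac_dot([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6 = 32
```

Because a network's forward and backward passes are dominated by exactly this operation, dedicating silicon to it is what makes on-chip training plausible within a microwatt budget.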
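To see why subthreshold operation matters, note that CMOS switching power scales roughly with the square of the supply voltage (the textbook approximation P ≈ C·V²·f). A back-of-the-envelope comparison using the article's 0.2 V and 0.9 V figures; the formula is a standard first-order model, not Eta Compute's own data:

```python
def dynamic_power_ratio(v_low, v_high):
    """Ratio of CMOS dynamic switching power at two supply voltages,
    assuming equal clock frequency and switched capacitance
    (first-order model: P ~ C * V^2 * f)."""
    return (v_low / v_high) ** 2

# 0.2 V subthreshold operation vs. a conventional 0.9 V supply:
ratio = dynamic_power_ratio(0.2, 0.9)
print(f"{ratio:.3f}")  # about 0.049, i.e. roughly a 20x reduction
```

Leakage and other effects complicate the real picture, but the quadratic voltage term is the main reason a 0.2 V design can fit into a power envelope measured in microwatts.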