embedded vision
The Akida NSoC provides a low-power, high-throughput solution for complex object-classification applications. It can connect directly to a variety of sensor inputs, including lidar, conventional pixel-based image sensors, and dynamic vision sensors (DVS). The on-chip data-conversion complex transforms these data types into spikes, which are then processed by the Akida Neuron Fabric. The fabric hosts the spiking neural network (SNN) model; in this application, the SNN model is crafted to perform object classification. All of these functions are housed in a compact package that consumes less than 1 watt. An SNN model trained for object classification on the CIFAR-10 dataset achieves 1,400 images per second per watt (img/s/W) on the Akida NSoC.
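To make the data-to-spike conversion step more concrete, the sketch below shows one common encoding scheme, rate coding, where brighter pixels emit proportionally more spikes over a time window. This is a generic illustration of spike encoding, not BrainChip's actual on-chip conversion algorithm; the function name and parameters are hypothetical.

```python
import random

def rate_encode(pixels, window=100, seed=0):
    """Rate-code 8-bit pixel intensities into spike counts over a time window.

    Each pixel fires with probability (intensity / 255) at every timestep,
    so brighter pixels produce more spikes. This is a simplified stand-in
    for the sensor-to-spike conversion stage, not the Akida implementation.
    """
    rng = random.Random(seed)
    counts = []
    for p in pixels:
        prob = p / 255.0  # per-timestep firing probability
        counts.append(sum(1 for _ in range(window) if rng.random() < prob))
    return counts

# A dark pixel (0) never spikes; a saturated pixel (255) spikes every timestep.
counts = rate_encode([0, 64, 255], window=100)
```

Downstream, an SNN classifier would consume these spike counts (or the full spike trains) instead of raw pixel values, which is what allows event-driven hardware like the Neuron Fabric to skip computation for inactive inputs.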