Abstract
Spiking neural networks are expected to bring high resource, power, and energy efficiency to machine learning hardware implementations, and could thus facilitate the integration of Artificial Intelligence into highly constrained embedded systems, such as image classification in drones or satellites. While their logic resource efficiency is widely accepted in the literature, their energy efficiency remains debated. In this article, a novel high-level metric, the Synaptic Activity Ratio (SAR), is used to characterize the expected energy-efficiency gain of hardware implementations that use Spiking Neural Networks (SNNs) instead of Formal Neural Networks (FNNs). This metric is applied to a selection of classification tasks involving both images and 1D signals. Moreover, a high-level estimator of logic resources, power usage, execution time, and energy is introduced for neural network hardware implementations on FPGA, based on four existing accelerator architectures that cover sequential and parallel implementation paradigms in both the spiking and formal coding domains. This estimator is used to evaluate how reliably the Synaptic Activity Ratio characterizes the energy-efficiency gain of spiking neural networks on the proposed dataset benchmark. The study concludes that the spiking domain offers significant power and energy savings in sequential implementations, and shows that synaptic activity is a critical factor that must be taken into account when designing low-energy systems.
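The exact definition of the Synaptic Activity Ratio is not given in this excerpt. A minimal sketch of the idea, assuming SAR relates the number of synaptic events actually triggered by spikes in an SNN to the number of synaptic operations a formal network performs for the same inference (all layer sizes and spike counts below are illustrative, not from the paper):

```python
def formal_synops(layer_sizes):
    # In an FNN, every synapse is evaluated once per inference:
    # total MACs = sum of fan_in * fan_out over consecutive layers.
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

def spiking_synops(spike_counts, layer_sizes):
    # In an SNN, a synapse is only exercised when its presynaptic
    # neuron fires: each spike in layer l triggers fan_out accumulations.
    return sum(s * fan_out for s, fan_out in zip(spike_counts, layer_sizes[1:]))

def synaptic_activity_ratio(spike_counts, layer_sizes):
    # Hypothetical SAR: spiking synaptic events per formal synaptic
    # operation; values below 1 suggest an energy advantage for the SNN.
    return spiking_synops(spike_counts, layer_sizes) / formal_synops(layer_sizes)

# Toy 784-128-10 MLP. spike_counts[l] = total spikes emitted by layer l
# over one inference (input encoding spikes first, then hidden spikes).
layer_sizes = [784, 128, 10]
spikes = [200, 40]  # sparse activity: 200 input spikes, 40 hidden spikes
sar = synaptic_activity_ratio(spikes, layer_sizes)
```

With sparse spiking activity the ratio falls well below 1, which is the regime in which a sequential spiking accelerator would be expected to save energy; dense spiking (many spikes per neuron per inference) can push the ratio above 1 and erase the advantage.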
Synaptic Activity and Hardware Footprint of Spiking Neural Networks in Digital Neuromorphic Systems