Synaptic Activity and Hardware Footprint of Spiking Neural Networks in Digital Neuromorphic Systems

Published: 12 December 2022

Abstract

Spiking neural networks are expected to bring high resource, power, and energy efficiency to machine learning hardware implementations. In this regard, they could facilitate the integration of Artificial Intelligence into highly constrained embedded systems, such as image classification in drones or satellites. While their logic resource efficiency is widely accepted in the literature, their energy efficiency remains debated. In this article, a novel high-level metric, the Synaptic Activity Ratio (SAR), is used to characterize the expected energy efficiency gain of Spiking Neural Networks (SNN) over Formal Neural Networks (FNN) in hardware implementations. This metric is applied to a selection of classification tasks involving images and 1D signals. Moreover, a high-level estimator of logic resources, power usage, execution time, and energy is introduced for neural network hardware implementations on FPGA, based on four existing accelerator architectures covering both sequential and parallel implementation paradigms in both spiking and formal coding domains. This estimator is used to evaluate how reliably the Synaptic Activity Ratio characterizes spiking neural network energy efficiency gains on the proposed dataset benchmark. This study leads to the conclusion that the spiking domain offers significant power and energy savings in sequential implementations. It also shows that synaptic activity is a critical factor that must be taken into account when targeting low-energy systems.
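The notion of a Synaptic Activity Ratio can be illustrated with a small sketch. The definition below is an assumption for illustration only, and may differ from the article's exact formulation: SAR is taken as the number of spike-triggered synaptic operations per inference, normalized by the number of multiply-accumulate operations the equivalent formal network would perform (one per synapse), so SAR < 1 indicates a potential operation-count advantage for the spiking domain.

```python
# Hypothetical illustration of a Synaptic Activity Ratio (SAR) style metric.
# Assumed definition (not taken verbatim from the article): spiking synaptic
# operations per inference, divided by the one-MAC-per-synapse cost of the
# equivalent formal (non-spiking) network.

def synaptic_activity_ratio(spike_counts, fan_outs):
    """spike_counts[i]: spikes emitted by neuron i during one inference.
    fan_outs[i]: number of outgoing synapses of neuron i."""
    spiking_ops = sum(s * f for s, f in zip(spike_counts, fan_outs))
    formal_ops = sum(fan_outs)  # one MAC per synapse in the formal network
    return spiking_ops / formal_ops

# Sparse activity (few spikes per neuron) drives SAR below 1.
print(synaptic_activity_ratio([0, 1, 1], [10, 10, 5]))  # 0.6
```

Under this toy definition, a network in which every neuron fires exactly once per inference has SAR = 1, i.e., the same synaptic operation count as its formal counterpart; energy savings then hinge on how sparse the spiking activity actually is on a given dataset.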



Published in

ACM Transactions on Embedded Computing Systems, Volume 21, Issue 6
November 2022, 498 pages
ISSN: 1539-9087
EISSN: 1558-3465
DOI: 10.1145/3561948
Editor: Tulika Mitra

            ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 12 December 2022
• Online AM: 21 March 2022
• Accepted: 19 February 2022
• Revised: 4 February 2022
• Received: 15 July 2021

Published in TECS Volume 21, Issue 6


Qualifiers

• research-article
• Refereed
