SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs

Published: 09 December 2022

Abstract

Compared to conventional artificial neural networks, spiking neural networks (SNNs) are more biologically plausible and require less computation, thanks to the event-driven nature of their spiking neurons. However, the default asynchronous execution of SNNs also poses great challenges to accelerating them on FPGAs.

In this work, we present a novel synchronous approach for rate-encoding-based SNNs, which is more hardware-friendly than conventional asynchronous approaches. We first quantitatively evaluate and mathematically prove that the proposed synchronous approach and asynchronous implementation alternatives of rate-encoding-based SNNs achieve similar inference accuracy, and we highlight the computational performance advantage of SyncNN over an asynchronous approach. We also design and implement the SyncNN framework to accelerate SNNs on Xilinx ARM-FPGA SoCs in a synchronous fashion. To improve computation and memory access efficiency, we first quantize the network weights to 16-bit, 8-bit, and 4-bit fixed-point values using SNN-friendly quantization techniques. Moreover, instead of using the common binary encoding (i.e., 1 for a spike and 0 for no spike), we encode only the activated neurons by recording their positions and corresponding numbers of spikes, fully exploiting the event-driven characteristics of SNNs.
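The sparse spike encoding described above can be illustrated with a minimal NumPy sketch. The function names and data layout below are our own assumptions for illustration, not the paper's actual implementation: activated neurons are recorded as (position, spike count) pairs, and the synaptic accumulation then touches only the columns of active input neurons.

```python
import numpy as np

def encode_active_neurons(spike_counts):
    """Return [(neuron index, #spikes)] for neurons that fired at least once,
    instead of a dense binary spike map."""
    active = np.nonzero(spike_counts)[0]
    return [(int(i), int(spike_counts[i])) for i in active]

def accumulate_potentials(weights, encoded):
    """Update membrane potentials from the sparse encoding: only the weight
    columns of active input neurons are read, scaled by their spike counts."""
    v = np.zeros(weights.shape[0], dtype=weights.dtype)
    for idx, cnt in encoded:
        v += weights[:, idx] * cnt
    return v

# Example: 4 input neurons, only neurons 1 and 3 fired (3 and 1 spikes).
counts = np.array([0, 3, 0, 1])
enc = encode_active_neurons(counts)   # [(1, 3), (3, 1)]
w = np.array([[1.0, 2.0, 0.0, 4.0],
              [0.0, 1.0, 1.0, 0.0]])
v = accumulate_potentials(w, enc)     # [10.0, 3.0]
```

Since most neurons in a rate-coded SNN fire rarely, this makes the per-layer work proportional to the number of active neurons rather than the layer width, which is what makes the event-driven encoding attractive on FPGAs.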

For the encoded neurons, which exhibit dynamic and irregular access patterns, we design parameterized compute engines to accelerate their processing on the FPGA, exploring various parallelization strategies and memory access optimizations. Our experimental results on multiple Xilinx ARM-FPGA SoC boards demonstrate that SyncNN scales to multiple networks, such as LeNet, Network in Network, and VGG, on various datasets such as MNIST, SVHN, and CIFAR-10. SyncNN not only achieves competitive accuracy (99.6%) but also achieves state-of-the-art performance (13,086 frames per second) on the MNIST dataset. Finally, we compare SyncNN with conventional CNNs running on Vitis AI and find that SyncNN achieves similar accuracy and better performance than Vitis AI for image classification with small networks.


• Published in

ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 4
December 2022, 476 pages
ISSN: 1936-7406
EISSN: 1936-7414
DOI: 10.1145/3540252
Editor: Deming Chen

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 9 December 2022
            • Online AM: 9 February 2022
            • Accepted: 25 January 2022
            • Revised: 18 December 2021
            • Received: 17 August 2021

            Qualifiers

            • research-article
            • Refereed
